GPT-OSS 120B is OpenAI’s open-weight 120B parameter language model, offering powerful text generation capabilities with advanced reasoning and instruction-following abilities.

Try in playground

Test GPT-OSS 120B in the Runpod Hub playground.
Endpoint: https://api.runpod.ai/v2/gpt-oss-120b/runsync
Pricing: $10.00 per 1M tokens
Type: Text generation
This endpoint is fully compatible with the OpenAI API. See the OpenAI compatibility examples below.

Request

All parameters are passed within the input object in the request body.
input.prompt
string
required
Prompt for text generation.
input.max_tokens
integer
default:"512"
Maximum number of tokens to output.
input.temperature
float
default:"0.7"
Randomness of the output. Lower values make output more predictable and deterministic. Range: 0.0-1.0.
input.top_p
float
Nucleus sampling threshold. Samples from the smallest set of tokens whose cumulative probability exceeds this threshold.
input.top_k
integer
Restricts sampling to the K most probable tokens.
input.stop
string
Stops generation if the given string is encountered.
curl -X POST "https://api.runpod.ai/v2/gpt-oss-120b/runsync" \
  -H "Authorization: Bearer $RUNPOD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "prompt": "Explain the concept of quantum entanglement in simple terms:",
      "max_tokens": 512,
      "temperature": 0.7
    }
  }'
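The same request can be sent from Python using only the standard library. This is a minimal sketch of the curl call above; it assumes your API key is exported as the `RUNPOD_API_KEY` environment variable and only sends the request when that variable is set.

```python
import json
import os
import urllib.request

ENDPOINT = "https://api.runpod.ai/v2/gpt-oss-120b/runsync"

# Request body: all generation parameters go inside the "input" object.
payload = {
    "input": {
        "prompt": "Explain the concept of quantum entanglement in simple terms:",
        "max_tokens": 512,
        "temperature": 0.7,
    }
}

api_key = os.environ.get("RUNPOD_API_KEY")
if api_key:
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        print(json.load(resp))
```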

Response

id
string
Unique identifier for the request.
status
string
Request status. Returns COMPLETED on success, FAILED on error.
delayTime
integer
Time in milliseconds the request spent in queue before processing began.
executionTime
integer
Time in milliseconds the model took to generate the response.
output
array
Array of generation results, each containing the generated text, cost, and usage information.
output.choices
array
Array containing the generated text.
output.cost
float
Cost of the generation in USD.
output.usage
object
Token usage information with input and output counts.
{
  "delayTime": 30,
  "executionTime": 4521,
  "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1",
  "output": [
    {
      "choices": [
        {
          "tokens": [
            "Quantum entanglement is a phenomenon where two particles become connected in such a way that measuring one particle instantly affects the other, no matter how far apart they are..."
          ]
        }
      ],
      "cost": 0.005,
      "usage": {
        "input": 15,
        "output": 485
      }
    }
  ],
  "status": "COMPLETED"
}
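Given the response shape above, extracting the generated text takes a few dictionary and list lookups. A minimal sketch, using an abbreviated copy of the sample response (the values are illustrative):

```python
# Sample response in the shape documented above (values are illustrative).
response = {
    "delayTime": 30,
    "executionTime": 4521,
    "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1",
    "output": [
        {
            "choices": [
                {"tokens": ["Quantum entanglement is a phenomenon where ..."]}
            ],
            "cost": 0.005,
            "usage": {"input": 15, "output": 485},
        }
    ],
    "status": "COMPLETED",
}

# Check for success before reading the result; failed requests
# return status "FAILED" and may not carry an output payload.
if response["status"] == "COMPLETED":
    # The generated text sits at output[0].choices[0].tokens[0].
    text = response["output"][0]["choices"][0]["tokens"][0]
    print(text)
```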

OpenAI API compatibility

GPT-OSS 120B is fully compatible with the OpenAI API format. You can use the OpenAI Python client to interact with this endpoint.
Python (OpenAI SDK)
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["RUNPOD_API_KEY"],
    base_url="https://api.runpod.ai/v2/gpt-oss-120b/openai/v1",
)

response = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "Explain the concept of quantum entanglement in simple terms.",
        },
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
For streaming responses, add stream=True:
Python (Streaming)
response = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[
        {"role": "user", "content": "Write a short story about space exploration."}
    ],
    max_tokens=512,
    stream=True,
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
For more details, see Send vLLM requests and the OpenAI API compatibility guide.

Cost calculation

GPT-OSS 120B charges $10.00 per 1M tokens. Example costs:

Tokens       Cost
1,000        $0.01
10,000       $0.10
100,000      $1.00
1,000,000    $10.00
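The table above is simple linear pricing, so estimating a request's cost is one multiplication. A minimal sketch, assuming the billed total is the sum of the `input` and `output` counts from the response's usage object:

```python
PRICE_PER_MILLION_TOKENS = 10.00  # USD, per the pricing above


def estimate_cost(total_tokens: int) -> float:
    """Estimated USD cost for a given number of billed tokens."""
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS


# Assumed usage counts, mirroring the sample response's usage object.
usage = {"input": 15, "output": 485}
print(estimate_cost(usage["input"] + usage["output"]))  # 0.005
```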