Endpoint operations

RunPod's endpoint operations allow you to control the complete lifecycle of your Serverless workloads. This guide demonstrates how to submit, monitor, manage, and retrieve results from jobs running on your Serverless endpoints.

Operation overview

  • /run: Submit an asynchronous job that processes in the background while you receive an immediate job ID.
  • /runsync: Submit a synchronous job and wait for the complete results in a single response.
  • /status: Check the current status, execution details, and results of a previously submitted job.
  • /stream: Receive incremental results from a job as they become available.
  • /cancel: Stop a job that is in progress or waiting in the queue.
  • /retry: Requeue a failed or timed-out job using the same job ID and input parameters.
  • /purge-queue: Clear all pending jobs from the queue without affecting jobs already in progress.
  • /health: Monitor the operational status of your endpoint, including worker and job statistics.

Submitting jobs

RunPod offers two primary methods for submitting jobs, each suited for different use cases.

Asynchronous jobs (/run)

Use asynchronous jobs for longer-running tasks that don't require immediate results. The request returns immediately with a job ID, and the job is processed in the background. This is particularly useful for operations that require significant processing time, or when you want to manage multiple jobs concurrently.

  • Payload limit: 10 MB
  • Job availability: Results are available for 30 minutes after completion
# Submit an asynchronous job
curl -X POST https://api.runpod.ai/v2/{endpoint_id}/run \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer ${API_KEY}' \
-d '{"input": {"prompt": "Your prompt"}}'
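If you're assembling the request in code rather than curl, the same pieces can be built with a small helper. This is a sketch, not an official SDK call; the endpoint ID, API key, and helper name below are placeholders:

```python
import json

# Sketch: build the pieces of a POST /run submission. The endpoint ID and
# API key passed in are placeholders, and build_run_request is our own
# helper name, not part of any RunPod SDK.
def build_run_request(endpoint_id, api_key, prompt):
    """Return (url, headers, body) for an asynchronous /run submission."""
    url = f"https://api.runpod.ai/v2/{endpoint_id}/run"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({"input": {"prompt": prompt}})
    return url, headers, body

# Send with any HTTP client, e.g. requests.post(url, headers=headers, data=body).
# A successful submission returns JSON containing the job ID to use with /status.
```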

Synchronous jobs (/runsync)

Use synchronous jobs for shorter tasks where you need immediate results. A synchronous job waits for completion and returns the complete result in a single response, eliminating the need for status polling. This works best for quick operations (under 30 seconds).

  • Payload limit: 20 MB
  • Job availability: Results are available for 60 seconds after completion
# Submit a synchronous job
curl -X POST https://api.runpod.ai/v2/{endpoint_id}/runsync \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer ${API_KEY}' \
-d '{"input": {"prompt": "Your prompt"}}'

Monitoring jobs

Checking job status (/status)

For asynchronous jobs, you can check the status at any time using the job ID. The status endpoint provides:

  • Current job state (IN_QUEUE, IN_PROGRESS, COMPLETED, FAILED, etc.).
  • Execution statistics (queue delay, processing time).
  • Job output (if completed).
# Check job status
curl -X GET https://api.runpod.ai/v2/{endpoint_id}/status/{job_id} \
-H 'Authorization: Bearer ${API_KEY}'
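The polling loop this implies can be sketched with an injected fetch function, which keeps the backoff logic testable; wire `fetch_status` to a real GET /status call in practice. The terminal-status set below is an assumption based on the states listed above:

```python
import time

# Assumed terminal job states, based on the statuses this guide lists.
TERMINAL = {"COMPLETED", "FAILED", "CANCELLED", "TIMED_OUT"}

def poll_until_done(fetch_status, initial_delay=1.0, max_delay=30.0,
                    sleep=time.sleep):
    """Poll /status with exponential backoff until the job reaches a
    terminal state, then return the final status payload.

    fetch_status: any callable returning the parsed /status JSON.
    """
    delay = initial_delay
    while True:
        payload = fetch_status()
        if payload.get("status") in TERMINAL:
            return payload
        sleep(delay)
        delay = min(delay * 2, max_delay)  # back off between polls
```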
Tip: You can set the time-to-live (TTL) for an individual job by appending a ttl query parameter when checking its status. The value is in milliseconds: for example, https://api.runpod.ai/v2/{endpoint_id}/status/{job_id}?ttl=6000 sets the job's TTL to 6 seconds. Use this when you want the system to remove a job result sooner than the default retention time.

Streaming results (/stream)

For jobs that generate output incrementally or for very large outputs, use the stream endpoint to receive partial results as they become available. This is especially useful for:

  • Text generation tasks where you want to display output as it's created
  • Long-running jobs where you want to show progress
  • Large outputs that benefit from incremental processing
# Stream job results
curl -X GET https://api.runpod.ai/v2/{endpoint_id}/stream/{job_id} \
-H 'Authorization: Bearer ${API_KEY}'
note

The maximum size for a single streamed payload chunk is 1 MB. Larger outputs will be split across multiple chunks.
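Consuming the stream amounts to repeatedly fetching until the job finishes, yielding each partial output as it arrives. The response shape assumed below ({"status": ..., "stream": [{"output": ...}]}) is an assumption; verify it against your endpoint's actual responses:

```python
# Sketch: drain /stream via an injected fetch function. fetch_stream is any
# callable returning the parsed /stream JSON; the "stream"/"output" field
# names are assumptions about the response shape, not confirmed here.
def iter_stream(fetch_stream):
    """Yield each partial output until the job reaches a final status."""
    while True:
        payload = fetch_stream()
        for chunk in payload.get("stream", []):
            yield chunk["output"]
        if payload.get("status") in ("COMPLETED", "FAILED", "CANCELLED"):
            return
```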

Endpoint health monitoring (/health)

The health endpoint provides a quick overview of your endpoint's operational status. Use it to monitor worker availability, track job queue status, identify potential bottlenecks, and determine if scaling adjustments are needed.

# Check endpoint health
curl -X GET https://api.runpod.ai/v2/{endpoint_id}/health \
-H 'Authorization: Bearer ${API_KEY}'
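A health check like this is most useful when turned into a scaling signal. The sketch below assumes the response contains worker and job counters such as workers.ready and jobs.inQueue; these field names and the saturation threshold are assumptions, so check them against your endpoint's actual /health response:

```python
# Sketch: flag a backed-up endpoint from /health data. The "workers"/"jobs"
# field names and the per-worker queue threshold are assumptions.
def queue_is_backed_up(health: dict, max_queue_per_worker: int = 4) -> bool:
    """Return True if queued jobs outpace ready workers."""
    ready = health.get("workers", {}).get("ready", 0)
    queued = health.get("jobs", {}).get("inQueue", 0)
    if ready == 0:
        return queued > 0  # anything queued with no workers is a problem
    return queued > ready * max_queue_per_worker
```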

Managing jobs

Cancelling jobs (/cancel)

Cancel jobs that are no longer needed or taking too long to complete. This operation stops jobs that are in progress, removes jobs from the queue if they are not yet started, and returns immediately with the job's canceled status.

# Cancel a job
curl -X POST https://api.runpod.ai/v2/{endpoint_id}/cancel/{job_id} \
-H 'Authorization: Bearer ${API_KEY}'

Retrying failed jobs (/retry)

Retry jobs that have failed or timed out without having to submit a new job request. This operation maintains the same job ID for tracking and requeues the job with the original input parameters, removing the previous output (if any). It can only be used for jobs with a FAILED or TIMED_OUT status.

# Retry a failed job
curl -X POST https://api.runpod.ai/v2/{endpoint_id}/retry/{job_id} \
-H 'Authorization: Bearer ${API_KEY}'
Important: Job results expire after a set period:

  • Asynchronous jobs (/run): Results available for 30 minutes
  • Synchronous jobs (/runsync): Results available for 1 minute

Once expired, jobs cannot be retried.
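These preconditions can be checked before calling /retry at all. A sketch using only the statuses and retention windows stated in this guide (the helper name is our own):

```python
# Retry preconditions from this guide: the job must be FAILED or TIMED_OUT,
# and its result must not have expired (30 min for /run, 1 min for /runsync).
RETRYABLE = {"FAILED", "TIMED_OUT"}
WINDOW_SECONDS = {"run": 30 * 60, "runsync": 60}

def can_retry(status: str, submitted_via: str, age_seconds: float) -> bool:
    """Return True if a /retry call could succeed for this job."""
    return status in RETRYABLE and age_seconds < WINDOW_SECONDS[submitted_via]
```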

Purging the queue (/purge-queue)

Clear all pending jobs from the queue when you need to reset or cancel multiple jobs at once. This is useful for error recovery, clearing outdated requests, resetting after configuration changes, and managing resource allocation.

# Purge the job queue
curl -X POST https://api.runpod.ai/v2/{endpoint_id}/purge-queue \
-H 'Authorization: Bearer ${API_KEY}'
Caution: The purge-queue operation only affects jobs waiting in the queue. Jobs already in progress will continue to run.

Rate limits and quotas

RunPod enforces rate limits to ensure fair platform usage. These limits apply per endpoint and operation:

| Operation | Method | Rate limit | Concurrent limit |
|---|---|---|---|
| /run | POST | 1000 requests per 10 seconds | 200 concurrent |
| /runsync | POST | 2000 requests per 10 seconds | 400 concurrent |
| /status, /status-sync, /stream | GET/POST | 2000 requests per 10 seconds | 400 concurrent |
| /cancel | POST | 100 requests per 10 seconds | 20 concurrent |
| /purge-queue | POST | 2 requests per 10 seconds | N/A |
| /openai/* | POST | 2000 requests per 10 seconds | 400 concurrent |
| /requests | GET | 10 requests per 10 seconds | 2 concurrent |

In addition to the per-operation limits above, requests receive a 429 (Too Many Requests) response when the job queue is saturated, that is, when both of the following are true:

  • Queue size exceeds 50 jobs, AND
  • Queue size exceeds endpoint.WorkersMax * 500

Exceeding either the rate limits or these queue thresholds results in HTTP 429 responses. Implement retry logic with exponential backoff in your applications to handle rate limiting gracefully.
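The suggested retry-with-exponential-backoff can be sketched as follows. `send` is any callable returning an HTTP status code (wire it to a real request); the jitter factor and defaults are illustrative choices, not RunPod requirements:

```python
import random
import time

# Sketch: retry a request on HTTP 429 with exponential backoff plus jitter.
# base, cap, and the 0.5-1.0 jitter range are illustrative defaults.
def send_with_backoff(send, max_attempts=5, base=0.5, cap=30.0,
                      rng=random.random, sleep=time.sleep):
    """Call send() up to max_attempts times, backing off after each 429.
    Returns the first non-429 status, or 429 if attempts are exhausted."""
    for attempt in range(max_attempts):
        status = send()
        if status != 429:
            return status
        delay = min(cap, base * (2 ** attempt)) * (0.5 + rng() / 2)
        sleep(delay)
    return 429
```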

Best practices

  • Use asynchronous endpoints for jobs that take more than a few seconds to complete.
  • Implement polling with backoff when checking status of asynchronous jobs.
  • Set appropriate timeouts in your client applications.
  • Monitor endpoint health regularly to detect issues early.
  • Implement error handling for all API calls.
  • Use webhooks for notification-based workflows instead of polling. See Send requests for implementation details.
  • Cancel unneeded jobs to free up resources and reduce costs.

Troubleshooting

| Issue | Possible causes | Solutions |
|---|---|---|
| Job stuck in queue | No available workers, max workers limit reached | Increase max workers, check endpoint health |
| Timeout errors | Job takes longer than execution timeout | Increase timeout in job policy, optimize job processing |
| Failed jobs | Worker errors, input validation issues | Check logs, verify input format, retry with fixed input |
| Rate limiting | Too many requests in short time | Implement backoff strategy, batch requests when possible |
| Missing results | Results expired | Retrieve results within expiration window (30 min for async, 1 min for sync) |