Manage Pods
Authentication
RunPod uses API keys to authenticate all API requests. Manage your API keys in Settings.
GraphQL API Spec
For the full list of queries, mutations, fields, and inputs, see the GraphQL Spec.
Create Pods
A Pod consists of the following resources:
- 0 or more GPUs - A Pod can be started with 0 GPUs for the purpose of accessing data, though GPU-accelerated functions and web services will not work.
- vCPU
- System RAM
- Container Disk
  - Temporary; it is removed when the Pod is stopped or terminated.
  - You pay for the container disk only while the Pod is running.
- Instance Volume
  - Data persists when you reset or stop a Pod; the volume is removed only when the Pod is terminated.
  - You pay for volume storage even while the Pod is stopped.
Create On-Demand Pod
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podFindAndDeployOnDemand( input: { cloudType: ALL, gpuCount: 1, volumeInGb: 40, containerDiskInGb: 40, minVcpuCount: 2, minMemoryInGb: 15, gpuTypeId: \"NVIDIA RTX A6000\", name: \"RunPod Tensorflow\", imageName: \"runpod/tensorflow\", dockerArgs: \"\", ports: \"8888/http\", volumeMountPath: \"/workspace\", env: [{ key: \"JUPYTER_PASSWORD\", value: \"rn51hunbpgtltcpac3ol\" }] } ) { id imageName env machineId machine { podHostId } } }"}'
mutation {
podFindAndDeployOnDemand(
input: {
cloudType: ALL
gpuCount: 1
volumeInGb: 40
containerDiskInGb: 40
minVcpuCount: 2
minMemoryInGb: 15
gpuTypeId: "NVIDIA RTX A6000"
name: "RunPod Tensorflow"
imageName: "runpod/tensorflow"
dockerArgs: ""
ports: "8888/http"
volumeMountPath: "/workspace"
env: [{ key: "JUPYTER_PASSWORD", value: "rn51hunbpgtltcpac3ol" }]
}
) {
id
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podFindAndDeployOnDemand": {
"id": "50qynxzilsxoey",
"imageName": "runpod/tensorflow",
"env": [
"JUPYTER_PASSWORD=rn51hunbpgtltcpac3ol"
],
"machineId": "hpvdausak8xb",
"machine": {
"podHostId": "50qynxzilsxoey-64410065"
}
}
}
}
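The same request can be issued from Python's standard library instead of hand-escaping a cURL string. This is a minimal sketch; the `build_payload` and `run_mutation` helper names are ours, not part of the RunPod API:

```python
import json
import urllib.request

API_URL = "https://api.runpod.io/graphql"

def build_payload(mutation: str) -> bytes:
    # Wrap a GraphQL mutation string in the JSON body the endpoint expects.
    return json.dumps({"query": mutation}).encode()

def run_mutation(api_key: str, mutation: str) -> dict:
    # Hypothetical helper: POST the mutation and return the parsed response.
    req = urllib.request.Request(
        f"{API_URL}?api_key={api_key}",
        data=build_payload(mutation),
        headers={"content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Letting `json.dumps` produce the body avoids the nested quote escaping seen in the cURL example.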
Create Spot Pod
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podRentInterruptable( input: { bidPerGpu: 0.2, cloudType: SECURE, gpuCount: 1, volumeInGb: 40, containerDiskInGb: 40, minVcpuCount: 2, minMemoryInGb: 15, gpuTypeId: \"NVIDIA RTX A6000\", name: \"RunPod Pytorch\", imageName: \"runpod/pytorch\", dockerArgs: \"\", ports: \"8888/http\", volumeMountPath: \"/workspace\", env: [{ key: \"JUPYTER_PASSWORD\", value: \"vunw9ybnzqwpia2795p2\" }] } ) { id imageName env machineId machine { podHostId } } }"}'
mutation {
podRentInterruptable(
input: {
bidPerGpu: 0.2
cloudType: SECURE
gpuCount: 1
volumeInGb: 40
containerDiskInGb: 40
minVcpuCount: 2
minMemoryInGb: 15
gpuTypeId: "NVIDIA RTX A6000"
name: "RunPod Pytorch"
imageName: "runpod/pytorch"
dockerArgs: ""
ports: "8888/http"
volumeMountPath: "/workspace"
env: [{ key: "JUPYTER_PASSWORD", value: "vunw9ybnzqwpia2795p2" }]
}
) {
id
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podRentInterruptable": {
"id": "fkjbybgpwuvmhk",
"imageName": "runpod/pytorch",
"env": [
"JUPYTER_PASSWORD=vunw9ybnzqwpia2795p2"
],
"machineId": "hpvdausak8xb",
"machine": {
"podHostId": "fkjbybgpwuvmhk-64410065"
}
}
}
}
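Because GraphQL string literals use the same escaping rules as JSON, `json.dumps` can safely quote user-supplied values when you assemble a mutation programmatically. A sketch under that assumption (the `spot_mutation` helper and its abbreviated field list are illustrative, not an official client):

```python
import json

def quote(value: str) -> str:
    # GraphQL string literals share JSON's quoting/escaping rules.
    return json.dumps(value)

def spot_mutation(bid: float, gpu_type: str, name: str, image: str) -> str:
    # Build a trimmed-down podRentInterruptable mutation string.
    return (
        "mutation { podRentInterruptable( input: { "
        f"bidPerGpu: {bid}, cloudType: SECURE, gpuCount: 1, "
        f"gpuTypeId: {quote(gpu_type)}, name: {quote(name)}, "
        f"imageName: {quote(image)} "
        "} ) { id machine { podHostId } } }"
    )
```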
Filter by Allowed CUDA Versions
Pass allowedCudaVersions as a list of the CUDA versions you want to allow for the Pod's GPU, so the Pod is only matched with hosts that support one of the listed versions.
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{
"query": "mutation { podResume( input: { podId: \"inzk6tzuz833h5\", gpuCount: 1, allowedCudaVersions: [\"12.0\", \"12.1\", \"12.2\", \"12.3\"] } ) { id desiredStatus imageName env machineId machine { podHostId } } }"
}'
mutation {
podResume(input: {
podId: "inzk6tzuz833h5",
gpuCount: 1,
allowedCudaVersions: ["12.0", "12.1", "12.2", "12.3"]
}) {
id
desiredStatus
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podResume": {
"id": "inzk6tzuz833h5",
"desiredStatus": "RUNNING",
"imageName": "runpod/tensorflow",
"env": [
{ "key": "JUPYTER_PASSWORD", "value": "ywm4c9r15j1x6gfrds5n" }
],
"machineId": "hpvdausak8xb",
"machine": {
"podHostId": "inzk6tzuz833h5-64410065"
}
}
}
}
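Since a Python list of version strings and a GraphQL list literal render identically under JSON rules, the filter argument can be generated rather than typed. A small sketch (the `cuda_filter` helper is ours):

```python
import json

def cuda_filter(versions: list) -> str:
    # Render a Python list of version strings as a GraphQL list argument.
    return "allowedCudaVersions: " + json.dumps(versions)
```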
Start Pods
Start On-Demand Pod
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podResume( input: { podId: \"inzk6tzuz833h5\", gpuCount: 1 } ) { id desiredStatus imageName env machineId machine { podHostId } } }"}'
mutation {
podResume(input: {podId: "inzk6tzuz833h5", gpuCount: 1}) {
id
desiredStatus
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podResume": {
"id": "inzk6tzuz833h5",
"desiredStatus": "RUNNING",
"imageName": "runpod/tensorflow",
"env": [
"JUPYTER_PASSWORD=ywm4c9r15j1x6gfrds5n"
],
"machineId": "hpvdausak8xb",
"machine": {
"podHostId": "inzk6tzuz833h5-64410065"
}
}
}
}
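The response above returns `env` as a list of `"KEY=value"` strings. If you want them as a mapping, splitting on the first `=` preserves values that themselves contain `=`; a minimal sketch (the `parse_env` helper is ours):

```python
def parse_env(env_list: list) -> dict:
    # Split each "KEY=value" entry on the first "=" only,
    # so values containing "=" survive intact.
    return dict(item.split("=", 1) for item in env_list)
```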
Start Spot Pod
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podBidResume( input: { podId: \"d62t7qg9n5vtan\", bidPerGpu: 0.2, gpuCount: 1 } ) { id desiredStatus imageName env machineId machine { podHostId } } }"}'
mutation {
podBidResume(input: {podId: "d62t7qg9n5vtan", bidPerGpu: 0.2, gpuCount: 1}) {
id
desiredStatus
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podBidResume": {
"id": "d62t7qg9n5vtan",
"desiredStatus": "RUNNING",
"imageName": "runpod/tensorflow",
"env": [
"JUPYTER_PASSWORD=vunw9ybnzqwpia2795p2"
],
"machineId": "hpvdausak8xb",
"machine": {
"podHostId": "d62t7qg9n5vtan-64410065"
}
}
}
}
Filter by CUDA Version
Pass allowedCudaVersions as a list of the CUDA versions you want to allow for the Pod's GPU, so the Pod is only matched with hosts that support one of the listed versions.
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{
"query": "mutation { podRentInterruptable( input: { bidPerGpu: 0.2, cloudType: SECURE, gpuCount: 1, volumeInGb: 40, containerDiskInGb: 40, minVcpuCount: 2, minMemoryInGb: 15, gpuTypeId: \"NVIDIA RTX A6000\", name: \"RunPod Pytorch\", imageName: \"runpod/pytorch\", dockerArgs: \"\", ports: \"8888/http\", volumeMountPath: \"/workspace\", env: [{ key: \"JUPYTER_PASSWORD\", value: \"vunw9ybnzqwpia2795p2\" }], allowedCudaVersions: [\"12.0\", \"12.1\", \"12.2\", \"12.3\"] } ) { id imageName env machineId machine { podHostId } } }"
}'
mutation {
podRentInterruptable(input: {
bidPerGpu: 0.2,
cloudType: SECURE,
gpuCount: 1,
volumeInGb: 40,
containerDiskInGb: 40,
minVcpuCount: 2,
minMemoryInGb: 15,
gpuTypeId: "NVIDIA RTX A6000",
name: "RunPod Pytorch",
imageName: "runpod/pytorch",
dockerArgs: "",
ports: "8888/http",
volumeMountPath: "/workspace",
env: [{ key: "JUPYTER_PASSWORD", value: "vunw9ybnzqwpia2795p2" }],
allowedCudaVersions: ["12.0", "12.1", "12.2", "12.3"]
}) {
id
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podRentInterruptable": {
"id": "your_pod_id",
"imageName": "runpod/pytorch",
"env": [
{ "key": "JUPYTER_PASSWORD", "value": "vunw9ybnzqwpia2795p2" }
],
"machineId": "your_machine_id",
"machine": {
"podHostId": "your_pod_host_id"
}
}
}
}
Stop Pods
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podStop(input: {podId: \"riixlu8oclhp\"}) { id desiredStatus } }"}'
mutation {
podStop(input: {podId: "riixlu8oclhp"}) {
id
desiredStatus
}
}
{
"data": {
"podStop": {
"id": "riixlu8oclhp",
"desiredStatus": "EXITED"
}
}
}
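A successful stop reports desiredStatus EXITED, as in the output above, so a caller can verify the result with a simple check. A sketch (the `is_stopped` helper name is ours):

```python
def is_stopped(response: dict) -> bool:
    # A successful podStop mutation reports desiredStatus "EXITED".
    return response["data"]["podStop"]["desiredStatus"] == "EXITED"
```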