Quick reference
| Operation | Mutation/Query |
|---|---|
| Create on-demand Pod | podFindAndDeployOnDemand |
| Create spot Pod | podRentInterruptable |
| Start Pod | podResume |
| Start spot Pod | podBidResume |
| Stop Pod | podStop |
| List all Pods | myself { pods { ... } } |
| Get Pod by ID | pod(input: {podId: "..."}) |
| List GPU types | gpuTypes |
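Every operation below is a POST of a JSON body `{"query": "..."}` to the same GraphQL endpoint. Building that request can be done with Python's standard library alone; this is a minimal sketch mirroring the cURL examples (the `api_key` query parameter is assumed exactly as shown there):

```python
import json
import urllib.request

API_URL = "https://api.runpod.io/graphql"

def build_request(query: str, api_key: str) -> urllib.request.Request:
    """Build the same POST request the cURL examples in this page send."""
    return urllib.request.Request(
        url=f"{API_URL}?api_key={api_key}",
        data=json.dumps({"query": query}).encode(),
        headers={"content-type": "application/json"},
        method="POST",
    )

# urllib.request.urlopen(build_request(query, api_key)) would return the JSON response.
```

Any of the mutations and queries below can be passed as the `query` string.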
Create a Pod
On-demand Pod
On-demand Pods provide guaranteed compute at a fixed price.
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podFindAndDeployOnDemand( input: { cloudType: ALL, gpuCount: 1, volumeInGb: 40, containerDiskInGb: 40, minVcpuCount: 2, minMemoryInGb: 15, gpuTypeId: \"NVIDIA RTX A6000\", name: \"Runpod Tensorflow\", imageName: \"runpod/tensorflow\", dockerArgs: \"\", ports: \"8888/http\", volumeMountPath: \"/workspace\", env: [{ key: \"JUPYTER_PASSWORD\", value: \"your-password\" }] } ) { id imageName env machineId machine { podHostId } } }"}'
mutation {
podFindAndDeployOnDemand(
input: {
cloudType: ALL
gpuCount: 1
volumeInGb: 40
containerDiskInGb: 40
minVcpuCount: 2
minMemoryInGb: 15
gpuTypeId: "NVIDIA RTX A6000"
name: "Runpod Tensorflow"
imageName: "runpod/tensorflow"
dockerArgs: ""
ports: "8888/http"
volumeMountPath: "/workspace"
env: [{ key: "JUPYTER_PASSWORD", value: "your-password" }]
}
) {
id
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podFindAndDeployOnDemand": {
"id": "50qynxzilsxoey",
"imageName": "runpod/tensorflow",
"env": ["JUPYTER_PASSWORD=your-password"],
"machineId": "hpvdausak8xb",
"machine": {
"podHostId": "50qynxzilsxoey-64410065"
}
}
}
}
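The `id` returned here is the `podId` that the later `podStop` and `podResume` calls expect, so it is worth extracting immediately. A sketch operating on the sample output above:

```python
import json

# Sample podFindAndDeployOnDemand response, copied from the output above.
raw = '''{
  "data": {
    "podFindAndDeployOnDemand": {
      "id": "50qynxzilsxoey",
      "imageName": "runpod/tensorflow",
      "env": ["JUPYTER_PASSWORD=your-password"],
      "machineId": "hpvdausak8xb",
      "machine": {"podHostId": "50qynxzilsxoey-64410065"}
    }
  }
}'''

pod = json.loads(raw)["data"]["podFindAndDeployOnDemand"]
pod_id = pod["id"]                     # use as podId in podStop / podResume
host_id = pod["machine"]["podHostId"]  # machine-scoped host identifier
print(pod_id, host_id)
```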
Spot Pod
Spot Pods offer lower prices but can be interrupted when demand is high.
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podRentInterruptable( input: { bidPerGpu: 0.2, cloudType: SECURE, gpuCount: 1, volumeInGb: 40, containerDiskInGb: 40, minVcpuCount: 2, minMemoryInGb: 15, gpuTypeId: \"NVIDIA RTX A6000\", name: \"Runpod Pytorch\", imageName: \"runpod/pytorch\", dockerArgs: \"\", ports: \"8888/http\", volumeMountPath: \"/workspace\", env: [{ key: \"JUPYTER_PASSWORD\", value: \"your-password\" }] } ) { id imageName env machineId machine { podHostId } } }"}'
mutation {
podRentInterruptable(
input: {
bidPerGpu: 0.2
cloudType: SECURE
gpuCount: 1
volumeInGb: 40
containerDiskInGb: 40
minVcpuCount: 2
minMemoryInGb: 15
gpuTypeId: "NVIDIA RTX A6000"
name: "Runpod Pytorch"
imageName: "runpod/pytorch"
dockerArgs: ""
ports: "8888/http"
volumeMountPath: "/workspace"
env: [{ key: "JUPYTER_PASSWORD", value: "your-password" }]
}
) {
id
imageName
env
machineId
machine {
podHostId
}
}
}
{
"data": {
"podRentInterruptable": {
"id": "fkjbybgpwuvmhk",
"imageName": "runpod/pytorch",
"env": ["JUPYTER_PASSWORD=your-password"],
"machineId": "hpvdausak8xb",
"machine": {
"podHostId": "fkjbybgpwuvmhk-64410065"
}
}
}
}
Filter by CUDA version
Use allowedCudaVersions to restrict Pods to machines with specific CUDA versions.
- cURL
- GraphQL
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{
"query": "mutation { podFindAndDeployOnDemand( input: { cloudType: ALL, gpuCount: 1, volumeInGb: 40, containerDiskInGb: 40, gpuTypeId: \"NVIDIA RTX A6000\", name: \"Runpod Pytorch\", imageName: \"runpod/pytorch\", allowedCudaVersions: [\"12.0\", \"12.1\", \"12.2\", \"12.3\"] } ) { id imageName machineId } }"
}'
mutation {
podFindAndDeployOnDemand(
input: {
cloudType: ALL
gpuCount: 1
volumeInGb: 40
containerDiskInGb: 40
gpuTypeId: "NVIDIA RTX A6000"
name: "Runpod Pytorch"
imageName: "runpod/pytorch"
allowedCudaVersions: ["12.0", "12.1", "12.2", "12.3"]
}
) {
id
imageName
machineId
}
}
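Note the escaping in the cURL variants: the GraphQL document is itself a JSON string, so every inner quote must be backslash-escaped. If you build the payload programmatically, `json.dumps` handles that for you (a Python sketch; the query is the same one shown above):

```python
import json

query = '''mutation {
  podFindAndDeployOnDemand(
    input: {
      cloudType: ALL
      gpuCount: 1
      volumeInGb: 40
      containerDiskInGb: 40
      gpuTypeId: "NVIDIA RTX A6000"
      name: "Runpod Pytorch"
      imageName: "runpod/pytorch"
      allowedCudaVersions: ["12.0", "12.1", "12.2", "12.3"]
    }
  ) { id imageName machineId }
}'''

# json.dumps escapes the inner quotes and newlines,
# producing the --data body of the cURL call.
payload = json.dumps({"query": query})
print(payload[:40])
```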
Start a Pod
Resume a stopped Pod. Use podResume for on-demand Pods or podBidResume for spot Pods.
On-demand Pod
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podResume( input: { podId: \"YOUR_POD_ID\", gpuCount: 1 } ) { id desiredStatus imageName } }"}'
mutation {
podResume(input: { podId: "YOUR_POD_ID", gpuCount: 1 }) {
id
desiredStatus
imageName
}
}
{
"data": {
"podResume": {
"id": "YOUR_POD_ID",
"desiredStatus": "RUNNING",
"imageName": "runpod/tensorflow"
}
}
}
You can also pass allowedCudaVersions when resuming, to require a machine with a matching CUDA version:
mutation {
podResume(input: {
podId: "YOUR_POD_ID",
gpuCount: 1,
allowedCudaVersions: ["12.0", "12.1", "12.2", "12.3"]
}) {
id
desiredStatus
}
}
Spot Pod
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podBidResume( input: { podId: \"YOUR_POD_ID\", bidPerGpu: 0.2, gpuCount: 1 } ) { id desiredStatus imageName } }"}'
mutation {
podBidResume(input: { podId: "YOUR_POD_ID", bidPerGpu: 0.2, gpuCount: 1 }) {
id
desiredStatus
imageName
}
}
{
"data": {
"podBidResume": {
"id": "YOUR_POD_ID",
"desiredStatus": "RUNNING",
"imageName": "runpod/tensorflow"
}
}
}
Stop a Pod
Stopping a Pod releases the GPU while preserving your volume data.
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "mutation { podStop(input: {podId: \"YOUR_POD_ID\"}) { id desiredStatus } }"}'
mutation {
podStop(input: { podId: "YOUR_POD_ID" }) {
id
desiredStatus
}
}
{
"data": {
"podStop": {
"id": "YOUR_POD_ID",
"desiredStatus": "EXITED"
}
}
}
Query Pods
List all Pods
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "query { myself { pods { id name runtime { uptimeInSeconds gpus { id gpuUtilPercent memoryUtilPercent } container { cpuPercent memoryPercent } } } } }"}'
query {
myself {
pods {
id
name
runtime {
uptimeInSeconds
ports {
ip
isIpPublic
privatePort
publicPort
type
}
gpus {
id
gpuUtilPercent
memoryUtilPercent
}
container {
cpuPercent
memoryPercent
}
}
}
}
}
{
"data": {
"myself": {
"pods": [
{
"id": "ldl1dxirsim64n",
"name": "Runpod Pytorch",
"runtime": {
"uptimeInSeconds": 3931,
"ports": [
{
"ip": "100.65.0.101",
"isIpPublic": false,
"privatePort": 8888,
"publicPort": 60141,
"type": "http"
}
],
"gpus": [
{
"id": "GPU-e0488b7e-6932-795b-a125-4472c16ea72c",
"gpuUtilPercent": 0,
"memoryUtilPercent": 0
}
],
"container": {
"cpuPercent": 0,
"memoryPercent": 0
}
}
}
]
}
}
}
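A response like the one above reduces naturally to a one-line status per Pod. A sketch over a trimmed sample output, written defensively on the assumption that `runtime` may be null for Pods that are not running:

```python
import json

raw = '''{
  "data": {
    "myself": {
      "pods": [
        {
          "id": "ldl1dxirsim64n",
          "name": "Runpod Pytorch",
          "runtime": {
            "uptimeInSeconds": 3931,
            "gpus": [{"id": "GPU-e0488b7e", "gpuUtilPercent": 0, "memoryUtilPercent": 0}],
            "container": {"cpuPercent": 0, "memoryPercent": 0}
          }
        }
      ]
    }
  }
}'''

lines = []
for pod in json.loads(raw)["data"]["myself"]["pods"]:
    rt = pod["runtime"] or {}          # assumption: runtime is null when not running
    up = rt.get("uptimeInSeconds", 0)
    gpus = rt.get("gpus") or []
    util = max((g["gpuUtilPercent"] for g in gpus), default=0)
    lines.append(f'{pod["id"]} {pod["name"]}: up {up}s, peak GPU util {util}%')

print("\n".join(lines))
```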
Get Pod by ID
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "query { pod(input: {podId: \"YOUR_POD_ID\"}) { id name runtime { uptimeInSeconds gpus { id gpuUtilPercent memoryUtilPercent } } } }"}'
query {
pod(input: { podId: "YOUR_POD_ID" }) {
id
name
runtime {
uptimeInSeconds
ports {
ip
isIpPublic
privatePort
publicPort
type
}
gpus {
id
gpuUtilPercent
memoryUtilPercent
}
container {
cpuPercent
memoryPercent
}
}
}
}
{
"data": {
"pod": {
"id": "YOUR_POD_ID",
"name": "Runpod Pytorch",
"runtime": {
"uptimeInSeconds": 11,
"ports": [
{
"ip": "100.65.0.101",
"isIpPublic": false,
"privatePort": 8888,
"publicPort": 60141,
"type": "http"
}
],
"gpus": [
{
"id": "GPU-e0488b7e-6932-795b-a125-4472c16ea72c",
"gpuUtilPercent": 0,
"memoryUtilPercent": 0
}
],
"container": {
"cpuPercent": 0,
"memoryPercent": 0
}
}
}
}
}
Query GPU types
List available GPU types to find the gpuTypeId needed when creating Pods.
List all GPU types
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "query { gpuTypes { id displayName memoryInGb } }"}'
query {
gpuTypes {
id
displayName
memoryInGb
}
}
{
"data": {
"gpuTypes": [
{
"id": "NVIDIA GeForce RTX 3070",
"displayName": "RTX 3070",
"memoryInGb": 8
},
{
"id": "NVIDIA GeForce RTX 3080",
"displayName": "RTX 3080",
"memoryInGb": 10
},
{
"id": "NVIDIA RTX A6000",
"displayName": "RTX A6000",
"memoryInGb": 48
}
]
}
}
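The id values returned here are exactly the gpuTypeId strings the create mutations expect. Choosing the smallest GPU that meets a memory requirement is then a simple filter; a sketch over the sample output above (the helper name is illustrative, not part of the API):

```python
import json

# gpuTypes list, copied from the sample output above.
raw = '''[
  {"id": "NVIDIA GeForce RTX 3070", "displayName": "RTX 3070", "memoryInGb": 8},
  {"id": "NVIDIA GeForce RTX 3080", "displayName": "RTX 3080", "memoryInGb": 10},
  {"id": "NVIDIA RTX A6000", "displayName": "RTX A6000", "memoryInGb": 48}
]'''

def smallest_gpu_with(gpu_types, min_memory_gb):
    """Return the gpuTypeId of the smallest GPU meeting a memory requirement."""
    fitting = [g for g in gpu_types if g["memoryInGb"] >= min_memory_gb]
    return min(fitting, key=lambda g: g["memoryInGb"])["id"] if fitting else None

gpu_types = json.loads(raw)
print(smallest_gpu_with(gpu_types, 24))  # "NVIDIA RTX A6000"
```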
Get GPU type details
Query a specific GPU type to see pricing and availability.
- cURL
- GraphQL
- Output
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "query { gpuTypes(input: {id: \"NVIDIA GeForce RTX 3090\"}) { id displayName memoryInGb secureCloud communityCloud lowestPrice(input: {gpuCount: 1}) { minimumBidPrice uninterruptablePrice } } }"}'
query {
gpuTypes(input: { id: "NVIDIA GeForce RTX 3090" }) {
id
displayName
memoryInGb
secureCloud
communityCloud
lowestPrice(input: { gpuCount: 1 }) {
minimumBidPrice
uninterruptablePrice
}
}
}
{
"data": {
"gpuTypes": [
{
"id": "NVIDIA GeForce RTX 3090",
"displayName": "RTX 3090",
"memoryInGb": 24,
"secureCloud": false,
"communityCloud": true,
"lowestPrice": {
"minimumBidPrice": 0.163,
"uninterruptablePrice": 0.3
}
}
]
}
}
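minimumBidPrice is the floor for podRentInterruptable bids, and uninterruptablePrice is the on-demand rate. One common pattern is to bid a margin above the floor, capped at the on-demand price; this helper is a hypothetical sketch of that heuristic, not part of the API:

```python
def spot_bid(minimum_bid: float, on_demand: float, margin: float = 0.10) -> float:
    """Bid `margin` above the current floor, never above the on-demand price."""
    return round(min(minimum_bid * (1 + margin), on_demand), 3)

# With the sample RTX 3090 prices above:
print(spot_bid(0.163, 0.3))
```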
Check GPU availability
Use the stockStatus field to check availability before creating a Pod. Values include High, Medium, Low, and None.
- cURL
- GraphQL
- Output (high stock)
- Output (low stock)
curl --request POST \
--header 'content-type: application/json' \
--url 'https://api.runpod.io/graphql?api_key=${YOUR_API_KEY}' \
--data '{"query": "query { gpuTypes(input: { id: \"NVIDIA RTX A4000\" }) { id displayName lowestPrice(input: { gpuCount: 1, secureCloud: true }) { stockStatus minimumBidPrice uninterruptablePrice availableGpuCounts } } }"}'
query {
gpuTypes(input: { id: "NVIDIA RTX A4000" }) {
id
displayName
lowestPrice(input: { gpuCount: 1, secureCloud: true }) {
stockStatus
minimumBidPrice
uninterruptablePrice
availableGpuCounts
}
}
}
{
"data": {
"gpuTypes": [
{
"id": "NVIDIA RTX A4000",
"displayName": "RTX A4000",
"lowestPrice": {
"stockStatus": "High",
"minimumBidPrice": 0.2,
"uninterruptablePrice": 0.35,
"availableGpuCounts": [1, 2, 4]
}
}
]
}
}
{
"data": {
"gpuTypes": [
{
"id": "NVIDIA RTX A4000",
"displayName": "RTX A4000",
"lowestPrice": {
"stockStatus": "Low",
"minimumBidPrice": 0.16,
"uninterruptablePrice": 0.24,
"availableGpuCounts": [1, 2, 3, 4, 5, 6, 7]
}
}
]
}
}
If stockStatus is Low, there are very few GPUs available. Consider selecting an alternative GPU type or trying again later.
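That check can be automated before deploying, by mapping the documented status strings to an ordering and gating on a minimum level (a sketch; the function and threshold are illustrative):

```python
def should_deploy(stock_status: str, min_level: str = "Medium") -> bool:
    """Gate Pod creation on the stockStatus reported by lowestPrice."""
    levels = {"None": 0, "Low": 1, "Medium": 2, "High": 3}
    return levels.get(stock_status, 0) >= levels[min_level]

print(should_deploy("High"))  # True
print(should_deploy("Low"))   # False
```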