Runpod console

The web interface for managing your compute resources, account, teams, and billing.

Serverless

A pay-as-you-go compute solution designed for dynamic autoscaling in production AI/ML apps.
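
A minimal sketch of invoking a Serverless endpoint over HTTP. The endpoint ID and input payload below are placeholders; substitute the values shown for your endpoint in the Runpod console.

```python
import os

import requests

# Placeholder endpoint ID; replace with your own from the console.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = os.environ["RUNPOD_API_KEY"]

# /runsync blocks until the worker returns a result; use /run for
# asynchronous jobs and poll the job status instead.
response = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"prompt": "Hello, world!"}},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```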

Pod

A dedicated GPU or CPU instance for containerized AI/ML workloads, such as training models, running inference, or other compute-intensive tasks.

Public Endpoint

An AI model API hosted by Runpod that you can access directly without deploying your own infrastructure.

Instant Cluster

A managed compute cluster with high-speed networking for multi-node distributed workloads like training large AI models.

Network volume

Persistent storage that exists independently of your other compute resources and can be attached to multiple Pods or Serverless endpoints to share data between machines.

S3-compatible API

A storage interface compatible with Amazon S3 for uploading, downloading, and managing files in your network volumes.
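
A minimal sketch of using the S3-compatible API with boto3, assuming a network volume addressed by its volume ID as the bucket name. The endpoint URL, region, credential variables, and volume ID are placeholders that depend on your account and the data center hosting the volume.

```python
import os

import boto3

# Placeholder endpoint and region; use the values for the data
# center where your network volume lives.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3api-eu-ro-1.runpod.io",
    region_name="eu-ro-1",
    aws_access_key_id=os.environ["RUNPOD_S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["RUNPOD_S3_SECRET_KEY"],
)

VOLUME_ID = "your-network-volume-id"  # placeholder

# Upload a file, then list the volume's contents.
s3.upload_file("model.safetensors", VOLUME_ID, "models/model.safetensors")
for obj in s3.list_objects_v2(Bucket=VOLUME_ID).get("Contents", []):
    print(obj["Key"], obj["Size"])
```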

Runpod Hub

A repository for discovering, deploying, and sharing preconfigured AI projects optimized for Runpod.

Container

A Docker-based environment that packages your code, dependencies, and runtime into a portable unit that runs consistently across machines.

Data center

Physical facilities where Runpod’s GPU and CPU hardware is located. Your choice of data center can affect latency, available GPU types, and pricing.

Machine

The physical server hardware within a data center that hosts your workloads. Each machine contains CPUs, GPUs, memory, and storage.