Runpod provides an S3-compatible API for direct access to your network volumes. This lets you manage files on a network volume without launching a Pod, reducing cost and operational friction. Using the S3-compatible API does not affect pricing: network volumes are billed hourly at $0.07/GB/month for the first 1 TB and $0.05/GB/month for additional storage.

Datacenter availability

The S3-compatible API is available for network volumes in select datacenters. Each datacenter has a unique endpoint URL that you’ll use when calling the API:
| Datacenter | Endpoint URL |
| --- | --- |
| EUR-IS-1 | https://s3api-eur-is-1.runpod.io/ |
| EU-RO-1 | https://s3api-eu-ro-1.runpod.io/ |
| EU-CZ-1 | https://s3api-eu-cz-1.runpod.io/ |
| US-KS-2 | https://s3api-us-ks-2.runpod.io/ |
Create your network volume in a supported datacenter to use the S3-compatible API.
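If you're scripting against the API, the table above can be captured in a small lookup helper. This is just a convenience sketch; the `endpoint_for` function name and dict are ours, not part of any Runpod SDK:

```python
# Endpoint URLs for datacenters that support the S3-compatible API
# (taken from the table above).
S3_ENDPOINTS = {
    "EUR-IS-1": "https://s3api-eur-is-1.runpod.io/",
    "EU-RO-1": "https://s3api-eu-ro-1.runpod.io/",
    "EU-CZ-1": "https://s3api-eu-cz-1.runpod.io/",
    "US-KS-2": "https://s3api-us-ks-2.runpod.io/",
}


def endpoint_for(datacenter: str) -> str:
    """Return the S3 endpoint URL for a datacenter ID such as "EU-RO-1"."""
    try:
        return S3_ENDPOINTS[datacenter]
    except KeyError:
        raise ValueError(
            f"{datacenter} does not support the S3-compatible API"
        ) from None
```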

Setup and authentication

Step 1: Create a network volume

First, create a network volume in a supported datacenter. See Network volumes -> Create a network volume for detailed instructions.
Step 2: Create an S3 API key

Next, you’ll need to generate a new key called an “S3 API key” (this is separate from your Runpod API key).
  1. Go to the Settings page in the Runpod console.
  2. Expand S3 API Keys and select Create an S3 API key.
  3. Name your key and select Create.
  4. Save the access key (e.g., user_***...) and secret (e.g., rps_***...) to use in the next step.
For security, Runpod shows your API key secret only once, so save it somewhere safe (e.g., in a password manager or a GitHub secret). Treat the secret like a password and don't share it with anyone.
Step 3: Configure AWS CLI

To use the S3-compatible API with your Runpod network volumes, you must configure your AWS CLI with the Runpod S3 API key you created.
  1. If you haven’t already, install the AWS CLI on your local machine.
  2. Run the command aws configure in your terminal.
  3. Provide the following when prompted:
    • AWS Access Key ID: Enter your Runpod user ID. You can find it in the Secrets section of the Runpod console, in the description of your S3 API key. By default, the description looks similar to Shared Secret for user_2f21CfO73Mm2Uq2lEGFiEF24IPw 1749176107073, where user_2f21CfO73Mm2Uq2lEGFiEF24IPw is the user ID (yours will be different).
    • AWS Secret Access Key: Enter your Runpod S3 API key’s secret access key.
    • Default Region name: You can leave this blank.
    • Default output format: You can leave this blank or set it to json.
This will configure the AWS CLI to use your Runpod S3 API key by storing these details in your AWS credentials file (typically at ~/.aws/credentials).
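The resulting credentials file should look similar to the following. The user ID shown is the example from the step above and the secret is a placeholder; yours will differ:

```
[default]
aws_access_key_id = user_2f21CfO73Mm2Uq2lEGFiEF24IPw
aws_secret_access_key = rps_***...
```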

Using the S3-compatible API

You can use the S3-compatible API to interact with your Runpod network volumes using standard S3 tools. Core AWS CLI operations such as ls, cp, mv, rm, and sync work as expected.

s3 CLI examples

When using aws s3 commands, you must pass in the endpoint URL for your network volume using the --endpoint-url flag and the datacenter ID using the --region flag. Unlike traditional S3 key-value stores, object names in the Runpod S3-compatible API correspond to actual file paths on your network volume. Object names containing special characters (e.g., #) may need to be URL-encoded to ensure proper processing.
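For example, you could list the contents of a network volume, or copy a local file onto it, like this (NETWORK_VOLUME_ID, DATACENTER, LOCAL_FILE, and REMOTE_FILE are placeholders for your own values):

```
aws s3 ls s3://NETWORK_VOLUME_ID \
    --region DATACENTER \
    --endpoint-url https://s3api-DATACENTER.runpod.io/

aws s3 cp LOCAL_FILE s3://NETWORK_VOLUME_ID/REMOTE_FILE \
    --region DATACENTER \
    --endpoint-url https://s3api-DATACENTER.runpod.io/
```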

s3api CLI example

You can also use aws s3api commands (instead of aws s3) to interact with the S3-compatible API. For example, here’s how you could use aws s3api get-object to download an object from a network volume:
aws s3api get-object --bucket NETWORK_VOLUME_ID \
    --key REMOTE_FILE \
    --region DATACENTER \
    --endpoint-url https://s3api-DATACENTER.runpod.io/ \
    LOCAL_FILE
Replace LOCAL_FILE with the desired path and name of the file after download—for example: ~/local-dir/my-file.txt. For a list of available s3api commands, see the AWS s3api reference.
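The upload direction works the same way. For instance, aws s3api put-object can copy a local file to the volume (same placeholders as above):

```
aws s3api put-object --bucket NETWORK_VOLUME_ID \
    --key REMOTE_FILE \
    --body LOCAL_FILE \
    --region DATACENTER \
    --endpoint-url https://s3api-DATACENTER.runpod.io/
```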

Boto3 Python example

You can also use the Boto3 Python library to transfer files to and from a Runpod network volume through the S3-compatible API. A typical upload script takes command-line arguments for the network volume ID (used as the S3 bucket name), the datacenter-specific endpoint URL, the local file path, the destination object key (the file path on the network volume), and the AWS Region (which corresponds to the Runpod datacenter ID).

Uploading very large files

You can upload large files to network volumes using S3 multipart upload operations (see the compatibility reference below). For very large files (10 GB+), you can also use this helper script, available for download on GitHub, which improves reliability by handling timeouts and retries automatically. Here's an example of how to run the script using command-line arguments:
./upload_large_file.py --file /path/to/large/file.mp4 \
     --bucket NETWORK_VOLUME_ID \
     --access_key YOUR_ACCESS_KEY_ID \
     --secret_key YOUR_SECRET_ACCESS_KEY \
     --endpoint https://s3api-eur-is-1.runpod.io/ \
     --region EUR-IS-1

S3 API compatibility reference

The tables below show which S3 API operations and AWS CLI commands are currently supported, so you can see what functionality is available and plan your development workflows accordingly. For detailed information on these operations, refer to the AWS S3 API documentation.
If an operation is not listed below, it is not currently implemented. We are continuously expanding the S3-compatible API based on user needs and usage patterns.

Known issues and limitations

Reference documentation

For comprehensive documentation on AWS S3 commands and libraries, refer to: