

Learn how to build and deploy applications on the RunPod platform with this set of tutorials, covering tools, technologies, and deployment methods including containers, Docker, and Serverless implementations.


Explore how to run and deploy AI applications using RunPod's Serverless platform.


  • Generate images with SDXL Turbo: Learn how to build a web application using RunPod's Serverless Workers and SDXL Turbo from Stability AI, a fast text-to-image model, and send requests to an Endpoint to generate images from text-based inputs.
  • Run Google's Gemma model: Deploy Google's Gemma model on RunPod's vLLM Worker, create a Serverless Endpoint, and interact with the model using OpenAI APIs and Python.
  • Run your first serverless endpoint with Stable Diffusion: Use RunPod's Stable Diffusion v1 inference endpoint to generate images, set up your serverless worker, start a job, check job status, and retrieve results.
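The serverless tutorials above all follow the same job pattern: POST an input payload to an Endpoint's `/run` route, then poll `/status/{id}` until the job finishes. A minimal sketch in Python using only the standard library; the endpoint ID and API key are placeholders you would replace with your own:

```python
import json
import urllib.request

API_KEY = "YOUR_RUNPOD_API_KEY"    # placeholder: your RunPod API key
ENDPOINT_ID = "YOUR_ENDPOINT_ID"   # placeholder: your Serverless Endpoint's ID
BASE_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}"

def build_job(prompt: str) -> dict:
    """RunPod serverless jobs wrap model inputs under an 'input' key."""
    return {"input": {"prompt": prompt}}

def submit_job(prompt: str) -> str:
    """Submit a job to the /run route and return its job ID for polling."""
    req = urllib.request.Request(
        f"{BASE_URL}/run",
        data=json.dumps(build_job(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # Poll f"{BASE_URL}/status/{job_id}" with this ID to retrieve results
        return json.load(resp)["id"]
```

The exact shape of the `input` object depends on the worker; the Stable Diffusion tutorial above walks through the fields its endpoint expects.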



Discover how to leverage RunPod Pods to run and manage your AI applications.


  • Fine tune an LLM with Axolotl on RunPod: Learn how to fine-tune large language models with Axolotl on RunPod, a streamlined workflow for configuring and training AI models with GPU resources, and explore examples for LLaMA2, Gemma, LLaMA3, and Jamba.
  • Run Fooocus in Jupyter Notebook: Learn how to run Fooocus, an open-source image generation tool, in a Jupyter Notebook and launch the Gradio-based interface in under 5 minutes, with minimal requirements of 4 GB of Nvidia GPU memory and 8 GB of system memory.
  • How To Connect to a Pod Instance through VSCode: Learn how to connect to a RunPod Pod instance through VSCode for seamless development and management.
  • Build Docker Images on RunPod with Bazel: Learn how to build Docker images on RunPod using Bazel, a powerful build tool for creating consistent and efficient builds.
  • Set up Ollama on your GPU Pod: Set up Ollama, a tool for running large language models locally, on a GPU Pod using RunPod, and interact with it through HTTP API requests, harnessing GPU acceleration for your AI projects.
  • Run your first Fast Stable Diffusion with Jupyter Notebook: Deploy a Jupyter Notebook to RunPod and generate your first image with Stable Diffusion in just 20 minutes, requiring a Hugging Face user access token, RunPod infrastructure, and basic familiarity with the platform.
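As the Ollama tutorial above notes, once Ollama is running on a Pod it exposes an HTTP API (on port 11434 by default). A hedged sketch of a non-streaming request to its `/api/generate` route, assuming the model named here has already been pulled:

```python
import json
import urllib.request

# Ollama's default local address; route through your Pod's exposed URL as needed
OLLAMA_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate; stream=False returns one JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_generate_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With streaming left at its default, Ollama instead sends a sequence of JSON lines; setting `"stream": False` keeps the client code simple for a first test.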



Understand the use of Docker images and containers within the RunPod ecosystem.

  • Persist data outside of containers: Learn how to persist data outside of containers by creating named volumes, mounting volumes to data directories, and accessing persisted data from multiple container runs and removals in Docker.
  • Containers overview: Discover the world of containerization with Docker, a platform for isolated environments that package applications, frameworks, and libraries into self-contained containers for consistent and reliable deployment across diverse computing environments.
  • Dockerfile: Learn how to create a Dockerfile to customize a Docker image and use an entrypoint script to run a command when the container starts, making it a reusable and executable unit for deploying and sharing applications.
  • Docker commands: RunPod enables BYOC development with Docker, providing a reference sheet for commonly used Docker commands, including login, images, containers, Dockerfile, volumes, network, and execute.
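The named-volume workflow from the data-persistence guide above can be sketched as a short shell session; the volume name and image are placeholders:

```shell
# Create a named volume; its data lives outside any container's writable layer
docker volume create my-data

# Mount it at /data inside a container and write a file to it
docker run --rm -v my-data:/data alpine sh -c 'echo hello > /data/greeting.txt'

# A later container run (after the first is removed) sees the same file
docker run --rm -v my-data:/data alpine cat /data/greeting.txt
```

Because the volume outlives the `--rm` containers, the second run prints the file the first one wrote.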


Explore how to integrate RunPod with other tools and platforms like OpenAI, SkyPilot, and Charm's Mods.


  • Overview: Use the OpenAI SDK to integrate with your Serverless Endpoints.
  • Running RunPod on Mods: Learn how to integrate RunPod into Charm's Mods toolchain and use it as the Serverless Endpoint.
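For the OpenAI integration above, RunPod's vLLM-based Serverless Endpoints expose an OpenAI-compatible route, so the standard chat-completions payload applies. A minimal standard-library sketch (endpoint ID, API key, and model name are placeholders; with the official OpenAI SDK you would instead point its `base_url` at the same route):

```python
import json
import urllib.request

API_KEY = "YOUR_RUNPOD_API_KEY"   # placeholder
ENDPOINT_ID = "YOUR_ENDPOINT_ID"  # placeholder
# OpenAI-compatible base URL for a RunPod Serverless Endpoint
BASE_URL = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/openai/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Standard OpenAI chat-completions payload shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(model: str, user_message: str) -> str:
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, user_message)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the route is OpenAI-compatible, any client that speaks the chat-completions protocol, including the OpenAI Python SDK, can target it by swapping the base URL and API key.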


Learn how to migrate from other tools and technologies to RunPod.


  • Cog Migration: Migrate your Cog model to RunPod by following this step-by-step guide, covering setup, model identification, Docker image building, and serverless endpoint creation.
  • Banana migration: Quickly migrate from Banana to RunPod with Docker, leveraging a bridge between the two environments for a seamless transition. Use a Dockerfile to encapsulate your environment and deploy existing projects to RunPod with minimal adjustments.