The @runpod/ai-sdk-provider package integrates Runpod Public Endpoints with the Vercel AI SDK. This gives you a streamlined, type-safe interface for text generation, streaming, and image generation in JavaScript and TypeScript projects. The Vercel AI SDK is a popular open-source library for building AI applications. By using the Runpod provider, you can access Runpod’s Public Endpoints using the same patterns and APIs you’d use with other AI providers like OpenAI or Anthropic.

Why use the Vercel AI SDK?

  • Unified interface: Use the same generateText, streamText, and generateImage functions regardless of which AI provider you’re using.
  • Type safety: Full TypeScript support with typed responses and parameters.
  • Streaming built-in: First-class support for streaming text responses.
  • Framework integrations: Works seamlessly with Next.js, React, Svelte, and other frameworks.
  • Provider switching: Easily switch between Runpod and other providers without rewriting your code.
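For example, switching from Runpod to another provider only means swapping the model instance; the call shape stays the same. A minimal sketch, assuming the separate @ai-sdk/openai package as the alternative provider:
import { runpod } from "@runpod/ai-sdk-provider";
// import { openai } from "@ai-sdk/openai"; // alternative provider (assumed installed)
import { generateText } from "ai";

// The same generateText call works with any provider; only `model` changes.
const { text } = await generateText({
  model: runpod("qwen3-32b-awq"), // or: openai("gpt-4o")
  prompt: "Say hello.",
});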

Installation

Install the Runpod provider alongside the Vercel AI SDK:
npm install @runpod/ai-sdk-provider ai

Configuration

Default configuration

The provider reads your API key from the RUNPOD_API_KEY environment variable by default. Import the runpod instance and start using it immediately:
import { runpod } from "@runpod/ai-sdk-provider";
Set the environment variable in your shell or .env file:
export RUNPOD_API_KEY="your-api-key"

Custom configuration

For more control, use createRunpod to create a custom provider instance:
import { createRunpod } from "@runpod/ai-sdk-provider";

const runpod = createRunpod({
  apiKey: "your-api-key",
  baseURL: "https://api.runpod.ai/v2",
  headers: {
    "X-Custom-Header": "value",
  },
});
| Option | Description | Default |
| --- | --- | --- |
| apiKey | Your Runpod API key | RUNPOD_API_KEY env var |
| baseURL | Base URL for API requests | https://api.runpod.ai/v2 |
| headers | Custom HTTP headers to include with requests | {} |

Text generation

Basic text generation

Use generateText to generate text from a prompt:
import { runpod } from "@runpod/ai-sdk-provider";
import { generateText } from "ai";

const { text, finishReason, usage } = await generateText({
  model: runpod("qwen3-32b-awq"),
  prompt: "Write a Python function that checks if a number is prime:",
});

console.log(text);
console.log(`Tokens used: ${usage.totalTokens}`);
The response includes:
  • text: The generated text
  • finishReason: Why generation stopped (stop, length, etc.); see the check below
  • usage: Token counts (promptTokens, completionTokens, totalTokens)
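For example, a finishReason of length means the output was cut off at the token limit. A minimal check, reusing the finishReason value destructured above:
if (finishReason === "length") {
  // Generation stopped at the maxTokens limit; the output is likely truncated.
  console.warn("Output was truncated; consider increasing maxTokens.");
}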

Chat conversations

For multi-turn conversations, pass a messages array instead of a prompt:
import { runpod } from "@runpod/ai-sdk-provider";
import { generateText } from "ai";

const { text } = await generateText({
  model: runpod("qwen3-32b-awq"),
  messages: [
    {
      role: "system",
      content: "You are a helpful coding assistant. Be concise.",
    },
    {
      role: "user",
      content: "How do I read a JSON file in Python?",
    },
  ],
});

console.log(text);
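To keep the conversation going, append the assistant's reply to the same array before sending the next user turn. A sketch of that pattern:
import { runpod } from "@runpod/ai-sdk-provider";
import { generateText } from "ai";

const messages: { role: "user" | "assistant"; content: string }[] = [
  { role: "user", content: "How do I read a JSON file in Python?" },
];

const first = await generateText({
  model: runpod("qwen3-32b-awq"),
  messages,
});

// Record the assistant's reply, then ask a follow-up with the full history.
messages.push({ role: "assistant", content: first.text });
messages.push({ role: "user", content: "And how do I write one back to disk?" });

const second = await generateText({
  model: runpod("qwen3-32b-awq"),
  messages,
});

console.log(second.text);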

Generation parameters

Control the generation behavior with additional parameters:
const { text } = await generateText({
  model: runpod("qwen3-32b-awq"),
  prompt: "Write a creative story about a robot:",
  temperature: 0.8, // Higher = more creative (0-1)
  maxTokens: 500, // Maximum tokens to generate
  topP: 0.9, // Nucleus sampling threshold
});

Streaming

For real-time output (useful for chat interfaces), use streamText:
import { runpod } from "@runpod/ai-sdk-provider";
import { streamText } from "ai";

const { textStream } = await streamText({
  model: runpod("qwen3-32b-awq"),
  prompt: "Explain quantum computing in simple terms:",
  temperature: 0.7,
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
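In a server route you usually hand the stream straight to the HTTP response instead of iterating it yourself. A sketch, assuming the AI SDK's toTextStreamResponse() helper and a Next.js-style route handler:
import { runpod } from "@runpod/ai-sdk-provider";
import { streamText } from "ai";

// POST /api/chat: streams generated text back to the client.
export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamText({
    model: runpod("qwen3-32b-awq"),
    prompt,
  });

  return result.toTextStreamResponse();
}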

Streaming with callbacks

You can also use callbacks to handle streaming events:
import { runpod } from "@runpod/ai-sdk-provider";
import { streamText } from "ai";

const result = await streamText({
  model: runpod("qwen3-32b-awq"),
  prompt: "Write a poem about the ocean:",
  onChunk: ({ chunk }) => {
    if (chunk.type === "text-delta") {
      process.stdout.write(chunk.textDelta);
    }
  },
  onFinish: ({ text, usage }) => {
    console.log(`\n\nTotal tokens: ${usage.totalTokens}`);
  },
});

Image generation

Text-to-image

Generate images using models like Flux:
import { runpod } from "@runpod/ai-sdk-provider";
import { experimental_generateImage as generateImage } from "ai";
import { writeFileSync } from "fs";

const { image } = await generateImage({
  model: runpod.image("black-forest-labs-flux-1-dev"),
  prompt: "A serene mountain landscape at sunset, photorealistic",
  aspectRatio: "16:9",
});

// Save the image to a file
writeFileSync("output.png", image.uint8Array);

// Or access as base64
console.log(image.base64);
The response includes:
  • image.uint8Array: Binary image data
  • image.base64: Base64-encoded image
  • image.mimeType: Image MIME type (e.g., image/png)
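For instance, you can derive the file extension from the reported MIME type instead of hard-coding .png:
// "image/png" -> "png", "image/jpeg" -> "jpeg", etc.
const extension = image.mimeType.split("/")[1] ?? "png";
writeFileSync(`output.${extension}`, image.uint8Array);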

Image editing

Edit existing images by providing reference images:
import { runpod } from "@runpod/ai-sdk-provider";
import { experimental_generateImage as generateImage } from "ai";

const { image } = await generateImage({
  model: runpod.image("google-nano-banana-edit"),
  prompt: {
    text: "Add modern Scandinavian furniture to this room",
    images: ["https://example.com/empty-room.png"],
  },
  aspectRatio: "16:9",
});
For models that support multiple reference images:
const { image } = await generateImage({
  model: runpod.image("google-nano-banana-edit"),
  prompt: {
    text: "Combine these into an epic band photo",
    images: [
      "https://example.com/drummer.png",
      "https://example.com/guitarist.png",
      "https://example.com/bassist.png",
      "https://example.com/singer.png",
    ],
  },
});

Provider options

Pass model-specific parameters using providerOptions:
const { image } = await generateImage({
  model: runpod.image("black-forest-labs-flux-1-dev"),
  prompt: "A sunset over the ocean",
  providerOptions: {
    runpod: {
      negative_prompt: "blurry, low quality, distorted",
      num_inference_steps: 30,
      guidance: 7.5,
      seed: 42,
      enable_safety_checker: true,
    },
  },
});
| Option | Description |
| --- | --- |
| negative_prompt | Elements to exclude from the image |
| num_inference_steps | Number of denoising steps (higher = more detail) |
| guidance | How closely to follow the prompt (0-10) |
| seed | Seed for reproducible results (-1 for random) |
| enable_safety_checker | Enable content safety filtering |
| maxPollAttempts | Max polling attempts for async generation |
| pollIntervalMillis | Milliseconds between status polls |
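Since image generation runs asynchronously and is polled for completion, the two polling options in the table can be tuned the same way. A sketch reusing the parameters listed above:
const { image } = await generateImage({
  model: runpod.image("black-forest-labs-flux-1-dev"),
  prompt: "A sunset over the ocean",
  providerOptions: {
    runpod: {
      // Poll up to 60 times, 2 seconds apart, before giving up.
      maxPollAttempts: 60,
      pollIntervalMillis: 2000,
    },
  },
});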

Supported models

Text models

| Model ID | Description |
| --- | --- |
| qwen3-32b-awq | Qwen3 32B with AWQ quantization. Good for general text and code generation. |
| gpt-oss-120b | GPT OSS 120B. Supports tool calling. |
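Because gpt-oss-120b supports tool calling, it can be combined with the AI SDK's tools parameter. A minimal sketch, assuming the standard tool helper and zod for the input schema; the getWeather tool and its stub result are hypothetical:
import { runpod } from "@runpod/ai-sdk-provider";
import { generateText, tool } from "ai";
import { z } from "zod";

const { text } = await generateText({
  model: runpod("gpt-oss-120b"),
  prompt: "What is the weather in Berlin right now?",
  tools: {
    // Hypothetical tool: replace execute with a real weather lookup.
    getWeather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperatureC: 18 }),
    }),
  },
  maxSteps: 2, // Let the model call the tool, then produce a final answer.
});

console.log(text);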

Image models

| Model ID | Description |
| --- | --- |
| black-forest-labs-flux-1-dev | Flux Dev. High quality, detailed images. |
| black-forest-labs-flux-1-schnell | Flux Schnell. Fast generation, good for prototyping. |
| google-nano-banana-edit | Nano Banana Edit. Supports multiple reference images. |
| bytedance-seedream-4-0-t2i | Seedream 4.0. Text-to-image with good prompt adherence. |
For a complete list of available models and their parameters, see the model reference.

Example: Chat application

Here’s a complete example of a simple chat application using streaming:
import { runpod } from "@runpod/ai-sdk-provider";
import { streamText } from "ai";
import * as readline from "readline";

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: { role: "user" | "assistant"; content: string }[] = [];

async function chat(userMessage: string) {
  messages.push({ role: "user", content: userMessage });

  const { textStream } = await streamText({
    model: runpod("qwen3-32b-awq"),
    system: "You are a helpful assistant.",
    messages,
  });

  let assistantMessage = "";
  process.stdout.write("\nAssistant: ");

  for await (const chunk of textStream) {
    process.stdout.write(chunk);
    assistantMessage += chunk;
  }

  messages.push({ role: "assistant", content: assistantMessage });
  console.log("\n");
}

function prompt() {
  rl.question("You: ", async (input) => {
    if (input.toLowerCase() === "exit") {
      rl.close();
      return;
    }
    await chat(input);
    prompt();
  });
}

console.log('Chat started. Type "exit" to quit.\n');
prompt();

Next steps