The `@runpod/ai-sdk-provider` package integrates Runpod Public Endpoints with the Vercel AI SDK. This gives you a streamlined, type-safe interface for text generation, streaming, and image generation in JavaScript and TypeScript projects.
The Vercel AI SDK is a popular open-source library for building AI applications. By using the Runpod provider, you can access Runpod’s Public Endpoints using the same patterns and APIs you’d use with other AI providers like OpenAI or Anthropic.
Why use the Vercel AI SDK?
- Unified interface: Use the same `generateText`, `streamText`, and `generateImage` functions regardless of which AI provider you’re using.
- Type safety: Full TypeScript support with typed responses and parameters.
- Streaming built-in: First-class support for streaming text responses.
- Framework integrations: Works seamlessly with Next.js, React, Svelte, and other frameworks.
- Provider switching: Easily switch between Runpod and other providers without rewriting your code.
Installation
Install the Runpod provider alongside the Vercel AI SDK (the `ai` package), for example with `npm install @runpod/ai-sdk-provider ai`.

Configuration
Default configuration
The provider reads your API key from the `RUNPOD_API_KEY` environment variable by default (set it in your `.env` file). Import the `runpod` instance and start using it immediately.
Custom configuration
For more control, use `createRunpod` to create a custom provider instance:
| Option | Description | Default |
|---|---|---|
| `apiKey` | Your Runpod API key | `RUNPOD_API_KEY` environment variable |
| `baseURL` | Base URL for API requests | `https://api.runpod.ai/v2` |
| `headers` | Custom HTTP headers to include with requests | `{}` |
Text generation
Basic text generation
Use `generateText` to generate text from a prompt:
The result includes:
- `text`: The generated text
- `finishReason`: Why generation stopped (`stop`, `length`, etc.)
- `usage`: Token counts (`promptTokens`, `completionTokens`, `totalTokens`)
Chat conversations
For multi-turn conversations, pass a `messages` array instead of a prompt:
Generation parameters
Control the generation behavior with standard AI SDK call settings such as `temperature`, `maxTokens`, and `topP`.

Streaming
For real-time output (useful for chat interfaces), use `streamText`:
Streaming with callbacks
You can also handle streaming events with callbacks such as `onChunk` and `onFinish` on the `streamText` call.

Image generation
Text-to-image
Generate images using models like Flux. The returned `image` object exposes:
- `image.uint8Array`: Binary image data
- `image.base64`: Base64-encoded image
- `image.mimeType`: Image MIME type (e.g., `image/png`)
Image editing
Edit existing images by providing reference images to an edit-capable model such as `google-nano-banana-edit`.

Provider options
Pass model-specific parameters using `providerOptions`:
| Option | Description |
|---|---|
| `negative_prompt` | Elements to exclude from the image |
| `num_inference_steps` | Number of denoising steps (higher = more detail) |
| `guidance` | How closely to follow the prompt (0-10) |
| `seed` | Seed for reproducible results (-1 for random) |
| `enable_safety_checker` | Enable content safety filtering |
| `maxPollAttempts` | Maximum polling attempts for async generation |
| `pollIntervalMillis` | Milliseconds between status polls |
Supported models
Text models
| Model ID | Description |
|---|---|
| `qwen3-32b-awq` | Qwen3 32B with AWQ quantization. Good for general text and code generation. |
| `gpt-oss-120b` | GPT OSS 120B. Supports tool calling. |
Image models
| Model ID | Description |
|---|---|
| `black-forest-labs-flux-1-dev` | Flux Dev. High-quality, detailed images. |
| `black-forest-labs-flux-1-schnell` | Flux Schnell. Fast generation, good for prototyping. |
| `google-nano-banana-edit` | Nano Banana Edit. Supports multiple reference images. |
| `bytedance-seedream-4-0-t2i` | Seedream 4.0. Text-to-image with good prompt adherence. |
Example: Chat application
The pieces above combine into a simple chat application: accumulate the conversation in a `messages` array and stream each assistant reply with `streamText`.

Next steps
- Model reference: View all available models and their parameters.
- Make API requests: Learn about the REST API for lower-level control.
- @runpod/ai-sdk-provider on GitHub: View the source code and contribute.
- Vercel AI SDK documentation: Learn more about the AI SDK.