DeepSeek Provider

The DeepSeek provider offers access to powerful language models through the DeepSeek API, including their DeepSeek-V3 model.

API keys can be obtained from the DeepSeek Platform.

Setup

The DeepSeek provider is available via the @ai-sdk/deepseek module. You can install it with:

pnpm add @ai-sdk/deepseek
npm install @ai-sdk/deepseek
yarn add @ai-sdk/deepseek

Provider Instance

You can import the default provider instance deepseek from @ai-sdk/deepseek:

import { deepseek } from '@ai-sdk/deepseek';

For custom configuration, you can import createDeepSeek and create a provider instance with your settings:

import { createDeepSeek } from '@ai-sdk/deepseek';

const deepseek = createDeepSeek({
  apiKey: process.env.DEEPSEEK_API_KEY ?? '',
});

You can use the following optional settings to customize the DeepSeek provider instance:

  • baseURL string

    Use a different URL prefix for API calls. The default prefix is https://api.deepseek.com/v1.

  • apiKey string

    API key that is sent using the Authorization header. It defaults to the DEEPSEEK_API_KEY environment variable.

  • headers Record<string,string>

    Custom headers to include in the requests.

  • fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>

    Custom fetch implementation.
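Putting these options together, here is a sketch of a fully customized provider instance. The base URL, header name, and logging wrapper below are illustrative, not required:

```typescript
import { createDeepSeek } from '@ai-sdk/deepseek';

// All fields are optional; unset fields fall back to the defaults described above.
const deepseek = createDeepSeek({
  baseURL: 'https://api.deepseek.com/v1',
  apiKey: process.env.DEEPSEEK_API_KEY ?? '',
  headers: { 'X-Request-Source': 'docs-example' }, // illustrative custom header
  // Wrap fetch to log each outgoing request, e.g. for debugging.
  fetch: async (input, init) => {
    console.log('Calling DeepSeek:', input.toString());
    return fetch(input, init);
  },
});
```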

Language Models

You can create language models using a provider instance:

import { deepseek } from '@ai-sdk/deepseek';
import { generateText } from 'ai';

const { text } = await generateText({
  model: deepseek('deepseek-chat'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

DeepSeek language models can be used in the streamText and streamUI functions (see AI SDK Core and AI SDK RSC).
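For example, streaming works much like generateText. A minimal sketch (the prompt is illustrative):

```typescript
import { deepseek } from '@ai-sdk/deepseek';
import { streamText } from 'ai';

// Stream tokens as they are generated instead of waiting for the full response.
const { textStream } = await streamText({
  model: deepseek('deepseek-chat'),
  prompt: 'Summarize the benefits of context caching in two sentences.',
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```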

Cache Token Usage

DeepSeek provides context caching on disk, which can significantly reduce token costs for repeated content. You can access the cache hit/miss metrics through the providerMetadata property in the response:

import { deepseek } from '@ai-sdk/deepseek';
import { generateText } from 'ai';

const result = await generateText({
  model: deepseek('deepseek-chat'),
  prompt: 'Your prompt here',
});
console.log(result.providerMetadata);
// Example output: { deepseek: { promptCacheHitTokens: 1856, promptCacheMissTokens: 5 } }

The metrics include:

  • promptCacheHitTokens: Number of input tokens that were served from the cache
  • promptCacheMissTokens: Number of input tokens that were not found in the cache

For more details about DeepSeek's caching system, see the DeepSeek caching documentation.
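Since cached input tokens are billed at a lower rate, it can be useful to track what fraction of a prompt was served from the cache. A small sketch, using a hypothetical helper (not part of the SDK) over the providerMetadata shape shown above:

```typescript
// Shape of the DeepSeek entry in providerMetadata, as shown in the example above.
type DeepSeekCacheMetadata = {
  deepseek?: {
    promptCacheHitTokens?: number;
    promptCacheMissTokens?: number;
  };
};

// Hypothetical helper: fraction of prompt tokens served from the cache.
function cacheHitRatio(metadata: DeepSeekCacheMetadata): number {
  const hits = metadata.deepseek?.promptCacheHitTokens ?? 0;
  const misses = metadata.deepseek?.promptCacheMissTokens ?? 0;
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

// Using the example output from above:
console.log(
  cacheHitRatio({
    deepseek: { promptCacheHitTokens: 1856, promptCacheMissTokens: 5 },
  }),
); // → ~0.997
```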

Model Capabilities

Model               Text Generation   Object Generation   Image Input   Tool Usage   Tool Streaming
deepseek-chat       ✓                 ✓                   ✗             ✓            ✓
deepseek-reasoner   ✓                 ✗                   ✗             ✗            ✗

Please see the DeepSeek docs for a full list of available models. You can also pass any available provider model ID as a string if needed.