# Anthropic Provider

The Anthropic provider contains language model support for the Anthropic Messages API.
## Setup

The Anthropic provider is available in the `@ai-sdk/anthropic` module. You can install it with:

```bash
pnpm add @ai-sdk/anthropic
```
## Provider Instance

You can import the default provider instance `anthropic` from `@ai-sdk/anthropic`:

```ts
import { anthropic } from '@ai-sdk/anthropic';
```
If you need a customized setup, you can import `createAnthropic` from `@ai-sdk/anthropic` and create a provider instance with your settings:

```ts
import { createAnthropic } from '@ai-sdk/anthropic';

const anthropic = createAnthropic({
  // custom settings
});
```
You can use the following optional settings to customize the Anthropic provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `https://api.anthropic.com/v1`.

- **apiKey** _string_

  API key that is sent using the `x-api-key` header.
  It defaults to the `ANTHROPIC_API_KEY` environment variable.

- **headers** _Record&lt;string,string&gt;_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_

  Custom fetch implementation. Defaults to the global `fetch` function.
  You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
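As a sketch of the `fetch` setting, the wrapper below logs each outgoing request URL before delegating to an underlying fetch implementation. `withRequestLog` is a hypothetical helper, not part of the SDK:

```ts
// Hypothetical middleware for the `fetch` setting: wraps a base fetch
// implementation and records each outgoing request URL before delegating.
function withRequestLog(base: typeof fetch, log: string[]): typeof fetch {
  return async (input, init) => {
    // Request objects carry their URL on `.url`; strings and URLs stringify.
    log.push(input instanceof Request ? input.url : String(input));
    return base(input, init);
  };
}

// Usage sketch (assumes ANTHROPIC_API_KEY is set in the environment):
// const log: string[] = [];
// const anthropic = createAnthropic({ fetch: withRequestLog(fetch, log) });
```

Because the wrapper preserves the `fetch` signature, it can be passed directly to `createAnthropic` or composed with further wrappers.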
## Language Models

You can create models that call the Anthropic Messages API using the provider instance.
The first argument is the model id, e.g. `claude-3-haiku-20240307`.
Some models have multi-modal capabilities.

```ts
const model = anthropic('claude-3-haiku-20240307');
```
The following optional settings are available for Anthropic models:

- **cacheControl** _boolean_

  Enable the Anthropic cache control beta.
  You can then use provider metadata to set cache control breakpoints (see the Cache Control example below).
## Example: Generate Text

You can use Anthropic language models to generate text with the `generateText` function:

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const { text } = await generateText({
  model: anthropic('claude-3-haiku-20240307'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
Anthropic language models can also be used in the `streamText`, `generateObject`, `streamObject`, and `streamUI` functions (see AI SDK Core and AI SDK RSC).
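For instance, streaming text could be sketched as follows (the prompt is illustrative, and `ANTHROPIC_API_KEY` must be set in the environment):

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';

// Stream the completion chunk by chunk instead of waiting for the full text.
const result = await streamText({
  model: anthropic('claude-3-haiku-20240307'),
  prompt: 'Write a haiku about recursion.',
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```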
## Example: Cache Control

You can enable the cache control beta by setting the `cacheControl` option to `true` when creating the model instance.
In the messages and message parts, you can then use the `experimental_providerMetadata` property to set cache control breakpoints.
You need to set the `anthropic` property in the `experimental_providerMetadata` object to `{ cacheControl: { type: 'ephemeral' } }` to set a cache control breakpoint.
The cache creation input tokens are then returned in the `experimental_providerMetadata` object for `generateText` and `generateObject`, again under the `anthropic` property.
When you use `streamText` or `streamObject`, the response contains a promise that resolves to the metadata. Alternatively you can receive it in the `onFinish` callback.
```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const errorMessage = '... long error message ...';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620', {
    cacheControl: true,
  }),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'You are a JavaScript expert.' },
        {
          type: 'text',
          text: `Error message: ${errorMessage}`,
          experimental_providerMetadata: {
            anthropic: { cacheControl: { type: 'ephemeral' } },
          },
        },
        { type: 'text', text: 'Explain the error message.' },
      ],
    },
  ],
});

console.log(result.text);
console.log(result.experimental_providerMetadata?.anthropic);
// e.g. { cacheCreationInputTokens: 2118, cacheReadInputTokens: 0 }
```
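When streaming, the cache metadata can be read from the promise on the result or from the `onFinish` callback, as described above. A sketch (the prompt is illustrative):

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';

const result = await streamText({
  model: anthropic('claude-3-5-sonnet-20240620', {
    cacheControl: true,
  }),
  prompt: 'Explain closures in JavaScript.',
  onFinish({ experimental_providerMetadata }) {
    // Called once the stream has finished; cache token counts
    // arrive under the `anthropic` key.
    console.log(experimental_providerMetadata?.anthropic);
  },
});

// Consume the stream; the metadata promise resolves after it completes.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

console.log((await result.experimental_providerMetadata)?.anthropic);
```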
You can also use cache control on system messages by providing multiple system messages at the head of your messages array:

```ts
const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620', {
    cacheControl: true,
  }),
  messages: [
    {
      role: 'system',
      content: 'Cached system message part',
      experimental_providerMetadata: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    {
      role: 'system',
      content: 'Uncached system message part',
    },
    {
      role: 'user',
      content: 'User prompt',
    },
  ],
});
```
## Model Capabilities

See also the Anthropic Model Comparison.

| Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
| --- | --- | --- | --- | --- |
| `claude-3-5-sonnet-20240620` | ✓ | ✓ | ✓ | ✓ |
| `claude-3-opus-20240229` | ✓ | ✓ | ✓ | ✓ |
| `claude-3-sonnet-20240229` | ✓ | ✓ | ✓ | ✓ |
| `claude-3-haiku-20240307` | ✓ | ✓ | ✓ | ✓ |

The table above lists popular models. You can also pass any available provider model ID as a string if needed.