Amazon Bedrock Provider

The Amazon Bedrock provider for the AI SDK contains language model support for the Amazon Bedrock APIs.

Setup

The Bedrock provider is available in the @ai-sdk/amazon-bedrock module. You can install it with your preferred package manager:

pnpm add @ai-sdk/amazon-bedrock
npm install @ai-sdk/amazon-bedrock
yarn add @ai-sdk/amazon-bedrock

Prerequisites

Access to Amazon Bedrock foundation models isn't granted by default. In order to gain access to a foundation model, an IAM user with sufficient permissions needs to request access to it through the console. Once access is provided to a model, it is available for all users in the account.

See the Model Access Docs for more information.

Authentication

Step 1: Creating AWS Access Key and Secret Key

To get started, you'll need to create an AWS access key and secret key. Here's how:

Login to AWS Management Console

Create an IAM User

  • Navigate to the IAM dashboard and click on "Users" in the left-hand navigation menu.
  • Click on "Create user" and fill in the required details to create a new IAM user.
  • Make sure to select "Programmatic access" as the access type.
  • The user account needs the AmazonBedrockFullAccess policy attached to it.

Create Access Key

  • Click on the "Security credentials" tab and then click on "Create access key".
  • Click "Create access key" to generate a new access key pair.
  • Download the .csv file containing the access key ID and secret access key.

Step 2: Configuring the Access Key and Secret Key

Within your project, add a .env file if you don't already have one. This file will be used to set the access key and secret key as environment variables. Add the following lines to the .env file:

AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
AWS_REGION=YOUR_REGION

Many frameworks such as Next.js load the .env file automatically. If you're using a different framework, you may need to load the .env file manually using a package like dotenv.
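A minimal sketch of manual loading, assuming you have added the dotenv package to your project:

// Populate process.env from the .env file before the provider reads it.
import 'dotenv/config';

// AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are now set on
// process.env and will be picked up by the Bedrock provider.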

Remember to replace YOUR_ACCESS_KEY_ID, YOUR_SECRET_ACCESS_KEY, and YOUR_REGION with the actual values from your AWS account.

Provider Instance

You can import the default provider instance bedrock from @ai-sdk/amazon-bedrock:

import { bedrock } from '@ai-sdk/amazon-bedrock';

If you need a customized setup, you can import createAmazonBedrock from @ai-sdk/amazon-bedrock and create a provider instance with your settings:

import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

const bedrock = createAmazonBedrock({
  region: 'us-east-1',
  accessKeyId: 'xxxxxxxxx',
  secretAccessKey: 'xxxxxxxxx',
  sessionToken: 'xxxxxxxxx',
});

The credentials settings fall back to the environment variable defaults described below. These may be set by your serverless environment without your awareness, which can lead to merged or conflicting credential values and provider errors around failed authentication. If you're experiencing issues, make sure you are explicitly specifying all settings (even if undefined) so that no defaults are picked up.
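As a minimal sketch of this, assuming your credentials live in the standard AWS environment variables, you might pin every setting explicitly:

import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

// Pass every setting explicitly so no environment variable defaults
// (e.g. credentials injected by a serverless runtime) get merged in.
const bedrock = createAmazonBedrock({
  region: process.env.AWS_REGION,
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  sessionToken: undefined, // explicitly unset when not using temporary credentials
});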

You can use the following optional settings to customize the Amazon Bedrock provider instance:

  • region string

    The AWS region that you want to use for the API calls. It uses the AWS_REGION environment variable by default.

  • accessKeyId string

    The AWS access key ID that you want to use for the API calls. It uses the AWS_ACCESS_KEY_ID environment variable by default.

  • secretAccessKey string

    The AWS secret access key that you want to use for the API calls. It uses the AWS_SECRET_ACCESS_KEY environment variable by default.

  • sessionToken string

    Optional. The AWS session token that you want to use for the API calls. It uses the AWS_SESSION_TOKEN environment variable by default.

Language Models

You can create models that call the Bedrock API using the provider instance. The first argument is the model id, e.g. meta.llama3-70b-instruct-v1:0.

const model = bedrock('meta.llama3-70b-instruct-v1:0');

Amazon Bedrock models also support some model specific settings that are not part of the standard call settings. You can pass them as an options argument:

const model = bedrock('anthropic.claude-3-sonnet-20240229-v1:0', {
  additionalModelRequestFields: { top_k: 350 },
});

Documentation for additional settings based on the selected model can be found within the Amazon Bedrock Inference Parameter Documentation.

You can use Amazon Bedrock language models to generate text with the generateText function:

import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const { text } = await generateText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

Amazon Bedrock language models can also be used in the streamText function (see AI SDK Core).
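A minimal streaming sketch (the prompt here is just illustrative) looks like this:

import { bedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';

const result = streamText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Invent a new holiday and describe its traditions.',
});

// Consume the text as it is generated.
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}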

File Inputs

Amazon Bedrock supports file inputs in combination with specific models, e.g. anthropic.claude-3-haiku-20240307-v1:0.

The Amazon Bedrock provider supports file inputs, e.g. PDF files.

import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';
import fs from 'node:fs';

const result = await generateText({
  model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe the pdf in detail.' },
        {
          type: 'file',
          data: fs.readFileSync('./data/ai.pdf'),
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});

Guardrails

You can use the bedrock provider options to utilize Amazon Bedrock Guardrails:

const result = await generateText({
  model: bedrock('anthropic.claude-3-sonnet-20240229-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
  providerOptions: {
    bedrock: {
      guardrailConfig: {
        guardrailIdentifier: '1abcd2ef34gh',
        guardrailVersion: '1',
        trace: 'enabled' as const,
        streamProcessingMode: 'async',
      },
    },
  },
});

Tracing information will be returned in the provider metadata if you have tracing enabled.

if (result.providerMetadata?.bedrock?.trace) {
  // ...
}

See the Amazon Bedrock Guardrails documentation for more information.

Model Capabilities

Model | Image Input | Object Generation | Tool Usage | Tool Streaming
amazon.titan-tg1-large
amazon.titan-text-express-v1
anthropic.claude-3-5-sonnet-20241022-v2:0
anthropic.claude-3-5-sonnet-20240620-v1:0
anthropic.claude-3-5-haiku-20241022-v1:0
anthropic.claude-3-opus-20240229-v1:0
anthropic.claude-3-sonnet-20240229-v1:0
anthropic.claude-3-haiku-20240307-v1:0
anthropic.claude-v2:1
cohere.command-r-v1:0
cohere.command-r-plus-v1:0
meta.llama2-13b-chat-v1
meta.llama2-70b-chat-v1
meta.llama3-8b-instruct-v1:0
meta.llama3-70b-instruct-v1:0
meta.llama3-1-8b-instruct-v1:0
meta.llama3-1-70b-instruct-v1:0
meta.llama3-1-405b-instruct-v1:0
meta.llama3-2-1b-instruct-v1:0
meta.llama3-2-3b-instruct-v1:0
meta.llama3-2-11b-instruct-v1:0
meta.llama3-2-90b-instruct-v1:0
mistral.mistral-7b-instruct-v0:2
mistral.mixtral-8x7b-instruct-v0:1
mistral.mistral-large-2402-v1:0
mistral.mistral-small-2402-v1:0

The table above lists popular models. Please see the Amazon Bedrock docs for a full list of available models. You can also pass any available provider model ID as a string if needed.

Embedding Models

You can create models that call the Bedrock API using the .embedding() factory method.

const model = bedrock.embedding('amazon.titan-embed-text-v1');
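As a sketch of how such a model can then be used, here it is passed to the AI SDK's embed function (the input value is just an illustrative string):

import { bedrock } from '@ai-sdk/amazon-bedrock';
import { embed } from 'ai';

const { embedding } = await embed({
  model: bedrock.embedding('amazon.titan-embed-text-v1'),
  value: 'sunny day at the beach',
});
// `embedding` is a numeric vector (1536 dimensions for this model).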

The Bedrock Titan embedding model amazon.titan-embed-text-v2:0 supports several additional settings. You can pass them as an options argument:

const model = bedrock.embedding('amazon.titan-embed-text-v2:0', {
  dimensions: 512, // optional, number of dimensions for the embedding
  normalize: true, // optional, normalize the output embeddings
});

The following optional settings are available for Bedrock Titan embedding models:

  • dimensions number

    The number of dimensions the output embeddings should have. The following values are accepted: 1024 (default), 512, 256.

  • normalize boolean

    Flag indicating whether or not to normalize the output embeddings. Defaults to true.

Model Capabilities

Model | Default Dimensions | Custom Dimensions
amazon.titan-embed-text-v1 | 1536 | No
amazon.titan-embed-text-v2:0 | 1024 | Yes (256, 512, or 1024)

Response Headers

The Amazon Bedrock provider will return the response headers associated with network requests made to the Bedrock servers.

import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const result = await generateText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

console.log(result.response.headers);

Below is sample output where you can see the x-amzn-requestid header. This can be useful for correlating Bedrock API calls with requests made by the AI SDK:

{
  connection: 'keep-alive',
  'content-length': '2399',
  'content-type': 'application/json',
  date: 'Fri, 07 Feb 2025 04:28:30 GMT',
  'x-amzn-requestid': 'c9f3ace4-dd5d-49e5-9807-39aedfa47c8e'
}

This information is also available with streamText:

import { bedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';

const result = streamText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

console.log('Response headers:', (await result.response).headers);

With sample output as:

{
  connection: 'keep-alive',
  'content-type': 'application/vnd.amazon.eventstream',
  date: 'Fri, 07 Feb 2025 04:33:37 GMT',
  'transfer-encoding': 'chunked',
  'x-amzn-requestid': 'a976e3fc-0e45-4241-9954-b9bdd80ab407'
}

Migrating to @ai-sdk/amazon-bedrock 2.x

The Amazon Bedrock provider was rewritten in version 2.x to remove the dependency on the @aws-sdk/client-bedrock-runtime package.

The bedrockOptions provider setting previously available has been removed. If you were using the bedrockOptions object, you should now use the region, accessKeyId, secretAccessKey, and sessionToken settings directly instead.
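As a sketch of the change, assuming your 1.x setup passed the AWS SDK client configuration through bedrockOptions (the credential values below are placeholders):

import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

// Before (1.x, assumed shape): settings nested under bedrockOptions.
// const bedrock = createAmazonBedrock({
//   bedrockOptions: {
//     region: 'us-east-1',
//     credentials: { accessKeyId: 'xxxxxxxxx', secretAccessKey: 'xxxxxxxxx' },
//   },
// });

// After (2.x): pass the settings directly.
const bedrock = createAmazonBedrock({
  region: 'us-east-1',
  accessKeyId: 'xxxxxxxxx',
  secretAccessKey: 'xxxxxxxxx',
  sessionToken: undefined, // set explicitly even if unused
});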

Note that you may need to set all of these explicitly, e.g. even if you're not using sessionToken, set it to undefined. If you're running in a serverless environment, the containing environment may set default environment variables that the Amazon Bedrock provider will pick up, and these could conflict with the ones you intend to use.