AWS Bedrock
Vercel AI SDK provides a set of utilities to make it easy to use AWS Bedrock's API. In this guide, we'll walk through how to use these utilities to create a chat bot.
Guide: Chat Bot
Create a Next.js app
Create a Next.js application and install ai:
pnpm dlx create-next-app my-ai-app
cd my-ai-app
pnpm install ai
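The route handler below also imports the AWS SDK Bedrock client, which create-next-app does not install for you, so add it as well (assuming you stick with pnpm):

pnpm install @aws-sdk/client-bedrock-runtime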
Add your AWS Credentials to .env
Create a .env file in your project root and add your AWS credentials:
AWS_REGION=YOUR_AWS_REGION
AWS_ACCESS_KEY_ID=YOUR_AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_ACCESS_KEY
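These credentials must belong to an IAM user or role that is allowed to call Bedrock and that has been granted access to the Claude model in the Bedrock console. As a rough sketch, a minimal policy could look something like the following; treat it as an illustration rather than a vetted production policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}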
Create a Route Handler
Create a Next.js Route Handler that uses the Edge Runtime to generate a response to a series of messages via AWS Bedrock's API, and returns the response as a streaming text response.
For this example, we'll use the Anthropic model anthropic.claude-v2 and create a route handler at app/api/chat/route.ts that accepts a POST request with a messages array of chat messages:
import {
  BedrockRuntimeClient,
  InvokeModelWithResponseStreamCommand,
} from '@aws-sdk/client-bedrock-runtime';
import { AWSBedrockAnthropicStream, StreamingTextResponse } from 'ai';
import { experimental_buildAnthropicPrompt } from 'ai/prompts';

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge';

export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json();

  const bedrockClient = new BedrockRuntimeClient({
    region: process.env.AWS_REGION ?? '',
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID ?? '',
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY ?? '',
    },
  });

  // Ask Claude for a streaming chat completion given the prompt
  const bedrockResponse = await bedrockClient.send(
    new InvokeModelWithResponseStreamCommand({
      modelId: 'anthropic.claude-v2',
      contentType: 'application/json',
      accept: 'application/json',
      body: JSON.stringify({
        prompt: experimental_buildAnthropicPrompt(messages),
        max_tokens_to_sample: 300,
      }),
    }),
  );

  // Convert the response into a friendly text-stream
  const stream = AWSBedrockAnthropicStream(bedrockResponse);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
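experimental_buildAnthropicPrompt flattens the chat messages from the request body into the Human/Assistant prompt format that Claude v2 expects. As an illustration only (the exact whitespace the helper emits isn't something to rely on), a single user message ends up roughly like this:

// Illustration: approximate prompt produced for one user message.
// The exact output of experimental_buildAnthropicPrompt may differ slightly.
experimental_buildAnthropicPrompt([{ role: 'user', content: 'Hello!' }]);
// => "\n\nHuman: Hello!\n\nAssistant:"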
Vercel AI SDK provides two utility helpers to make the above seamless: First, we pass the streaming bedrockResponse we receive from AWS Bedrock's API to AWSBedrockAnthropicStream. This utility function decodes/extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to StreamingTextResponse. This is another utility class that extends the normal Node/Edge Runtime Response class with the default headers you probably want (hint: 'Content-Type': 'text/plain; charset=utf-8' is already set for you).
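Before wiring up any UI, you can sanity-check the route handler from the command line. Assuming the dev server is running on localhost:3000, a request like the following should stream plain text back (the message shape mirrors what the useChat hook sends):

curl http://localhost:3000/api/chat \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"Hello!"}]}'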
Wire up the UI
Create a Client component (for example, app/page.tsx) with a form that we'll use to gather the prompt from the user and then display the streamed completion.
By default, the useChat hook will use the POST Route Handler we created above (it defaults to /api/chat). You can override this by passing an api prop to useChat({ api: '...' }).
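For example, if you had placed the route handler at a different path, you could point the hook at it explicitly (the path below is purely illustrative):

// Hypothetical: only needed if your handler is not at the default /api/chat.
const { messages, input, handleInputChange, handleSubmit } = useChat({
  api: '/api/bedrock-chat',
});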
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="mx-auto w-full max-w-md py-24 flex flex-col stretch">
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <label>
          Say something...
          <input
            className="fixed w-full max-w-md bottom-0 border border-gray-300 rounded mb-8 shadow-xl p-2"
            value={input}
            onChange={handleInputChange}
          />
        </label>
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
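That's it. Start the dev server and open the page in your browser to chat with Claude via Bedrock (assuming you kept the default Next.js setup, the app is served at http://localhost:3000):

pnpm dev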