---
title: AI SDK by Vercel
description: Welcome to the AI SDK documentation!
---
# AI SDK
The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more.
## Why use the AI SDK?
Integrating large language models (LLMs) into applications is complicated and heavily dependent on the specific model provider you use.
- **[AI SDK Core](/docs/ai-sdk-core):** A unified API for generating text, structured objects, and tool calls with LLMs.
- **[AI SDK UI](/docs/ai-sdk-ui):** A set of framework-agnostic hooks for quickly building chat and generative user interfaces.
## Model Providers
The AI SDK supports [multiple model providers](/providers).
## Templates
We've built some [templates](https://vercel.com/templates?type=ai) that include AI SDK integrations for different use cases, providers, and frameworks. You can use these templates to get started with your AI-powered application.
### Starter Kits
### Feature Exploration
### Frameworks
### Generative UI
### Security
## Join our Community
If you have questions about anything related to the AI SDK, you're always welcome to ask our community on [GitHub Discussions](https://github.com/vercel/ai/discussions).
## `llms.txt`
You can access the entire AI SDK documentation in Markdown format at [sdk.vercel.ai/llms.txt](/llms.txt). This can be used to ask any LLM (assuming it has a big enough context window) questions about the AI SDK based on the most up-to-date documentation.
### Example Usage
For instance, to prompt an LLM with questions about the AI SDK:
1. Copy the documentation contents from [sdk.vercel.ai/llms.txt](/llms.txt)
2. Use the following prompt format:
```prompt
Documentation:
{paste documentation here}
---
Based on the above documentation, answer the following:
{your question}
```
---
title: Overview
description: An overview of foundational concepts critical to understanding the AI SDK
---
# Overview
This page is a beginner-friendly introduction to high-level artificial
intelligence (AI) concepts. To dive right into implementing the AI SDK, feel
free to skip ahead to our [quickstarts](/docs/getting-started) or learn about
our [supported models and providers](/docs/foundations/providers-and-models).
The AI SDK standardizes integrating artificial intelligence (AI) models across [supported providers](/docs/foundations/providers-and-models). This lets developers focus on building great AI applications instead of wasting time on provider-specific technical details.
For example, here’s how you can generate text with various models using the AI SDK:
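A minimal sketch with the OpenAI provider (the model and prompt here are illustrative; swapping in another supported provider's model is a one-line change):

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// The same call shape works with every supported provider.
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Why is the sky blue?',
});

console.log(text);
```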
To effectively leverage the AI SDK, it helps to familiarize yourself with the following concepts:
## Generative Artificial Intelligence
**Generative artificial intelligence** refers to models that predict and generate various types of outputs (such as text, images, or audio) based on what’s statistically likely, pulling from patterns they’ve learned from their training data. For example:
- Given a photo, a generative model can generate a caption.
- Given an audio file, a generative model can generate a transcription.
- Given a text description, a generative model can generate an image.
## Large Language Models
A **large language model (LLM)** is a subset of generative models focused primarily on **text**. An LLM takes a sequence of words as input and aims to predict the most likely sequence to follow. It assigns probabilities to potential next sequences and then selects one. The model continues to generate sequences until it meets a specified stopping criterion.
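As a toy sketch of this loop (the "model" and its probabilities below are entirely made up for illustration):

```ts
// A toy next-token loop: assign probabilities, sample one continuation,
// repeat until a stopping criterion is met. Real LLMs use neural networks.
type Dist = Record<string, number>;

const toyModel = (context: string): Dist =>
  context.endsWith('France is') ? { ' Paris': 0.92, ' a': 0.05, ' Lyon': 0.03 } : { '.': 1 };

function sample(dist: Dist): string {
  let r = Math.random();
  for (const [token, p] of Object.entries(dist)) {
    if ((r -= p) <= 0) return token;
  }
  return Object.keys(dist)[0];
}

let text = 'The capital of France is';
while (!text.endsWith('.')) {
  text += sample(toyModel(text)); // stopping criterion: the toy model emits a period
}
console.log(text);
```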
LLMs learn by training on massive collections of written text, which means they will be better suited to some use cases than others. For example, a model trained on GitHub data would understand the probabilities of sequences in source code particularly well.
However, it's crucial to understand LLMs' limitations. When asked about lesser-known or absent information, like the birthday of a personal relative, LLMs might "hallucinate" or make up information. It's essential to consider how well-represented the information you need is in the model.
## Embedding Models
An **embedding model** is used to convert complex data (like words or images) into a dense vector (a list of numbers) representation, known as an embedding. Unlike generative models, embedding models do not generate new text or data. Instead, they provide representations of semantic and syntactic relationships between entities that can be used as input for other models or other natural language processing tasks.
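As a rough sketch, here is how you could generate an embedding with the AI SDK's `embed` function (the embedding model name is illustrative):

```ts
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

// Convert a sentence into a dense vector; semantically similar inputs
// map to nearby vectors, which is useful for search and retrieval.
const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'sunny day at the beach',
});

console.log(embedding.length); // dimensionality of the vector
```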
In the next section, you will learn about the difference between model providers and models, and which ones are available in the AI SDK.
---
title: Providers and Models
description: Learn about the providers and models available in the AI SDK.
---
# Providers and Models
Companies such as OpenAI and Anthropic (providers) offer access to a range of large language models (LLMs) with differing strengths and capabilities through their own APIs.
Each provider typically has its own unique method for interfacing with their models, complicating the process of switching providers and increasing the risk of vendor lock-in.
To solve these challenges, AI SDK Core offers a standardized approach to interacting with LLMs through a [language model specification](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1) that abstracts differences between providers. This unified interface allows you to switch between providers with ease while using the same API for all providers.
Here is an overview of the AI SDK Provider Architecture:
## AI SDK Providers
The AI SDK comes with several providers that you can use to interact with different language models:
- [OpenAI Provider](/providers/ai-sdk-providers/openai) (`@ai-sdk/openai`)
- [Azure OpenAI Provider](/providers/ai-sdk-providers/azure) (`@ai-sdk/azure`)
- [Anthropic Provider](/providers/ai-sdk-providers/anthropic) (`@ai-sdk/anthropic`)
- [Amazon Bedrock Provider](/providers/ai-sdk-providers/amazon-bedrock) (`@ai-sdk/amazon-bedrock`)
- [Google Generative AI Provider](/providers/ai-sdk-providers/google-generative-ai) (`@ai-sdk/google`)
- [Google Vertex Provider](/providers/ai-sdk-providers/google-vertex) (`@ai-sdk/google-vertex`)
- [Mistral Provider](/providers/ai-sdk-providers/mistral) (`@ai-sdk/mistral`)
- [xAI Grok Provider](/providers/ai-sdk-providers/xai) (`@ai-sdk/xai`)
- [Together.ai Provider](/providers/ai-sdk-providers/togetherai) (`@ai-sdk/togetherai`)
- [Cohere Provider](/providers/ai-sdk-providers/cohere) (`@ai-sdk/cohere`)
- [Groq](/providers/ai-sdk-providers/groq) (`@ai-sdk/groq`)
You can also use the OpenAI provider with OpenAI-compatible APIs:
- [Perplexity](/providers/ai-sdk-providers/perplexity)
- [Fireworks](/providers/ai-sdk-providers/fireworks)
- [LM Studio](/providers/openai-compatible-providers/lmstudio)
- [Baseten](/providers/openai-compatible-providers/baseten)
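For example, here is a sketch of pointing the OpenAI provider at a local LM Studio server (the base URL, API key, and model name are assumptions about your local setup):

```ts
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

// LM Studio exposes an OpenAI-compatible API, by default on port 1234.
const lmstudio = createOpenAI({
  baseURL: 'http://localhost:1234/v1',
  apiKey: 'lm-studio', // local servers typically ignore the key
});

const { text } = await generateText({
  model: lmstudio('llama-3.2-1b'), // whichever model is loaded locally
  prompt: 'Hello!',
});
```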
Our [language model specification](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1) is published as an open-source package, which you can use to create [custom providers](/providers/community-providers/custom-providers).
The open-source community has created the following providers:
- [Ollama Provider](/providers/community-providers/ollama) (`ollama-ai-provider`)
- [ChromeAI Provider](/providers/community-providers/chrome-ai) (`chrome-ai`)
- [AnthropicVertex Provider](/providers/community-providers/anthropic-vertex-ai) (`anthropic-vertex-ai`)
- [FriendliAI Provider](/providers/community-providers/friendliai) (`@friendliai/ai-provider`)
- [Portkey Provider](/providers/community-providers/portkey) (`@portkey-ai/vercel-provider`)
- [Cloudflare Workers AI Provider](/providers/community-providers/cloudflare-workers-ai) (`workers-ai-provider`)
- [Crosshatch Provider](/providers/community-providers/crosshatch) (`@crosshatch/ai-provider`)
- [Mixedbread Provider](/providers/community-providers/mixedbread) (`mixedbread-ai-provider`)
- [Voyage AI Provider](/providers/community-providers/voyage-ai) (`voyage-ai-provider`)
- [LLamaCpp Provider](/providers/community-providers/llama-cpp) (`llamacpp-ai-provider`)
## Model Capabilities
The AI providers support different language models with various capabilities.
Here are the capabilities of popular models:
| Provider | Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
| ------------------------------------------------------------------------ | ---------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4o` | | | | |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4o-mini` | | | | |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4-turbo` | | | | |
| [OpenAI](/providers/ai-sdk-providers/openai) | `gpt-4` | | | | |
| [OpenAI](/providers/ai-sdk-providers/openai) | `o1-preview` | | | | |
| [OpenAI](/providers/ai-sdk-providers/openai) | `o1-mini` | | | | |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3-5-sonnet-20241022` | | | | |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3-5-sonnet-20240620` | | | | |
| [Anthropic](/providers/ai-sdk-providers/anthropic) | `claude-3-5-haiku-20241022` | | | | |
| [Mistral](/providers/ai-sdk-providers/mistral) | `pixtral-large-latest` | | | | |
| [Mistral](/providers/ai-sdk-providers/mistral) | `mistral-large-latest` | | | | |
| [Mistral](/providers/ai-sdk-providers/mistral) | `mistral-small-latest` | | | | |
| [Mistral](/providers/ai-sdk-providers/mistral) | `pixtral-12b-2409` | | | | |
| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-2.0-flash-exp` | | | | |
| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-1.5-flash` | | | | |
| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-1.5-pro` | | | | |
| [Google Vertex](/providers/ai-sdk-providers/google-vertex) | `gemini-1.5-flash` | | | | |
| [Google Vertex](/providers/ai-sdk-providers/google-vertex) | `gemini-1.5-pro` | | | | |
| [xAI Grok](/providers/ai-sdk-providers/xai) | `grok-beta` | | | | |
| [xAI Grok](/providers/ai-sdk-providers/xai) | `grok-vision-beta` | | | | |
| [Groq](/providers/ai-sdk-providers/groq) | `llama-3.3-70b-versatile` | | | | |
| [Groq](/providers/ai-sdk-providers/groq) | `llama-3.1-8b-instant` | | | | |
| [Groq](/providers/ai-sdk-providers/groq) | `mixtral-8x7b-32768` | | | | |
| [Groq](/providers/ai-sdk-providers/groq) | `gemma2-9b-it` | | | | |
This table is not exhaustive. Additional models can be found in the provider
documentation pages and on the provider websites.
---
title: Prompts
description: Learn about the Prompt structure used in the AI SDK.
---
# Prompts
Prompts are instructions that you give a [large language model (LLM)](/docs/foundations/overview#large-language-models) to tell it what to do.
It's like when you ask someone for directions; the clearer your question, the better the directions you'll get.
Many LLM providers offer complex interfaces for specifying prompts. They involve different roles and message types.
While these interfaces are powerful, they can be hard to use and understand.
To simplify prompting, the AI SDK supports text, message, and system prompts.
## Text Prompts
Text prompts are strings.
They are ideal for simple generation use cases,
e.g. repeatedly generating content for variants of the same prompt text.
You can set text prompts using the `prompt` property made available by AI SDK functions like [`streamText`](/docs/reference/ai-sdk-core/stream-text) or [`generateObject`](/docs/reference/ai-sdk-core/generate-object).
You can structure the text in any way and inject variables, e.g. using a template literal.
```ts highlight="3"
const result = await generateText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
});
```
You can also use template literals to provide dynamic data to your prompt.
```ts highlight="3-5"
const result = await generateText({
model: yourModel,
prompt:
`I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
`Please suggest the best tourist activities for me to do.`,
});
```
## System Prompts
System prompts are the initial set of instructions given to models that help guide and constrain the models' behaviors and responses.
You can set system prompts using the `system` property.
System prompts work with both the `prompt` and the `messages` properties.
```ts highlight="3-6"
const result = await generateText({
model: yourModel,
system:
`You help planning travel itineraries. ` +
`Respond to the users' request with a list ` +
`of the best stops to make in their destination.`,
prompt:
`I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
`Please suggest the best tourist activities for me to do.`,
});
```
When you use a message prompt, you can also use system messages instead of a
system prompt.
## Message Prompts
A message prompt is an array of user, assistant, and tool messages.
They are great for chat interfaces and more complex, multi-modal prompts.
You can use the `messages` property to set message prompts.
Each message has a `role` and a `content` property. The content can either be text (for user and assistant messages), or an array of relevant parts (data) for that message type.
```ts highlight="3-7"
const result = await streamUI({
model: yourModel,
messages: [
{ role: 'user', content: 'Hi!' },
{ role: 'assistant', content: 'Hello, how can I help?' },
{ role: 'user', content: 'Where can I buy the best Currywurst in Berlin?' },
],
});
```
Instead of sending a text in the `content` property, you can send an array of parts that includes a mix of text and other content parts.
Not all language models support all message and content types. For example,
some models might not be capable of handling multi-modal inputs or tool
messages. [Learn more about the capabilities of select
models](./providers-and-models#model-capabilities).
### User Messages
#### Text Parts
Text content is the most common type of content. It is a string that is passed to the model.
If you only need to send text content in a message, the `content` property can be a string,
but you can also use the `parts` property to send multiple parts of content.
```ts highlight="7"
const result = await generateText({
model: yourModel,
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'Where can I buy the best Currywurst in Berlin?',
},
],
},
],
});
```
#### Image Parts
User messages can include image parts. An image can be one of the following:
- base64-encoded image:
- `string` with base-64 encoded content
- data URL `string`, e.g. `data:image/png;base64,...`
- binary image:
- `ArrayBuffer`
- `Uint8Array`
- `Buffer`
- URL:
- http(s) URL `string`, e.g. `https://example.com/image.png`
- `URL` object, e.g. `new URL('https://example.com/image.png')`
##### Example: Binary image (Buffer)
```ts highlight="8-11"
const result = await generateText({
model,
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'Describe the image in detail.' },
{
type: 'image',
image: fs.readFileSync('./data/comic-cat.png'),
},
],
},
],
});
```
##### Example: Base-64 encoded image (string)
```ts highlight="8-11"
const result = await generateText({
model: yourModel,
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'Describe the image in detail.' },
{
type: 'image',
image: fs.readFileSync('./data/comic-cat.png').toString('base64'),
},
],
},
],
});
```
##### Example: Image URL (string)
```ts highlight="8-12"
const result = await generateText({
model: yourModel,
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'Describe the image in detail.' },
{
type: 'image',
image:
'https://github.com/vercel/ai/blob/main/examples/ai-core/data/comic-cat.png?raw=true',
},
],
},
],
});
```
#### File Parts
Only a few providers and models currently support file parts: [Google
Generative AI](/providers/ai-sdk-providers/google-generative-ai), [Google
Vertex AI](/providers/ai-sdk-providers/google-vertex),
[OpenAI](/providers/ai-sdk-providers/openai) (for `wav` and `mp3` audio with
`gpt-4o-audio-preview`), [Anthropic](/providers/ai-sdk-providers/anthropic)
(for `pdf`).
User messages can include file parts. A file can be one of the following:
- base64-encoded file:
- `string` with base-64 encoded content
- data URL `string`, e.g. `data:image/png;base64,...`
- binary data:
- `ArrayBuffer`
- `Uint8Array`
- `Buffer`
- URL:
- http(s) URL `string`, e.g. `https://example.com/some.pdf`
- `URL` object, e.g. `new URL('https://example.com/some.pdf')`
You need to specify the MIME type of the file you are sending.
##### Example: PDF file from Buffer
```ts highlight="12-14"
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';
const result = await generateText({
model: google('gemini-1.5-flash'),
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'What is the file about?' },
{
type: 'file',
mimeType: 'application/pdf',
data: fs.readFileSync('./data/example.pdf'),
},
],
},
],
});
```
##### Example: mp3 audio file from Buffer
```ts highlight="12-14"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
const result = await generateText({
model: openai('gpt-4o-audio-preview'),
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'What is the audio saying?' },
{
type: 'file',
mimeType: 'audio/mpeg',
data: fs.readFileSync('./data/galileo.mp3'),
},
],
},
],
});
```
### Assistant Messages
Assistant messages are messages that have a role of `assistant`.
They are typically previous responses from the assistant and can contain text and tool call parts.
#### Example: Assistant message with text
```ts highlight="5"
const result = await generateText({
model: yourModel,
messages: [
{ role: 'user', content: 'Hi!' },
{ role: 'assistant', content: 'Hello, how can I help?' },
],
});
```
#### Example: Assistant message with tool call
```ts highlight="5-10"
const result = await generateText({
model: yourModel,
messages: [
{ role: 'user', content: 'How many calories are in this block of cheese?' },
{
type: 'tool-call',
toolCallId: '12345',
toolName: 'get-nutrition-data',
args: { cheese: 'Roquefort' },
},
],
});
```
### Tool messages
[Tools](/docs/foundations/tools) (also known as function calling) are programs
that you can provide an LLM to extend its built-in functionality. This can be
anything from calling an external API to calling functions within your UI.
Learn more about Tools in [the next section](/docs/foundations/tools).
For models that support [tool](/docs/foundations/tools) calls, assistant messages can contain tool call parts, and tool messages can contain tool result parts.
A single assistant message can call multiple tools, and a single tool message can contain multiple tool results.
```ts highlight="14-42"
const result = await generateText({
model: yourModel,
messages: [
{
role: 'user',
content: [
{
type: 'text',
text: 'How many calories are in this block of cheese?',
},
{ type: 'image', image: fs.readFileSync('./data/roquefort.jpg') },
],
},
{
role: 'assistant',
content: [
{
type: 'tool-call',
toolCallId: '12345',
toolName: 'get-nutrition-data',
args: { cheese: 'Roquefort' },
},
// there could be more tool calls here (parallel calling)
],
},
{
role: 'tool',
content: [
{
type: 'tool-result',
toolCallId: '12345', // needs to match the tool call id
toolName: 'get-nutrition-data',
result: {
name: 'Cheese, roquefort',
calories: 369,
fat: 31,
protein: 22,
},
},
// there could be more tool results here (parallel calling)
],
},
],
});
```
#### Multi-modal Tool Results
Multi-part tool results are experimental and only supported by Anthropic.
Tool results can be multi-part and multi-modal, e.g. a text and an image.
You can use the `experimental_content` property on tool parts to specify multi-part tool results.
```ts highlight="20-32"
const result = await generateText({
model: yourModel,
messages: [
// ...
{
role: 'tool',
content: [
{
type: 'tool-result',
toolCallId: '12345', // needs to match the tool call id
toolName: 'get-nutrition-data',
// for models that do not support multi-part tool results,
// you can include a regular result part:
result: {
name: 'Cheese, roquefort',
calories: 369,
fat: 31,
protein: 22,
},
// for models that support multi-part tool results,
// you can include a multi-part content part:
content: [
{
type: 'text',
text: 'Here is an image of the nutrition data for the cheese:',
},
{
type: 'image',
data: fs.readFileSync('./data/roquefort-nutrition-data.png'),
mimeType: 'image/png',
},
],
},
],
},
],
});
```
### System Messages
System messages are messages that are sent to the model before the user messages to guide the assistant's behavior.
You can alternatively use the `system` property.
```ts highlight="4"
const result = await generateText({
model: yourModel,
messages: [
{ role: 'system', content: 'You help planning travel itineraries.' },
{
role: 'user',
content:
'I am planning a trip to Berlin for 3 days. Please suggest the best tourist activities for me to do.',
},
],
});
```
---
title: Tools
description: Learn about tools with the AI SDK.
---
# Tools
While [large language models (LLMs)](/docs/foundations/overview#large-language-models) have incredible generation capabilities,
they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather).
Tools are actions that an LLM can invoke.
The results of these actions can be reported back to the LLM to be considered in the next response.
For example, when you ask an LLM for the "weather in London", and there is a weather tool available, it could call a tool
with London as the argument. The tool would then fetch the weather data and return it to the LLM. The LLM can then use this
information in its response.
## What is a tool?
A tool is an object that can be called by the model to perform a specific task.
You can use tools with [`generateText`](/docs/reference/ai-sdk-core/generate-text)
and [`streamText`](/docs/reference/ai-sdk-core/stream-text) by passing one or more tools to the `tools` parameter.
A tool consists of three properties:
- **`description`**: An optional description of the tool that can influence when the tool is picked.
- **`parameters`**: A [Zod schema](/docs/foundations/tools#schema-specification-and-validation-with-zod) or a [JSON schema](/docs/reference/ai-sdk-core/json-schema) that defines the parameters. The schema is consumed by the LLM, and also used to validate the LLM tool calls.
- **`execute`**: An optional async function that is called with the arguments from the tool call.
`streamUI` uses UI generator tools with a `generate` function that can return
React components.
If the LLM decides to use a tool, it will generate a tool call.
Tools with an `execute` function are run automatically when these calls are generated.
The results of the tool calls are returned using tool result objects.
You can automatically pass tool results back to the LLM
using [multi-step calls](/docs/ai-sdk-core/tools-and-tool-calling#multi-step-calls) with `streamText` and `generateText`.
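To make these pieces concrete, here is a sketch of a tool definition (the weather tool, its parameter, and its return value are hypothetical):

```ts
import { tool } from 'ai';
import { z } from 'zod';

export const weatherTool = tool({
  // The description influences when the model picks this tool.
  description: 'Get the current weather for a city',
  // The parameters schema is shown to the LLM and used to validate tool calls.
  parameters: z.object({
    city: z.string().describe('The city to look up'),
  }),
  // Tools with an execute function are run automatically when called.
  execute: async ({ city }) => {
    // A real tool would call a weather API here.
    return { city, temperature: 21 };
  },
});
```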
## Schemas
Schemas are used to define the parameters for tools and to validate the [tool calls](/docs/ai-sdk-core/tools-and-tool-calling).
The AI SDK supports both raw JSON schemas (using the `jsonSchema` function) and [Zod](https://zod.dev/) schemas.
[Zod](https://zod.dev/) is the most popular JavaScript schema validation library.
You can install Zod with `npm install zod` (or the equivalent command for your package manager).
You can then specify a Zod schema, for example:
```ts
import { z } from 'zod';
const recipeSchema = z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(
z.object({
name: z.string(),
amount: z.string(),
}),
),
steps: z.array(z.string()),
}),
});
```
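For comparison, a sketch of the same schema expressed as raw JSON Schema with the `jsonSchema` helper might look like this:

```ts
import { jsonSchema } from 'ai';

// The same recipe schema, written as raw JSON Schema instead of Zod.
const recipeSchema = jsonSchema({
  type: 'object',
  properties: {
    recipe: {
      type: 'object',
      properties: {
        name: { type: 'string' },
        ingredients: {
          type: 'array',
          items: {
            type: 'object',
            properties: {
              name: { type: 'string' },
              amount: { type: 'string' },
            },
            required: ['name', 'amount'],
          },
        },
        steps: { type: 'array', items: { type: 'string' } },
      },
      required: ['name', 'ingredients', 'steps'],
    },
  },
  required: ['recipe'],
});
```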
You can also use schemas for structured output generation with
[`generateObject`](/docs/reference/ai-sdk-core/generate-object) and
[`streamObject`](/docs/reference/ai-sdk-core/stream-object).
## Toolkits
When you work with tools, you typically need a mix of application-specific tools and general-purpose tools.
There are several providers that offer pre-built tools as **toolkits** that you can use out of the box:
- **[agentic](https://github.com/transitive-bullshit/agentic)** - A collection of 20+ tools. Most tools connect to access external APIs such as [Exa](https://exa.ai/) or [E2B](https://e2b.dev/).
- **[browserbase](https://github.com/browserbase/js-sdk?tab=readme-ov-file#vercel-ai-sdk-integration)** - Browser tool that runs a headless browser
- **[Stripe agent tools](https://docs.stripe.com/agents)** - Tools for interacting with Stripe.
- **[Toolhouse](https://docs.toolhouse.ai/toolhouse/using-vercel-ai)** - AI function-calling in 3 lines of code for over 25 different actions.
Do you have open source tools or tool libraries that are compatible with the
AI SDK? Please [file a pull request](https://github.com/vercel/ai/pulls) to
add them to this list.
## Learn more
The AI SDK Core [Tool Calling](/docs/ai-sdk-core/tools-and-tool-calling)
and [Agents](/docs/ai-sdk-core/agents) documentation has more information about tools and tool calling.
---
title: Streaming
description: Why use streaming for AI applications?
---
# Streaming
Streaming conversational text UIs (like ChatGPT) have gained massive popularity over the past few months. This section explores the benefits and drawbacks of streaming and blocking interfaces.
[Large language models (LLMs)](/docs/foundations/overview#large-language-models) are extremely powerful. However, when generating long outputs, they can be very slow compared to the latency you're likely used to. If you try to build a traditional blocking UI, your users might easily find themselves staring at loading spinners for 5, 10, even up to 40s waiting for the entire LLM response to be generated. This can lead to a poor user experience, especially in conversational applications like chatbots. Streaming UIs can help mitigate this issue by **displaying parts of the response as they become available**.
## Real-world Examples
Here are 2 examples that illustrate how streaming UIs can improve user experiences in a real-world setting – the first uses a blocking UI, while the second uses a streaming UI.
### Blocking UI
### Streaming UI
As you can see, the streaming UI is able to start displaying the response much faster than the blocking UI. This is because the blocking UI has to wait for the entire response to be generated before it can display anything, while the streaming UI can display parts of the response as they become available.
While streaming interfaces can greatly enhance user experiences, especially with larger language models, they aren't always necessary or beneficial. If you can achieve your desired functionality using a smaller, faster model without resorting to streaming, this route can often lead to simpler and more manageable development processes.
However, regardless of the speed of your model, the AI SDK is designed to make implementing streaming UIs as simple as possible. In the example below, we stream text generation from OpenAI's `gpt-4-turbo` in under 10 lines of code using the SDK's [`streamText`](/docs/reference/ai-sdk-core/stream-text) function:
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
const { textStream } = streamText({
model: openai('gpt-4-turbo'),
prompt: 'Write a poem about embedding models.',
});
for await (const textPart of textStream) {
console.log(textPart);
}
```
For an introduction to streaming UIs and the AI SDK, check out our [Getting Started guides](/docs/getting-started).
---
title: Foundations
description: A section that covers foundational knowledge around LLMs and concepts crucial to the AI SDK
---
# Foundations
---
title: Navigating the Library
description: Learn how to navigate the AI SDK.
---
# Navigating the Library
The AI SDK is a powerful toolkit for building AI applications. This page will help you pick the right tools for your requirements.
Let's start with a quick overview of the AI SDK, which consists of three parts:
- **[AI SDK Core](/docs/ai-sdk-core/overview):** A unified, provider-agnostic API for generating text, structured objects, and tool calls with LLMs.
- **[AI SDK UI](/docs/ai-sdk-ui/overview):** A set of framework-agnostic hooks for building chat and generative user interfaces.
- **[AI SDK RSC](/docs/ai-sdk-rsc/overview):** Stream generative user interfaces with React Server Components (RSC). AI SDK RSC is currently experimental; we recommend using [AI SDK UI](/docs/ai-sdk-ui/overview) for production.
## Choosing the Right Tool for Your Environment
When deciding which part of the AI SDK to use, your first consideration should be the environment and existing stack you are working with. Different components of the SDK are tailored to specific frameworks and environments.
| Library | Purpose | Environment Compatibility |
| ----------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------- |
| [AI SDK Core](/docs/ai-sdk-core/overview) | Call any LLM with unified API (e.g. [generateText](/docs/reference/ai-sdk-core/generate-text) and [generateObject](/docs/reference/ai-sdk-core/generate-object)) | Any JS environment (e.g. Node.js, Deno, Browser) |
| [AI SDK UI](/docs/ai-sdk-ui/overview) | Build streaming chat and generative UIs (e.g. [useChat](/docs/reference/ai-sdk-ui/use-chat)) | React & Next.js, Vue & Nuxt, Svelte & SvelteKit, Solid.js & SolidStart |
| [AI SDK RSC](/docs/ai-sdk-rsc/overview) | Stream generative UIs from Server to Client (e.g. [streamUI](/docs/reference/ai-sdk-rsc/stream-ui)). Development is currently experimental and we recommend using [AI SDK UI](/docs/ai-sdk-ui/overview). | Any framework that supports React Server Components (e.g. Next.js) |
## Environment Compatibility
These tools have been designed to work seamlessly with each other and it's likely that you will be using them together. Let's look at how you could decide which libraries to use based on your application environment, existing stack, and requirements.
The following table outlines AI SDK compatibility based on environment:
| Environment | [AI SDK Core](/docs/ai-sdk-core/overview) | [AI SDK UI](/docs/ai-sdk-ui/overview) | [AI SDK RSC](/docs/ai-sdk-rsc/overview) |
| --------------------- | ----------------------------------------- | ------------------------------------- | --------------------------------------- |
| None / Node.js / Deno | | | |
| Vue / Nuxt | | | |
| Svelte / SvelteKit | | | |
| Solid.js / SolidStart | | | |
| Next.js Pages Router | | | |
| Next.js App Router | | | |
## When to use AI SDK UI
AI SDK UI provides a set of framework-agnostic hooks for quickly building **production-ready AI-native applications**. It offers:
- Full support for streaming chat and client-side generative UI
- Utilities for handling common AI interaction patterns (i.e. chat, completion, assistant)
- Production-tested reliability and performance
- Compatibility across popular frameworks
## AI SDK UI Framework Compatibility
AI SDK UI supports the following frameworks: [React](https://react.dev/), [Svelte](https://svelte.dev/), [Vue.js](https://vuejs.org/), and [SolidJS](https://www.solidjs.com/). Here is a comparison of the supported functions across these frameworks:
| Function | React | Svelte | Vue.js | SolidJS |
| ---------------------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| [useChat](/docs/reference/ai-sdk-ui/use-chat) | | | | |
| [useChat](/docs/reference/ai-sdk-ui/use-chat) tool calling | | | | |
| [useChat](/docs/reference/ai-sdk-ui/use-chat) attachments | | | | |
| [useCompletion](/docs/reference/ai-sdk-ui/use-completion) | | | | |
| [useObject](/docs/reference/ai-sdk-ui/use-object) | | | | |
| [useAssistant](/docs/reference/ai-sdk-ui/use-assistant) | | | | |
[Contributions](https://github.com/vercel/ai/blob/main/CONTRIBUTING.md) are
welcome to implement missing features for non-React frameworks.
## When to use AI SDK RSC
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
[React Server Components](https://nextjs.org/docs/app/building-your-application/rendering/server-components)
(RSCs) provide a new approach to building React applications that allow components
to render on the server, fetch data directly, and stream the results to the client,
reducing bundle size and improving performance. They also introduce a new way to
call server-side functions from anywhere in your application called [Server Actions](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations).
AI SDK RSC provides a number of utilities that allow you to stream values and UI directly from the server to the client. However, **it's important to be aware of current limitations**:
- **Cancellation**: currently, it is not possible to abort a stream using Server Actions. This will be improved in future releases of React and Next.js.
- **Increased Data Transfer**: using [`createStreamableUI`](/docs/reference/ai-sdk-rsc/create-streamable-ui) can lead to quadratic data transfer (quadratic in the length of the generated text). You can avoid this by using [`createStreamableValue`](/docs/reference/ai-sdk-rsc/create-streamable-value) instead and rendering the component client-side (see the sketch after this list).
- **Re-mounting Issue During Streaming**: when using `createStreamableUI`, components re-mount on `.done()`, causing [flickering](https://github.com/vercel/ai/issues/2232).
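As a sketch of that mitigation (the model and prompt are illustrative), a Server Action can stream plain text with `createStreamableValue` and leave rendering to the client:

```tsx
'use server';

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createStreamableValue } from 'ai/rsc';

export async function generate(prompt: string) {
  const stream = createStreamableValue('');

  (async () => {
    const { textStream } = streamText({ model: openai('gpt-4o'), prompt });
    for await (const delta of textStream) {
      stream.update(delta); // the client reads this with readStreamableValue and appends the deltas
    }
    stream.done();
  })();

  return { output: stream.value };
}
```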
Given these limitations, **we recommend using [AI SDK UI](/docs/ai-sdk-ui/overview) for production applications**.
---
title: Next.js App Router
description: Welcome to the AI SDK quickstart guide for Next.js App Router!
---
# Next.js App Router Quickstart
In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
Check out [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming) if you haven't heard of them.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.
If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.
## Create Your Application
Start by creating a new Next.js application. This command will create a new directory named `my-ai-app` and set up a basic Next.js application inside it.
Be sure to select yes when prompted to use the App Router. If you are
looking for the Next.js Pages Router quickstart guide, you can find it
[here](/docs/getting-started/nextjs-pages-router).
Navigate to the newly created directory:
### Install dependencies
Install `ai` and `@ai-sdk/openai`, the AI SDK package and the AI SDK's [OpenAI provider](/providers/ai-sdk-providers/openai), respectively.
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change model and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
Make sure you are using `ai` version 3.1 or higher.
### Configure OpenAI API key
Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
Edit the `.env.local` file:
```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```
Replace `xxxxxxxxx` with your actual OpenAI API key.
The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
environment variable.
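If you prefer to configure the key explicitly rather than rely on the environment variable, a minimal sketch with `createOpenAI` looks like this:

```ts
import { createOpenAI } from '@ai-sdk/openai';

// Explicit configuration; the default `openai` export reads
// OPENAI_API_KEY from the environment automatically.
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? '',
});
```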
## Create a Route Handler
Create a route handler, `app/api/chat/route.ts` and add the following code:
```tsx filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse();
}
```
Let's take a look at what is happening in this code:
1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.
This Route Handler creates a POST request endpoint at `/api/chat`.
## Wire up the UI
Now that you have a Route Handler that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`app/page.tsx`) with the following code to show a list of chat messages and provide a user message input:
```tsx filename="app/page.tsx"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
);
}
```
Make sure you add the `"use client"` directive to the top of your file. This
allows you to add interactivity with JavaScript.
This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides the following utility functions and state for handling user input and form submission:
- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleInputChange` and `handleSubmit` - functions to handle user interactions (typing into the input field and submitting the form, respectively).
## Running Your Application
With that, you have built everything you need for your chatbot! To start your application, use the command:
Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and watching the AI chatbot respond in real time! The AI SDK makes it fast and easy to build AI chat interfaces with Next.js.
## Enhance Your Chatbot with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
### Update Your Route Handler
Modify your `app/api/chat/route.ts` file to include the new weather tool:
```tsx filename="app/api/chat/route.ts" highlight="2,13-27"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
},
});
return result.toDataStreamResponse();
}
```
In this updated code:
1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
- Has a description that helps the model understand when to use it.
- Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
- Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.
Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be run automatically, and you can access the results via the `toolInvocations` property on the message object.
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.
### Update the UI
To display the tool invocations in your UI, update your `app/page.tsx` file:
```tsx filename="app/page.tsx" highlight="12-16"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
);
}
```
With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before.
Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.
## Enabling Multi-Step Tool Calls
You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.
### Update Your Client-Side Code
Modify your `app/page.tsx` file to include the `maxSteps` option:
```tsx filename="app/page.tsx" highlight="7"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
maxSteps: 5,
});
// ... rest of your component code
}
```
Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.
By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.
### Update Your Route Handler
Update your `app/api/chat/route.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:
```tsx filename="app/api/chat/route.ts" highlight="27-40"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
convertFahrenheitToCelsius: tool({
description: 'Convert a temperature in fahrenheit to celsius',
parameters: z.object({
temperature: z
.number()
.describe('The temperature in fahrenheit to convert'),
}),
execute: async ({ temperature }) => {
const celsius = Math.round((temperature - 32) * (5 / 9));
return {
celsius,
};
},
}),
},
});
return result.toDataStreamResponse();
}
```
Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
## Where to Next?
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).
---
title: Next.js Pages Router
description: Welcome to the AI SDK quickstart guide for Next.js Pages Router!
---
# Next.js Pages Router Quickstart
The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.
In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.
If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.
## Setup Your Application
Start by creating a new Next.js application. This command will create a new directory named `my-ai-app` and set up a basic Next.js application inside it.
Be sure to select no when prompted to use the App Router. If you are looking
for the Next.js App Router quickstart guide, you can find it
[here](/docs/getting-started/nextjs-app-router).
Navigate to the newly created directory:
### Install dependencies
Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider.
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change model and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
Make sure you are using `ai` version 3.1 or higher.
### Configure OpenAI API Key
Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
Edit the `.env.local` file:
```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```
Replace `xxxxxxxxx` with your actual OpenAI API key.
The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
environment variable.
## Create a Route Handler
As long as you are on Next.js 13+, you can use Route Handlers (using the App
Router) alongside the Pages Router. This is recommended to enable you to use
the Web APIs interface/signature and to better support streaming.
Create a Route Handler (`app/api/chat/route.ts`) and add the following code:
```tsx filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse();
}
```
Let's take a look at what is happening in this code:
1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.
This Route Handler creates a POST request endpoint at `/api/chat`.
## Wire up the UI
Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`pages/index.tsx`) with the following code to show a list of chat messages and provide a user message input:
```tsx filename="pages/index.tsx"
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
);
}
```
This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides the following utility functions and state for handling user input and form submission:
- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleInputChange` and `handleSubmit` - functions to handle user interactions (typing into the input field and submitting the form, respectively).
- `isLoading` - boolean that indicates whether the API request is in progress.
## Running Your Application
With that, you have built everything you need for your chatbot! To start your application, use the command:
Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and watching the AI chatbot respond in real time! The AI SDK makes it fast and easy to build AI chat interfaces with Next.js.
## Enhance Your Chatbot with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
### Update Your Route Handler
Modify your `app/api/chat/route.ts` file to include the new weather tool:
```tsx filename="app/api/chat/route.ts" highlight="2,13-27"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
},
});
return result.toDataStreamResponse();
}
```
In this updated code:
1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
- Has a description that helps the model understand when to use it.
- Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
- Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.
Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be run automatically, and you can access the results via the `toolInvocations` property on the message object.
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.
### Update the UI
To display the tool invocations in your UI, update your `pages/index.tsx` file:
```tsx filename="pages/index.tsx" highlight="11-15"
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
return (
);
}
```
With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before.
Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.
## Enabling Multi-Step Tool Calls
You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.
### Update Your Client-Side Code
Modify your `pages/index.tsx` file to include the `maxSteps` option:
```tsx filename="pages/index.tsx" highlight="6"
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
maxSteps: 5,
});
// ... rest of your component code
}
```
Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.
By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.
### Update Your Route Handler
Update your `app/api/chat/route.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:
```tsx filename="app/api/chat/route.ts" highlight="27-40"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
convertFahrenheitToCelsius: tool({
description: 'Convert a temperature in fahrenheit to celsius',
parameters: z.object({
temperature: z
.number()
.describe('The temperature in fahrenheit to convert'),
}),
execute: async ({ temperature }) => {
const celsius = Math.round((temperature - 32) * (5 / 9));
return {
celsius,
};
},
}),
},
});
return result.toDataStreamResponse();
}
```
Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
## Where to Next?
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).
---
title: Svelte
description: Welcome to the AI SDK quickstart guide for Svelte!
---
# Svelte Quickstart
The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.
In this quickstart tutorial, you'll build a simple AI-chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.
If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.
## Setup Your Application
Start by creating a new SvelteKit application. This command will create a new directory named `my-ai-app` and set up a basic SvelteKit application inside it.
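One common way to scaffold it with pnpm (the exact command may vary with your SvelteKit tooling version):
```bash
pnpm create svelte@latest my-ai-app
```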
Navigate to the newly created directory:
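```bash
cd my-ai-app
```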
### Install Dependencies
Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider.
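For example, with pnpm (this sketch also adds `@ai-sdk/svelte`, which provides the `useChat` hook used below, and `zod`, which is used for tool schemas later in this guide; adjust as needed):
```bash
pnpm add ai @ai-sdk/openai @ai-sdk/svelte zod
```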
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change model and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
Make sure you are using `ai` version 3.1 or higher.
### Configure OpenAI API Key
Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
Edit the `.env.local` file:
```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```
Replace `xxxxxxxxx` with your actual OpenAI API key.
The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
environment variable.
## Create an API route
Create a SvelteKit Endpoint, `src/routes/api/chat/+server.ts` and add the following code:
```tsx filename="src/routes/api/chat/+server.ts"
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';
import type { RequestHandler } from './$types';
import { env } from '$env/dynamic/private';
const openai = createOpenAI({
apiKey: env.OPENAI_API_KEY ?? '',
});
export const POST = (async ({ request }) => {
const { messages } = await request.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse();
}) satisfies RequestHandler;
```
You may see an error with the `./$types` import. This will be resolved as soon
as you run the dev server.
Let's take a look at what is happening in this code:
1. Create an OpenAI provider instance with the `createOpenAI` function from the `@ai-sdk/openai` package.
2. Define a `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation with you and the chatbot and will provide the chatbot with the necessary context to make the next generation.
3. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (defined in step 1) and `messages` (defined in step 2). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
4. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
5. Return the result to the client to stream the response.
## Wire up the UI
Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`src/routes/+page.svelte`) with the following code to show a list of chat messages and provide a user message input:
```svelte filename="src/routes/+page.svelte"
<script>
  import { useChat } from '@ai-sdk/svelte';

  const { input, handleSubmit, messages } = useChat();
</script>

{#each $messages as message}
  <div>{message.role}: {message.content}</div>
{/each}

<form on:submit={handleSubmit}>
  <input bind:value={$input} placeholder="Say something..." />
</form>
```
This page utilizes the `useChat` hook, which will, by default, use the `POST` route handler you created earlier. The hook provides functions and state for handling user input and form submission. The `useChat` hook provides multiple utility functions and state variables:
- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleSubmit` - function to handle form submission.
## Running Your Application
With that, you have built everything you need for your chatbot! To start your application, use the command:
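```bash
# assumes pnpm (see prerequisites); use your package manager's equivalent
pnpm run dev
```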
Head to your browser and open http://localhost:5173. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Svelte.
## Enhance Your Chatbot with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
### Update Your API Route
Modify your `src/routes/api/chat/+server.ts` file to include the new weather tool:
```tsx filename="src/routes/api/chat/+server.ts" highlight="2,4,18-32"
import { createOpenAI } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import type { RequestHandler } from './$types';
import { z } from 'zod';
import { env } from '$env/dynamic/private';
const openai = createOpenAI({
apiKey: env.OPENAI_API_KEY ?? '',
});
export const POST = (async ({ request }) => {
const { messages } = await request.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
},
});
return result.toDataStreamResponse();
}) satisfies RequestHandler;
```
In this updated code:
1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
- Has a description that helps the model understand when to use it.
- Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
- Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.
Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and you can access the results via the `toolInvocations` property on the message object.
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.
### Update the UI
To display the tool invocations in your UI, update your `src/routes/+page.svelte` file:
```svelte filename="src/routes/+page.svelte"
<script>
  import { useChat } from '@ai-sdk/svelte';

  const { input, handleSubmit, messages } = useChat();
</script>

{#each $messages as message}
  <div>
    {message.role}:
    {#if message.toolInvocations}
      <pre>{JSON.stringify(message.toolInvocations, null, 2)}</pre>
    {:else}
      {message.content}
    {/if}
  </div>
{/each}

<form on:submit={handleSubmit}>
  <input bind:value={$input} placeholder="Say something..." />
</form>
```
With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before.
Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.
## Enabling Multi-Step Tool Calls
You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.
### Update Your UI
Modify your `src/routes/+page.svelte` file to include the `maxSteps` option:
```svelte filename="src/routes/+page.svelte" highlight="4"
<script>
  import { useChat } from '@ai-sdk/svelte';
  const { input, handleSubmit, messages } = useChat({
    maxSteps: 5,
  });
</script>

<!-- ... rest of your markup stays the same -->
```
Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.
By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.
### Update Your API Route
Update your `src/routes/api/chat/+server.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:
```tsx filename="src/routes/api/chat/+server.ts" highlight="32-45"
import { createOpenAI } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import type { RequestHandler } from './$types';
import { z } from 'zod';
import { env } from '$env/dynamic/private';
const openai = createOpenAI({
apiKey: env.OPENAI_API_KEY ?? '',
});
export const POST = (async ({ request }) => {
const { messages } = await request.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
convertFahrenheitToCelsius: tool({
description: 'Convert a temperature in fahrenheit to celsius',
parameters: z.object({
temperature: z
.number()
.describe('The temperature in fahrenheit to convert'),
}),
execute: async ({ temperature }) => {
const celsius = Math.round((temperature - 32) * (5 / 9));
return {
celsius,
};
},
}),
},
});
return result.toDataStreamResponse();
}) satisfies RequestHandler;
```
Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
## Where to Next?
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).
---
title: Nuxt
description: Welcome to the AI SDK quickstart guide for Nuxt!
---
# Nuxt Quickstart
The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.
In this quickstart tutorial, you'll build a simple AI-chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.
If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.
## Setup Your Application
Start by creating a new Nuxt application. This command will create a new directory named `my-ai-app` and set up a basic Nuxt application inside it.
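One common way to scaffold it (the exact command may vary with your Nuxt tooling version):
```bash
pnpm dlx nuxi@latest init my-ai-app
```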
Navigate to the newly created directory:
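```bash
cd my-ai-app
```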
### Install dependencies
Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider.
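For example, with pnpm (this sketch also adds `@ai-sdk/vue`, which provides the `useChat` hook used below, and `zod`, which is used for tool schemas later in this guide; adjust as needed):
```bash
pnpm add ai @ai-sdk/openai @ai-sdk/vue zod
```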
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change model and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
Make sure you are using `ai` version 3.1 or higher.
### Configure OpenAI API key
Create a `.env` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
Edit the `.env` file:
```env filename=".env"
OPENAI_API_KEY=xxxxxxxxx
```
Replace `xxxxxxxxx` with your actual OpenAI API key and configure the environment variable in `nuxt.config.ts`:
```ts filename="nuxt.config.ts"
export default defineNuxtConfig({
// rest of your nuxt config
runtimeConfig: {
openaiApiKey: process.env.OPENAI_API_KEY,
},
});
```
The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
environment variable.
## Create an API route
Create an API route, `server/api/chat.ts` and add the following code:
```typescript filename="server/api/chat.ts"
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
export default defineLazyEventHandler(async () => {
const apiKey = useRuntimeConfig().openaiApiKey;
if (!apiKey) throw new Error('Missing OpenAI API key');
const openai = createOpenAI({
apiKey: apiKey,
});
return defineEventHandler(async (event: any) => {
const { messages } = await readBody(event);
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse();
});
});
```
Let's take a look at what is happening in this code:
1. Create an OpenAI provider instance with the `createOpenAI` function from the `@ai-sdk/openai` package.
2. Define an Event Handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation with you and the chatbot and will provide the chatbot with the necessary context to make the next generation.
3. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (defined in step 1) and `messages` (defined in step 2). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
4. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
5. Return the result to the client to stream the response.
## Wire up the UI
Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui/overview) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`pages/index.vue`) with the following code to show a list of chat messages and provide a user message input:
```typescript filename="pages/index.vue"
<script setup lang="ts">
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat();
</script>

<template>
  <div>
    <div v-for="m in messages" :key="m.id">
      {{ m.role === 'user' ? 'User: ' : 'AI: ' }}
      {{ m.content }}
    </div>

    <form @submit="handleSubmit">
      <input v-model="input" placeholder="Say something..." />
    </form>
  </div>
</template>
```
If your project has `app.vue` instead of `pages/index.vue`, delete the
`app.vue` file and create a new `pages/index.vue` file with the code above.
This page utilizes the `useChat` hook, which will, by default, use the API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission. The `useChat` hook provides multiple utility functions and state variables:
- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleSubmit` - function to handle form submission.
## Running Your Application
With that, you have built everything you need for your chatbot! To start your application, use the command:
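```bash
# assumes pnpm (see prerequisites); use your package manager's equivalent
pnpm run dev
```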
Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Nuxt.
## Enhance Your Chatbot with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
### Update Your API Route
Modify your `server/api/chat.ts` file to include the new weather tool:
```typescript filename="server/api/chat.ts" highlight="1,18-34"
import { streamText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';
export default defineLazyEventHandler(async () => {
const apiKey = useRuntimeConfig().openaiApiKey;
if (!apiKey) throw new Error('Missing OpenAI API key');
const openai = createOpenAI({
apiKey: apiKey,
});
return defineEventHandler(async (event: any) => {
const { messages } = await readBody(event);
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z
.string()
.describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
},
});
return result.toDataStreamResponse();
});
});
```
In this updated code:
1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
- Has a description that helps the model understand when to use it.
- Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
- Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.
Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and you can access the results via the `toolInvocations` property on the message object.
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.
### Update the UI
To display the tool invocations in your UI, update your `pages/index.vue` file:
```typescript filename="pages/index.vue" highlight="11-15"
<script setup lang="ts">
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat();
</script>

<template>
  <div>
    <div v-for="m in messages" :key="m.id">
      {{ m.role === 'user' ? 'User: ' : 'AI: ' }}
      <pre v-if="m.toolInvocations">{{
        JSON.stringify(m.toolInvocations, null, 2)
      }}</pre>
      <span v-else>{{ m.content }}</span>
    </div>

    <form @submit="handleSubmit">
      <input v-model="input" placeholder="Say something..." />
    </form>
  </div>
</template>
```
With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before.
Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.
## Enabling Multi-Step Tool Calls
You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.
### Update Your Client-Side Code
Modify your `pages/index.vue` file to include the `maxSteps` option:
```typescript filename="pages/index.vue" highlight="4"
<script setup lang="ts">
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat({ maxSteps: 5 });
</script>

<!-- ... rest of your component stays the same -->
```
Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.
By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.
### Update Your API Route
Update your `server/api/chat.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:
```typescript filename="server/api/chat.ts" highlight="34-47"
import { streamText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';
export default defineLazyEventHandler(async () => {
const apiKey = useRuntimeConfig().openaiApiKey;
if (!apiKey) throw new Error('Missing OpenAI API key');
const openai = createOpenAI({
apiKey: apiKey,
});
return defineEventHandler(async (event: any) => {
const { messages } = await readBody(event);
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z
.string()
.describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
convertFahrenheitToCelsius: tool({
description: 'Convert a temperature in fahrenheit to celsius',
parameters: z.object({
temperature: z
.number()
.describe('The temperature in fahrenheit to convert'),
}),
execute: async ({ temperature }) => {
const celsius = Math.round((temperature - 32) * (5 / 9));
return {
celsius,
};
},
}),
},
});
return result.toDataStreamResponse();
});
});
```
Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
## Where to Next?
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).
---
title: Node.js
description: Welcome to the AI SDK quickstart guide for Node.js!
---
# Node.js Quickstart
In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.
If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.
## Setup Your Application
Start by creating a new directory using the `mkdir` command. Change into your new directory and then run the `pnpm init` command. This will create a `package.json` in your new directory.
```bash
mkdir my-ai-app
cd my-ai-app
pnpm init
```
### Install Dependencies
Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider, along with other necessary dependencies.
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change model and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
```bash
pnpm add ai @ai-sdk/openai zod dotenv
pnpm add -D @types/node tsx typescript
```
Make sure you are using `ai` version 3.1 or higher.
The `ai` and `@ai-sdk/openai` packages contain the AI SDK and the [AI SDK OpenAI provider](/providers/ai-sdk-providers/openai), respectively. You will use `zod` to define type-safe schemas that you will pass to the large language model (LLM). You will use `dotenv` to access environment variables (your OpenAI key) within your application. There are also three development dependencies, installed with the `-D` flag, that are necessary to run your TypeScript code.
### Configure OpenAI API key
Create a `.env` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
Edit the `.env` file:
```env filename=".env"
OPENAI_API_KEY=xxxxxxxxx
```
Replace `xxxxxxxxx` with your actual OpenAI API key.
The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
environment variable.
## Create Your Application
Create an `index.ts` file in the root of your project and add the following code:
```ts filename="index.ts"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText } from 'ai';
import dotenv from 'dotenv';
import * as readline from 'node:readline/promises';
dotenv.config();
const terminal = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
const messages: CoreMessage[] = [];
async function main() {
while (true) {
const userInput = await terminal.question('You: ');
messages.push({ role: 'user', content: userInput });
const result = streamText({
model: openai('gpt-4o'),
messages,
});
let fullResponse = '';
process.stdout.write('\nAssistant: ');
for await (const delta of result.textStream) {
fullResponse += delta;
process.stdout.write(delta);
}
process.stdout.write('\n\n');
messages.push({ role: 'assistant', content: fullResponse });
}
}
main().catch(console.error);
```
Let's take a look at what is happening in this code:
1. Set up a readline interface for taking input from the terminal, enabling interactive sessions directly from the command line.
2. Initialize an array called `messages` to store the history of your conversation. This history allows the model to maintain context in ongoing dialogues.
3. In the `main` function:
- Prompt for and capture user input, storing it in `userInput`.
- Add user input to the `messages` array as a user message.
- Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider and `messages`.
- Iterate over the text stream returned by the `streamText` function (`result.textStream`) and print the contents of the stream to the terminal.
- Add the assistant's response to the `messages` array.
## Running Your Application
With that, you have built everything you need for your chatbot! To start your application, use the command:
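```bash
# runs index.ts with the tsx dev dependency installed earlier (assumes pnpm)
pnpm tsx index.ts
```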
You should see a prompt in your terminal. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Node.js.
## Enhance Your Chatbot with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
### Update Your Application
Modify your `index.ts` file to include the new weather tool:
```ts filename="index.ts" highlight="2,4,25-36"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
dotenv.config();
const terminal = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
const messages: CoreMessage[] = [];
async function main() {
while (true) {
const userInput = await terminal.question('You: ');
messages.push({ role: 'user', content: userInput });
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (in Celsius)',
parameters: z.object({
location: z
.string()
.describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
}),
}),
},
});
let fullResponse = '';
process.stdout.write('\nAssistant: ');
for await (const delta of result.textStream) {
fullResponse += delta;
process.stdout.write(delta);
}
process.stdout.write('\n\n');
messages.push({ role: 'assistant', content: fullResponse });
}
}
main().catch(console.error);
```
In this updated code:
1. You import the `tool` function from the `ai` package.
2. You define a `tools` object with a `weather` tool. This tool:
- Has a description that helps the model understand when to use it.
- Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool.
- Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function, so you could fetch real data from an external API.
Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and the results will be used by the model to generate its response.
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank "assistant" response? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolCall` and `toolResult` keys of the result object.
```typescript highlight="47-48"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
dotenv.config();
const terminal = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
const messages: CoreMessage[] = [];
async function main() {
while (true) {
const userInput = await terminal.question('You: ');
messages.push({ role: 'user', content: userInput });
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (in Celsius)',
parameters: z.object({
location: z
.string()
.describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
}),
}),
},
});
let fullResponse = '';
process.stdout.write('\nAssistant: ');
for await (const delta of result.textStream) {
fullResponse += delta;
process.stdout.write(delta);
}
process.stdout.write('\n\n');
console.log(await result.toolCalls);
console.log(await result.toolResults);
messages.push({ role: 'assistant', content: fullResponse });
}
}
main().catch(console.error);
```
Now, when you ask about the weather, you'll see the tool call and its result logged to your terminal.
## Enabling Multi-Step Tool Calls
You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using `maxSteps`. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.
### Update Your Application
Modify your `index.ts` file to include the `maxSteps` option:
```ts filename="index.ts" highlight="37-40"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
dotenv.config();
const terminal = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
const messages: CoreMessage[] = [];
async function main() {
while (true) {
const userInput = await terminal.question('You: ');
messages.push({ role: 'user', content: userInput });
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (in Celsius)',
parameters: z.object({
location: z
.string()
.describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
}),
}),
},
maxSteps: 5,
onStepFinish: step => {
console.log(JSON.stringify(step, null, 2));
},
});
let fullResponse = '';
process.stdout.write('\nAssistant: ');
for await (const delta of result.textStream) {
fullResponse += delta;
process.stdout.write(delta);
}
process.stdout.write('\n\n');
messages.push({ role: 'assistant', content: fullResponse });
}
}
main().catch(console.error);
```
In this updated code:
1. You set `maxSteps` to 5, allowing the model to use up to 5 "steps" for any given generation.
2. You add an `onStepFinish` callback to log each step of the interaction, helping you understand the model's tool usage. This means you can also delete the `toolCalls` and `toolResults` `console.log` statements from the previous example.
Now, when you ask about the weather in a location, you should see the model using the weather tool results to answer your question.
By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Celsius to Fahrenheit.
### Adding a second tool
Update your `index.ts` file to add a new tool to convert the temperature from Celsius to Fahrenheit:
```ts filename="index.ts" highlight="36-45"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';
dotenv.config();
const terminal = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
const messages: CoreMessage[] = [];
async function main() {
while (true) {
const userInput = await terminal.question('You: ');
messages.push({ role: 'user', content: userInput });
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (in Celsius)',
parameters: z.object({
location: z
.string()
.describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
}),
}),
convertCelsiusToFahrenheit: tool({
description: 'Convert a temperature from Celsius to Fahrenheit',
parameters: z.object({
celsius: z
.number()
.describe('The temperature in Celsius to convert'),
}),
execute: async ({ celsius }) => {
const fahrenheit = (celsius * 9) / 5 + 32;
return { fahrenheit: Math.round(fahrenheit * 100) / 100 };
},
}),
},
maxSteps: 5,
onStepFinish: step => {
console.log(JSON.stringify(step, null, 2));
},
});
let fullResponse = '';
process.stdout.write('\nAssistant: ');
for await (const delta of result.textStream) {
fullResponse += delta;
process.stdout.write(delta);
}
process.stdout.write('\n\n');
messages.push({ role: 'assistant', content: fullResponse });
}
}
main().catch(console.error);
```
Now, when you ask "What's the weather in New York in Celsius?", you should see a more complete interaction:
1. The model will call the weather tool for New York.
2. You'll see the tool result logged.
3. It will then call the temperature conversion tool to convert the temperature from Celsius to Fahrenheit.
4. The model will then use that information to provide a natural language response about the weather in New York.
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This example shows how tools can expand the model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
## Where to Next?
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).
---
title: Expo
description: Welcome to the AI SDK quickstart guide for Expo!
---
# Expo Quickstart
In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface using [Expo](https://expo.dev/). Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.
Check out [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming) if you haven't heard of them.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.
If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.
## Create Your Application
Start by creating a new Expo application. This command will create a new directory named `my-ai-app` and set up a basic Expo application inside it.
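One common way to scaffold it with pnpm (the exact command may vary with your Expo tooling version):
```bash
pnpm create expo-app@latest my-ai-app
```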
Navigate to the newly created directory:
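```bash
cd my-ai-app
```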
This guide requires Expo 52 or higher.
### Install dependencies
Install `ai`, `@ai-sdk/react`, and `@ai-sdk/openai`: the AI SDK, its React UI bindings, and the AI SDK's [OpenAI provider](/providers/ai-sdk-providers/openai), respectively.
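For example, with pnpm (this sketch also adds `zod`, which is used for tool schemas later in this guide; adjust as needed):
```bash
pnpm add ai @ai-sdk/react @ai-sdk/openai zod
```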
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change model and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
Make sure you are using `ai` version 3.1 or higher.
### Configure OpenAI API key
Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
Edit the `.env.local` file:
```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```
Replace `xxxxxxxxx` with your actual OpenAI API key.
The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
environment variable.
## Create an API Route
Create a route handler, `app/api/chat+api.ts` and add the following code:
```tsx filename="app/api/chat+api.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse();
}
```
Let's take a look at what is happening in this code:
1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.
This API route creates a POST request endpoint at `/api/chat`.
If you are experiencing issues with choppy/delayed streams on iOS, you can add
the `Content-Type`: `application/octet-stream` header to the response. For
more information, check out [this GitHub
issue](https://github.com/vercel/ai/issues/3946).
## Wire up the UI
Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`app/(tabs)/index.tsx`) with the following code to show a list of chat messages and provide a user message input:
```tsx filename="app/(tabs)/index.tsx"
import { generateAPIUrl } from '@/utils';
import { useChat } from '@ai-sdk/react';
import { fetch as expoFetch } from 'expo/fetch';
import { View, TextInput, ScrollView, Text, SafeAreaView } from 'react-native';
export default function App() {
const { messages, error, handleInputChange, input, handleSubmit } = useChat({
fetch: expoFetch as unknown as typeof globalThis.fetch,
api: generateAPIUrl('/api/chat'),
onError: error => console.error(error, 'ERROR'),
});
  if (error) return <Text>{error.message}</Text>;

  return (
    <SafeAreaView style={{ flex: 1 }}>
      <View style={{ flex: 1, paddingHorizontal: 8 }}>
        <ScrollView style={{ flex: 1 }}>
          {messages.map(m => (
            <View key={m.id} style={{ marginVertical: 8 }}>
              <Text style={{ fontWeight: '700' }}>{m.role}</Text>
              <Text>{m.content}</Text>
            </View>
          ))}
        </ScrollView>

        <TextInput
          style={{ backgroundColor: 'white', padding: 8 }}
          placeholder="Say something..."
          value={input}
          onChange={e =>
            handleInputChange({
              ...e,
              target: {
                ...e.target,
                value: e.nativeEvent.text,
              },
            } as unknown as React.ChangeEvent)
          }
          onSubmitEditing={e => {
            handleSubmit(e);
            e.preventDefault();
          }}
          autoFocus={true}
        />
      </View>
    </SafeAreaView>
  );
}
```
This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission. The `useChat` hook provides multiple utility functions and state variables:
- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleInputChange` and `handleSubmit` - functions to handle user interactions (typing into the input field and submitting the form, respectively).
You use the `expo/fetch` function instead of the native Node.js `fetch` to enable
streaming of chat responses. This requires Expo 52 or higher.
### Create the API URL Generator
Because you're using expo/fetch for streaming responses instead of the native fetch function, you'll need an API URL generator to ensure you are using the correct base url and format depending on the client environment (e.g. web or mobile). Create a new file called `utils.ts` in the root of your project and add the following code:
```ts filename="utils.ts"
import Constants from 'expo-constants';
export const generateAPIUrl = (relativePath: string) => {
const origin = Constants.experienceUrl.replace('exp://', 'http://');
const path = relativePath.startsWith('/') ? relativePath : `/${relativePath}`;
if (process.env.NODE_ENV === 'development') {
return origin.concat(path);
}
if (!process.env.EXPO_PUBLIC_API_BASE_URL) {
throw new Error(
'EXPO_PUBLIC_API_BASE_URL environment variable is not defined',
);
}
return process.env.EXPO_PUBLIC_API_BASE_URL.concat(path);
};
```
This utility function handles URL generation for both development and production environments, ensuring your API calls work correctly across different devices and configurations.
Before deploying to production, you must set the `EXPO_PUBLIC_API_BASE_URL`
environment variable in your production environment. This variable should
point to the base URL of your API server.
## Running Your Application
With that, you have built everything you need for your chatbot! To start your application, use the command:
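```bash
# assumes pnpm (see prerequisites); use your package manager's equivalent
pnpm expo start
```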
Head to your browser and open http://localhost:8081. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Expo.
## Enhance Your Chatbot with Tools
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
### Update Your API route
Modify your `app/api/chat+api.ts` file to include the new weather tool:
```tsx filename="app/api/chat+api.ts" highlight="2,13-27"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
},
});
return result.toDataStreamResponse();
}
```
In this updated code:
1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
- Has a description that helps the model understand when to use it.
- Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
- Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.
Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and you can access the results via the `toolInvocations` property on the message object.
You may need to restart your development server for the changes to take
effect.
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.
### Update the UI
To display the tool invocations in your UI, update your `app/(tabs)/index.tsx` file:
```tsx filename="app/(tabs)/index.tsx" highlight="20-24"
import { generateAPIUrl } from '@/utils';
import { useChat } from '@ai-sdk/react';
import { fetch as expoFetch } from 'expo/fetch';
import { View, TextInput, ScrollView, Text, SafeAreaView } from 'react-native';
export default function App() {
const { messages, error, handleInputChange, input, handleSubmit } = useChat({
fetch: expoFetch as unknown as typeof globalThis.fetch,
api: generateAPIUrl('/api/chat'),
onError: error => console.error(error, 'ERROR'),
});
  if (error) return <Text>{error.message}</Text>;

  return (
    <SafeAreaView style={{ flex: 1 }}>
      <View style={{ flex: 1, paddingHorizontal: 8 }}>
        <ScrollView style={{ flex: 1 }}>
          {messages.map(m => (
            <View key={m.id} style={{ marginVertical: 8 }}>
              <Text style={{ fontWeight: '700' }}>{m.role}</Text>
              {m.toolInvocations ? (
                <Text>{JSON.stringify(m.toolInvocations, null, 2)}</Text>
              ) : (
                <Text>{m.content}</Text>
              )}
            </View>
          ))}
        </ScrollView>

        <TextInput
          style={{ backgroundColor: 'white', padding: 8 }}
          placeholder="Say something..."
          value={input}
          onChange={e =>
            handleInputChange({
              ...e,
              target: {
                ...e.target,
                value: e.nativeEvent.text,
              },
            } as unknown as React.ChangeEvent)
          }
          onSubmitEditing={e => {
            handleSubmit(e);
            e.preventDefault();
          }}
          autoFocus={true}
        />
      </View>
    </SafeAreaView>
  );
}
```
You may need to restart your development server for the changes to take
effect.
With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before.
Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.
## Enabling Multi-Step Tool Calls
You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.
### Update Your Client-Side Code
Modify your `app/(tabs)/index.tsx` file to include the `maxSteps` option:
```tsx filename="app/(tabs)/index.tsx" highlight="9"
import { useChat } from '@ai-sdk/react';
// ... rest of your imports
export default function App() {
const { messages, error, handleInputChange, input, handleSubmit } = useChat({
fetch: expoFetch as unknown as typeof globalThis.fetch,
api: generateAPIUrl('/api/chat'),
onError: error => console.error(error, 'ERROR'),
maxSteps: 5,
});
// ... rest of your component code
}
```
You may need to restart your development server for the changes to take
effect.
Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.
By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.
### Update Your API Route
Update your `app/api/chat+api.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:
```tsx filename="app/api/chat+api.ts" highlight="27-40"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
tools: {
weather: tool({
description: 'Get the weather in a location (fahrenheit)',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => {
const temperature = Math.round(Math.random() * (90 - 32) + 32);
return {
location,
temperature,
};
},
}),
convertFahrenheitToCelsius: tool({
description: 'Convert a temperature in fahrenheit to celsius',
parameters: z.object({
temperature: z
.number()
.describe('The temperature in fahrenheit to convert'),
}),
execute: async ({ temperature }) => {
const celsius = Math.round((temperature - 32) * (5 / 9));
return {
celsius,
};
},
}),
},
});
return result.toDataStreamResponse();
}
```
You may need to restart your development server for the changes to take
effect.
Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
## Where to Next?
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:
- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).
---
title: Getting Started
description: Welcome to the AI SDK documentation!
---
# Getting Started
The following guides are intended to provide you with an introduction to some of the core features provided by the AI SDK.
## Backend Framework Examples
You can also use [AI SDK Core](/docs/ai-sdk-core/overview) and [AI SDK UI](/docs/ai-sdk-ui/overview) with the following backend frameworks:
---
title: RAG Chatbot
description: Learn how to build a RAG Chatbot with the AI SDK and Next.js
---
# RAG Chatbot Guide
In this guide, you will learn how to build a retrieval-augmented generation (RAG) chatbot application.
Before we dive in, let's look at what RAG is, and why we would want to use it.
### What is RAG?
RAG stands for retrieval augmented generation. In simple terms, RAG is the process of providing a Large Language Model (LLM) with specific information relevant to the prompt.
### Why is RAG important?
While LLMs are powerful, the information they can reason on is restricted to the data they were trained on. This problem becomes apparent when asking an LLM for information outside of its training data, like proprietary data or information that emerged after the model’s training cutoff. RAG solves this problem by fetching information relevant to the prompt and then passing that to the model as context.
To illustrate with a basic example, imagine asking the model for your favorite food:
```txt
**input**
What is my favorite food?
**generation**
I don't have access to personal information about individuals, including their
favorite foods.
```
Not surprisingly, the model doesn’t know. But imagine, alongside your prompt, the model received some extra context:
```txt
**input**
Respond to the user's prompt using only the provided context.
user prompt: 'What is my favorite food?'
context: user loves chicken nuggets
**generation**
Your favorite food is chicken nuggets!
```
Just like that, you have augmented the model’s generation by providing relevant information to the query. Assuming the model has the appropriate information, it is now highly likely to return an accurate response to the user's query. But how does it retrieve the relevant information? The answer relies on a concept called embedding.
You could fetch context for your RAG application in any way (e.g. a Google search).
Embeddings and vector databases are just one specific retrieval approach used to
achieve semantic search.
### Embedding
[Embeddings](/docs/ai-sdk-core/embeddings) are a way to represent words, phrases, or images as vectors in a high-dimensional space. In this space, similar words are close to each other, and the distance between words can be used to measure their similarity.
In practice, this means that if you embedded the words `cat` and `dog`, you would expect them to be plotted close to each other in vector space. A common way to calculate the similarity between two vectors is cosine similarity, where a value of 1 indicates high similarity and a value of -1 indicates high opposition.
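As a small illustration (not part of this guide's project code), you could embed two words with the AI SDK and compare them using its `cosineSimilarity` helper; the embedding model below is just an example:

```tsx
import { cosineSimilarity, embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

// Embed two words with the same model, then compare the resulting vectors.
const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-ada-002'),
  values: ['cat', 'dog'],
});

// 1 means the vectors point the same way, -1 means they are opposed.
console.log(cosineSimilarity(embeddings[0], embeddings[1]));
```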
Don’t worry if this seems complicated. A high-level understanding is all you
need to get started! For a more in-depth introduction to embeddings, check out
[this guide](https://jalammar.github.io/illustrated-word2vec/).
As mentioned above, embeddings are a way to represent the semantic meaning of **words and phrases**. The implication here is that the larger the input to your embedding, the lower quality the embedding will be. So how would you approach embedding content longer than a simple phrase?
### Chunking
Chunking refers to the process of breaking down a particular source material into smaller pieces. There are many different approaches to chunking and it’s worth experimenting as the most effective approach can differ by use case. A simple and common approach to chunking (and what you will be using in this guide) is separating written content by sentences.
Once your source material is appropriately chunked, you can embed each one and then store the embedding and the chunk together in a database. Embeddings can be stored in any database that supports vectors. For this tutorial, you will be using [Postgres](https://www.postgresql.org/) alongside the [pgvector](https://github.com/pgvector/pgvector) plugin.
### All Together Now
Combining all of this together, RAG is the process of enabling the model to respond with information outside of its training data by embedding the user's query, retrieving the relevant source material (chunks) with the highest semantic similarity, and then passing them alongside the initial query as context. Going back to the example where you ask the model for your favorite food, the prompt preparation process would look like this.
By passing the appropriate context and refining the model’s objective, you are able to fully leverage its strengths as a reasoning machine.
Onto the project!
## Project Setup
In this project, you will build a chatbot that will only respond with information that it has within its knowledge base. The chatbot will be able to both store and retrieve information. This project has many interesting use cases from customer support through to building your own second brain!
This project will use the following stack:
- [Next.js](https://nextjs.org) 14 (App Router)
- [ AI SDK ](https://sdk.vercel.ai/docs)
- [OpenAI](https://openai.com)
- [ Drizzle ORM ](https://orm.drizzle.team)
- [ Postgres ](https://www.postgresql.org/) with [ pgvector ](https://github.com/pgvector/pgvector)
- [ shadcn-ui ](https://ui.shadcn.com) and [ TailwindCSS ](https://tailwindcss.com) for styling
### Clone Repo
To reduce the scope of this guide, you will be starting with a [repository](https://github.com/vercel/ai-sdk-rag-starter) that already has a few things set up for you:
- Drizzle ORM (`lib/db`) including an initial migration and a script to migrate (`db:migrate`)
- a basic schema for the `resources` table (this will be for source material)
- a Server Action for creating a `resource`
To get started, clone the starter repository with the following command:
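For example:

```bash
git clone https://github.com/vercel/ai-sdk-rag-starter
cd ai-sdk-rag-starter
```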
First things first, run the following command to install the project’s dependencies:
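This guide assumes pnpm, but npm or yarn work just as well:

```bash
pnpm install
```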
### Create Database
You will need a Postgres database to complete this tutorial. If you don’t have Postgres set up on your local machine, you can:
- Create a free Postgres database with [Vercel Postgres](https://vercel.com/docs/storage/vercel-postgres); or
- Follow [this guide](https://www.prisma.io/dataguide/postgresql/setting-up-a-local-postgresql-database) to set it up locally
### Migrate Database
Once you have a Postgres database, you need to add the connection string as an environment secret.
Make a copy of the `.env.example` file and rename it to `.env`.
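On macOS or Linux, for example:

```bash
cp .env.example .env
```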
Open the new `.env` file. You should see an item called `DATABASE_URL`. Copy in your database connection string after the equals sign.
With that set up, you can now run your first database migration. Run the following command:
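Using the starter's `db:migrate` script (shown here with pnpm):

```bash
pnpm db:migrate
```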
This will first add the `pgvector` extension to your database. Then it will create a new table for your `resources` schema that is defined in `lib/db/schema/resources.ts`. This schema has four columns: `id`, `content`, `createdAt`, and `updatedAt`.
If you experience an error with the migration, open your migration file
(`lib/db/migrations/0000_yielding_bloodaxe.sql`), cut (copy and remove) the
first line, and run it directly on your postgres instance. You should now be
able to run the updated migration. [More
info](https://github.com/vercel/ai-sdk-rag-starter/issues/1).
### OpenAI API Key
For this guide, you will need an OpenAI API key. To generate an API key, go to [platform.openai.com](https://platform.openai.com/).
Once you have your API key, paste it into your `.env` file (`OPENAI_API_KEY`).
## Build
Let’s build a quick task list of what needs to be done:
1. Create a table in your database to store embeddings
2. Add logic to chunk and create embeddings when creating resources
3. Create a chatbot
4. Give the chatbot tools to query / create resources for its knowledge base
### Create Embeddings Table
Currently, your application has one table (`resources`) which has a column (`content`) for storing content. Remember, each `resource` (source material) will have to be chunked, embedded, and then stored. Let’s create a table called `embeddings` to store these chunks.
Create a new file (`lib/db/schema/embeddings.ts`) and add the following code:
```tsx filename="lib/db/schema/embeddings.ts"
import { generateId } from 'ai';
import { index, pgTable, text, varchar, vector } from 'drizzle-orm/pg-core';
import { resources } from './resources';
export const embeddings = pgTable(
'embeddings',
{
id: varchar('id', { length: 191 })
.primaryKey()
.$defaultFn(() => generateId()),
resourceId: varchar('resource_id', { length: 191 }).references(
() => resources.id,
{ onDelete: 'cascade' },
),
content: text('content').notNull(),
embedding: vector('embedding', { dimensions: 1536 }).notNull(),
},
table => ({
embeddingIndex: index('embeddingIndex').using(
'hnsw',
table.embedding.op('vector_cosine_ops'),
),
}),
);
```
This table has four columns:
- `id` - unique identifier
- `resourceId` - a foreign key relation to the full source material
- `content` - the plain text chunk
- `embedding` - the vector representation of the plain text chunk
To perform similarity search, you also need to include an index ([HNSW](https://github.com/pgvector/pgvector?tab=readme-ov-file#hnsw) or [IVFFlat](https://github.com/pgvector/pgvector?tab=readme-ov-file#ivfflat)) on this column for better performance.
To push this change to the database, run the following command:
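Assuming the starter exposes a Drizzle `db:push` script (adjust to however you apply schema changes):

```bash
pnpm db:push
```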
### Add Embedding Logic
Now that you have a table to store embeddings, it’s time to write the logic to create the embeddings.
Create a file with the following command:
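For example, from the project root:

```bash
mkdir -p lib/ai && touch lib/ai/embedding.ts
```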
### Generate Chunks
Remember, to create an embedding, you will start with a piece of source material (unknown length), break it down into smaller chunks, embed each chunk, and then save the chunk to the database. Let’s start by creating a function to break the source material into small chunks.
```tsx filename="lib/ai/embedding.ts"
const generateChunks = (input: string): string[] => {
return input
.trim()
.split('.')
.filter(i => i !== '');
};
```
This function will take an input string and split it by periods, filtering out any empty items. This will return an array of strings. It is worth experimenting with different chunking techniques in your projects as the best technique will vary.
### Install AI SDK
You will use the AI SDK to create embeddings. This will require two more dependencies, which you can install by running the following command:
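With pnpm, for example:

```bash
pnpm add ai @ai-sdk/openai
```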
This will install the [AI SDK](https://sdk.vercel.ai/docs) and the [OpenAI provider](/providers/ai-sdk-providers/openai).
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change models and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
### Generate Embeddings
Let’s add a function to generate embeddings. Copy the following code into your `lib/ai/embedding.ts` file.
```tsx filename="lib/ai/embedding.ts" highlight="1-2,4,13-22"
import { embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';
const embeddingModel = openai.embedding('text-embedding-ada-002');
const generateChunks = (input: string): string[] => {
return input
.trim()
.split('.')
.filter(i => i !== '');
};
export const generateEmbeddings = async (
value: string,
): Promise<Array<{ embedding: number[]; content: string }>> => {
const chunks = generateChunks(value);
const { embeddings } = await embedMany({
model: embeddingModel,
values: chunks,
});
return embeddings.map((e, i) => ({ content: chunks[i], embedding: e }));
};
```
In this code, you first define the model you want to use for the embeddings. In this example, you are using OpenAI’s `text-embedding-ada-002` embedding model.
Next, you create an asynchronous function called `generateEmbeddings`. This function will take in the source material (`value`) as an input and return a promise of an array of objects, each containing an embedding and content. Within the function, you first generate chunks for the input. Then, you pass those chunks to the [`embedMany`](/docs/reference/ai-sdk-core/embed-many) function imported from the AI SDK which will return embeddings of the chunks you passed in. Finally, you map over and return the embeddings in a format that is ready to save in the database.
### Update Server Action
Open the file at `lib/actions/resources.ts`. This file has one function, `createResource`, which, as the name implies, allows you to create a resource.
```tsx filename="lib/actions/resources.ts"
'use server';
import {
NewResourceParams,
insertResourceSchema,
resources,
} from '@/lib/db/schema/resources';
import { db } from '../db';
export const createResource = async (input: NewResourceParams) => {
try {
const { content } = insertResourceSchema.parse(input);
const [resource] = await db
.insert(resources)
.values({ content })
.returning();
return 'Resource successfully created.';
} catch (e) {
if (e instanceof Error)
return e.message.length > 0 ? e.message : 'Error, please try again.';
}
};
```
This function is a [Server Action](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations#with-client-components), as denoted by the `'use server';` directive at the top of the file. This means that it can be called anywhere in your Next.js application. This function will take an input, run it through a [Zod](https://zod.dev) schema to ensure it adheres to the correct schema, and then create a new resource in the database. This is the ideal location to generate and store embeddings of the newly created resources.
Update the file with the following code:
```tsx filename="lib/actions/resources.ts" highlight="9-10,21-27,29"
'use server';
import {
NewResourceParams,
insertResourceSchema,
resources,
} from '@/lib/db/schema/resources';
import { db } from '../db';
import { generateEmbeddings } from '../ai/embedding';
import { embeddings as embeddingsTable } from '../db/schema/embeddings';
export const createResource = async (input: NewResourceParams) => {
try {
const { content } = insertResourceSchema.parse(input);
const [resource] = await db
.insert(resources)
.values({ content })
.returning();
const embeddings = await generateEmbeddings(content);
await db.insert(embeddingsTable).values(
embeddings.map(embedding => ({
resourceId: resource.id,
...embedding,
})),
);
return 'Resource successfully created and embedded.';
} catch (error) {
return error instanceof Error && error.message.length > 0
? error.message
: 'Error, please try again.';
}
};
```
First, you call the `generateEmbeddings` function created in the previous step, passing in the source material (`content`). Once you have the embeddings of the source material, you can save them to the database, passing the `resourceId` alongside each embedding.
### Create Root Page
Great! Let's build the frontend. The AI SDK’s [`useChat`](/docs/reference/ai-sdk-ui/use-chat) hook allows you to easily create a conversational user interface for your chatbot application.
Replace your root page (`app/page.tsx`) with the following code.
```tsx filename="app/page.tsx"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          <div className="font-bold">{m.role}</div>
          <p>{m.content}</p>
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
The `useChat` hook enables the streaming of chat messages from your AI provider (you will be using OpenAI), manages the state for chat input, and updates the UI automatically as new messages are received.
Run the following command to start the Next.js dev server:
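With pnpm, for example:

```bash
pnpm run dev
```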
Head to [http://localhost:3000](http://localhost:3000/). You should see an empty screen with an input bar floating at the bottom. Try to send a message. The message shows up in the UI for a fraction of a second and then disappears. This is because you haven’t set up the corresponding API route to call the model! By default, `useChat` will send a POST request to the `/api/chat` endpoint with the `messages` as the request body.
You can customize the endpoint in the `useChat` configuration object.
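For instance, if your route lives somewhere else, you could point the hook at it (the path below is purely illustrative):

```tsx
const { messages, input, handleInputChange, handleSubmit } = useChat({
  // hypothetical endpoint; the default is /api/chat
  api: '/api/custom-chat',
});
```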
### Create API Route
In Next.js, you can create custom request handlers for a given route using [Route Handlers](https://nextjs.org/docs/app/building-your-application/routing/route-handlers). Route Handlers are defined in a `route.ts` file and can export HTTP methods like `GET`, `POST`, `PUT`, `PATCH` etc.
Create a file at `app/api/chat/route.ts` by running the following command:
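For example:

```bash
mkdir -p app/api/chat && touch app/api/chat/route.ts
```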
Open the file and add the following code:
```tsx filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse();
}
```
In this code, you declare and export an asynchronous function called `POST`. You retrieve the `messages` from the request body and then pass them to the [`streamText`](/docs/reference/ai-sdk-core/stream-text) function imported from the AI SDK, alongside the model you would like to use. Finally, you return the model’s response to the client as a data stream using `toDataStreamResponse`.
Head back to the browser and try to send a message again. You should see a response from the model streamed directly in!
### Refining your prompt
While you now have a working chatbot, it isn't doing anything special.
Let’s add system instructions to refine and restrict the model’s behavior. In this case, you want the model to only use information it has retrieved to generate responses. Update your route handler with the following code:
```tsx filename="app/api/chat/route.ts" highlight="12-14"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
system: `You are a helpful assistant. Check your knowledge base before answering any questions.
Only respond to questions using information from tool calls.
if no relevant information is found in the tool calls, respond, "Sorry, I don't know."`,
messages,
});
return result.toDataStreamResponse();
}
```
Head back to the browser and try to ask the model what your favorite food is. The model should now respond exactly as you instructed above (“Sorry, I don’t know”) given it doesn’t have any relevant information.
In its current form, your chatbot is now, well, useless. How do you give the model the ability to add and query information?
### Using Tools
A [tool](/docs/foundations/tools) is a function that can be called by the model to perform a specific task. You can think of a tool like a program you give to the model that it can run as and when it deems necessary.
Let’s see how you can create a tool to give the model the ability to create, embed, and save a resource to your chatbot's knowledge base.
### Add Resource Tool
Update your route handler with the following code:
```tsx filename="app/api/chat/route.ts" highlight="18-29"
import { createResource } from '@/lib/actions/resources';
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
system: `You are a helpful assistant. Check your knowledge base before answering any questions.
Only respond to questions using information from tool calls.
if no relevant information is found in the tool calls, respond, "Sorry, I don't know."`,
messages,
tools: {
addResource: tool({
description: `add a resource to your knowledge base.
If the user provides a random piece of knowledge unprompted, use this tool without asking for confirmation.`,
parameters: z.object({
content: z
.string()
.describe('the content or resource to add to the knowledge base'),
}),
execute: async ({ content }) => createResource({ content }),
}),
},
});
return result.toDataStreamResponse();
}
```
In this code, you define a tool called `addResource`. This tool has three elements:
- **description**: description of the tool that will influence when the tool is picked.
- **parameters**: [Zod schema](https://sdk.vercel.ai/docs/foundations/tools#schema-specification-and-validation-with-zod) that defines the parameters necessary for the tool to run.
- **execute**: An asynchronous function that is called with the arguments from the tool call.
In simple terms, on each generation, the model will decide whether it should call the tool. If it deems it should call the tool, it will extract the parameters from the input and then append a new `message` to the `messages` array of type `tool-call`. The AI SDK will then run the `execute` function with the parameters provided by the `tool-call` message.
Head back to the browser and tell the model your favorite food. You should see an empty response in the UI. Did anything happen? Let’s see. Run the following command in a new terminal window.
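Assuming the starter exposes a `db:studio` script that runs Drizzle Studio:

```bash
pnpm db:studio
```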
This will start Drizzle Studio, where you can view the rows in your database. You should see a new row in both the `embeddings` and `resources` tables with your favorite food!
Let’s make a few changes in the UI to communicate to the user when a tool has been called. Head back to your root page (`app/page.tsx`) and add the following code:
```tsx filename="app/page.tsx" highlight="14-22"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          <div className="font-bold">{m.role}</div>
          <p>
            {m.content.length > 0 ? (
              m.content
            ) : (
              <span className="italic font-light">
                {'calling tool: ' + m?.toolInvocations?.[0].toolName}
              </span>
            )}
          </p>
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} placeholder="Say something..." onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
With this change, you now conditionally render the tool that has been called directly in the UI. Save the file and head back to the browser. Tell the model your favorite movie. You should see which tool is called in place of the model’s typical text response.
### Improving UX with Multi-Step Calls
It would be nice if the model could summarize the action too. However, technically, once the model calls a tool, it has completed its generation as it ‘generated’ a tool call. How could you achieve this desired behaviour?
The AI SDK has a feature called [`maxSteps`](/docs/ai-sdk-core/tools-and-tool-calling#multi-step-calls) which will automatically send tool call results back to the model!
Open your root page (`app/page.tsx`) and add the following key to the `useChat` configuration object:
```tsx filename="app/page.tsx" highlight="3-5"
// ... Rest of your code
const { messages, input, handleInputChange, handleSubmit } = useChat({
maxSteps: 3,
});
// ... Rest of your code
```
Head back to the browser and tell the model your favorite pizza topping (note: pineapple is not an option). You should see a follow-up response from the model confirming the action.
### Retrieve Resource Tool
The model can now add and embed arbitrary information to your knowledge base. However, it still isn’t able to query it. Let’s create a new tool to allow the model to answer questions by finding relevant information in your knowledge base.
To find similar content, you will need to embed the user's query, search the database for semantic similarities, then pass those items to the model as context alongside the query. To achieve this, let’s update your embedding logic file (`lib/ai/embedding.ts`):
```tsx filename="lib/ai/embedding.ts" highlight="1,3-5,27-34,36-49"
import { embed, embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';
import { db } from '../db';
import { cosineDistance, desc, gt, sql } from 'drizzle-orm';
import { embeddings } from '../db/schema/embeddings';
const embeddingModel = openai.embedding('text-embedding-ada-002');
const generateChunks = (input: string): string[] => {
return input
.trim()
.split('.')
.filter(i => i !== '');
};
export const generateEmbeddings = async (
value: string,
): Promise<Array<{ embedding: number[]; content: string }>> => {
const chunks = generateChunks(value);
const { embeddings } = await embedMany({
model: embeddingModel,
values: chunks,
});
return embeddings.map((e, i) => ({ content: chunks[i], embedding: e }));
};
export const generateEmbedding = async (value: string): Promise<number[]> => {
const input = value.replaceAll('\\n', ' ');
const { embedding } = await embed({
model: embeddingModel,
value: input,
});
return embedding;
};
export const findRelevantContent = async (userQuery: string) => {
const userQueryEmbedded = await generateEmbedding(userQuery);
const similarity = sql`1 - (${cosineDistance(
embeddings.embedding,
userQueryEmbedded,
)})`;
const similarGuides = await db
.select({ name: embeddings.content, similarity })
.from(embeddings)
.where(gt(similarity, 0.5))
.orderBy(t => desc(t.similarity))
.limit(4);
return similarGuides;
};
```
In this code, you add two functions:
- `generateEmbedding`: generate a single embedding from an input string
- `findRelevantContent`: embeds the user’s query, searches the database for similar items, then returns relevant items
With that done, it’s onto the final step: creating the tool.
Go back to your route handler (`api/chat/route.ts`) and add a new tool called `getInformation`:
```ts filename="api/chat/route.ts" highlight="5,30-36"
import { createResource } from '@/lib/actions/resources';
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
import { findRelevantContent } from '@/lib/ai/embedding';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
system: `You are a helpful assistant. Check your knowledge base before answering any questions.
Only respond to questions using information from tool calls.
if no relevant information is found in the tool calls, respond, "Sorry, I don't know."`,
tools: {
addResource: tool({
description: `add a resource to your knowledge base.
If the user provides a random piece of knowledge unprompted, use this tool without asking for confirmation.`,
parameters: z.object({
content: z
.string()
.describe('the content or resource to add to the knowledge base'),
}),
execute: async ({ content }) => createResource({ content }),
}),
getInformation: tool({
description: `get information from your knowledge base to answer questions.`,
parameters: z.object({
question: z.string().describe('the users question'),
}),
execute: async ({ question }) => findRelevantContent(question),
}),
},
});
return result.toDataStreamResponse();
}
```
Head back to the browser, refresh the page, and ask for your favorite food. You should see the model call the `getInformation` tool, and then use the relevant information to formulate a response!
## Conclusion
Congratulations, you have successfully built an AI chatbot that can dynamically add and retrieve information to and from a knowledge base. Throughout this guide, you learned how to create and store embeddings, set up server actions to manage resources, and use tools to extend the capabilities of your chatbot.
---
title: Multi-Modal Chatbot
description: Learn how to build a multi-modal chatbot with the AI SDK!
---
# Multi-Modal Chatbot
In this guide, you will build a multi-modal AI-chatbot with a streaming user interface.
Multi-modal refers to the ability of the chatbot to understand and generate responses in multiple formats, such as text, images, and videos. In this example, we will focus on sending images and generating text-based responses.
## Prerequisites
To follow this quickstart, you'll need:
- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.
If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.
## Create Your Application
Start by creating a new Next.js application. This command will create a new directory named `multi-modal-chatbot` and set up a basic Next.js application inside it.
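With pnpm, for example:

```bash
pnpm create next-app@latest multi-modal-chatbot
```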
Be sure to select yes when prompted to use the App Router. If you are
looking for the Next.js Pages Router quickstart guide, you can find it
[here](/docs/getting-started/nextjs-pages-router).
Navigate to the newly created directory:
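For example:

```bash
cd multi-modal-chatbot
```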
### Install dependencies
Install `ai` and `@ai-sdk/openai`, the Vercel AI package and the AI SDK's [ OpenAI provider ](/providers/ai-sdk-providers/openai) respectively.
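With pnpm, for example:

```bash
pnpm add ai @ai-sdk/openai
```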
The AI SDK is designed to be a unified interface to interact with any large
language model. This means that you can change models and providers with just
one line of code! Learn more about [available providers](/providers) and
[building custom providers](/providers/community-providers/custom-providers)
in the [providers](/providers) section.
Make sure you are using `ai` version 3.2.27 or higher.
### Configure OpenAI API key
Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
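For example:

```bash
touch .env.local
```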
Edit the `.env.local` file:
```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```
Replace `xxxxxxxxx` with your actual OpenAI API key.
The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY`
environment variable.
## Implementation Plan
To build a multi-modal chatbot, you will need to:
- Create a Route Handler to handle incoming chat messages and generate responses.
- Wire up the UI to display chat messages, provide a user input, and handle submitting new messages.
- Add the ability to upload images and attach them alongside the chat messages.
## Create a Route Handler
Create a route handler, `app/api/chat/route.ts` and add the following code:
```tsx filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
});
return result.toDataStreamResponse();
}
```
Let's take a look at what is happening in this code:
1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [ `toDataStreamResponse` ](/docs/reference/ai-sdk-core/stream-text#to-ai-stream-response) function which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.
This Route Handler creates a POST request endpoint at `/api/chat`.
## Wire up the UI
Now that you have a Route Handler that can query a large language model (LLM), it's time to setup your frontend. [ AI SDK UI ](/docs/ai-sdk-ui) abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`app/page.tsx`) with the following code to show a list of chat messages and provide a user message input:
```tsx filename="app/page.tsx"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
Make sure you add the `"use client"` directive to the top of your file. This
allows you to add interactivity with JavaScript.
This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission. The `useChat` hook provides multiple utility functions and state variables:
- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleInputChange` and `handleSubmit` - functions to handle user interactions (typing into the input field and submitting the form, respectively).
- `isLoading` - boolean that indicates whether the API request is in progress.
## Add Image Upload
To make your chatbot multi-modal, let's add the ability to upload and send images to the model. There are two ways to send attachments alongside a message with the `useChat` hook: by [ providing a `FileList` object ](/docs/ai-sdk-ui/chatbot#filelist) or a [ list of URLs ](/docs/ai-sdk-ui/chatbot#urls) to the `handleSubmit` function. In this guide, you will be using the `FileList` approach as it does not require any additional setup.
Update your root page (`app/page.tsx`) with the following code:
```tsx filename="app/page.tsx" highlight="4-5,10-11,19-33,39-49,51-61"
'use client';
import { useChat } from 'ai/react';
import { useRef, useState } from 'react';
import Image from 'next/image';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
  const [files, setFiles] = useState<FileList | undefined>(undefined);
  const fileInputRef = useRef<HTMLInputElement>(null);

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
          <div>
            {m?.experimental_attachments
              ?.filter(attachment => attachment?.contentType?.startsWith('image/'))
              .map((attachment, index) => (
                <Image
                  key={`${m.id}-${index}`}
                  src={attachment.url}
                  width={500}
                  height={500}
                  alt={attachment.name ?? `attachment-${index}`}
                />
              ))}
          </div>
        </div>
      ))}

      <form
        onSubmit={event => {
          // Pass the selected files alongside the message, then reset the input.
          handleSubmit(event, { experimental_attachments: files });
          setFiles(undefined);
          if (fileInputRef.current) {
            fileInputRef.current.value = '';
          }
        }}
      >
        <input
          type="file"
          onChange={event => {
            if (event.target.files) {
              setFiles(event.target.files);
            }
          }}
          multiple
          ref={fileInputRef}
        />
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```
In this code, you:
1. Create state to hold the files and create a ref to the file input field.
2. Display the "uploaded" files in the UI.
3. Update the `onSubmit` function to call the `handleSubmit` function manually, passing the files as an option using the `experimental_attachments` key.
4. Add a file input field to the form, including an `onChange` handler to handle updating the files state.
## Running Your Application
With that, you have built everything you need for your multi-modal chatbot! To start your application, use the command:
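With pnpm, for example:

```bash
pnpm run dev
```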
Head to your browser and open http://localhost:3000. You should see an input field and a button to upload an image.
Upload a file and ask the model to describe what it sees. Watch as the model's response is streamed back to you!
## Where to Next?
You've built a multi-modal AI chatbot using the AI SDK! Experiment and extend the functionality of this application further by exploring [tool calling](/docs/ai-sdk-core/tools-and-tool-calling) or introducing more granular control over [AI and UI states](/docs/ai-sdk-rsc/generative-ui-state).
If you are looking to leverage the broader capabilities of LLMs, Vercel [AI SDK Core](/docs/ai-sdk-core) provides a comprehensive set of lower-level tools and APIs that will help you unlock a wider range of AI functionalities beyond the chatbot paradigm.
---
title: Get started with Llama 3.1
description: Get started with Llama 3.1 using the AI SDK.
---
# Get started with Llama 3.1
With the [release of Llama 3.1](https://ai.meta.com/blog/meta-llama-3-1/), there has never been a better time to start building AI applications.
The [AI SDK](/) is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like Llama 3.1 alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more.
## Llama 3.1
The release of Meta's Llama 3.1 is an important moment in AI development. As the first state-of-the-art open weight AI model, Llama 3.1 is helping accelerate developers building AI apps. Available in 8B, 70B, and 405B sizes, these instruction-tuned models work well for tasks like dialogue generation, translation, reasoning, and code generation.
## Benchmarks
Llama 3.1 surpasses most available open-source chat models on common industry benchmarks and even outperforms some closed-source models, offering superior performance in language nuances, contextual understanding, and complex multi-step tasks. The models' refined post-training processes significantly improve response alignment, reduce false refusal rates, and enhance answer diversity, making Llama 3.1 a powerful and accessible tool for building generative AI applications.
![Llama 3.1 Benchmarks](/images/llama-3_1-benchmarks.png)
Source: [Meta AI - Llama 3.1 Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md)
## Choosing Model Size
Llama 3.1 includes a new 405B parameter model, making it the largest open-source model available today. This model is designed to handle the most complex and demanding tasks.
When choosing between the different sizes of Llama 3.1 models (405B, 70B, 8B), consider the trade-off between performance and computational requirements. The 405B model offers the highest accuracy and capability for complex tasks but requires significant computational resources. The 70B model provides a good balance of performance and efficiency for most applications, while the 8B model is suitable for simpler tasks or resource-constrained environments where speed and lower computational overhead are priorities.
## Getting Started with the AI SDK
The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Integrating LLMs into applications is complicated and heavily dependent on the specific model provider you use.
The AI SDK abstracts away the differences between model providers, eliminates boilerplate code for building chatbots, and allows you to go beyond text output to generate rich, interactive components.
At the center of the AI SDK is [AI SDK Core](/docs/ai-sdk-core/overview), which provides a unified API to call any LLM. The code snippet below is all you need to call Llama 3.1 (using [Groq](https://groq.com)) with the AI SDK:
```tsx
import { generateText } from 'ai';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
const { text } = await generateText({
model: groq('llama-3.1-405b-reasoning'),
prompt: 'What is love?',
});
```
Llama 3.1 is available to use with many AI SDK providers including
[Groq](/providers/ai-sdk-providers/groq), [Amazon
Bedrock](/providers/ai-sdk-providers/amazon-bedrock),
[Perplexity](/providers/ai-sdk-providers/perplexity),
[Baseten](/providers/openai-compatible-providers/baseten),
[Fireworks](/providers/ai-sdk-providers/fireworks), and more.
AI SDK Core abstracts away the differences between model providers, allowing you to focus on building great applications. Prefer to use [Amazon Bedrock](/providers/ai-sdk-providers/amazon-bedrock)? The unified interface also means that you can easily switch between models by changing just two lines of code.
```tsx highlight="2,5"
import { generateText } from 'ai';
import { bedrock } from '@ai-sdk/amazon-bedrock';
const { text } = await generateText({
model: bedrock('meta.llama3-1-405b-instruct-v1'),
prompt: 'What is love?',
});
```
### Streaming the Response
To stream the model's response as it's being generated, update your code snippet to use the [`streamText`](/docs/reference/ai-sdk-core/stream-text) function.
```tsx
import { streamText } from 'ai';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
const { textStream } = streamText({
model: groq('llama-3.1-70b-versatile'),
prompt: 'What is love?',
});
```
### Generating Structured Data
While text generation can be useful, you might want to generate structured JSON data. For example, you might want to extract information from text, classify data, or generate synthetic data. AI SDK Core provides two functions ([`generateObject`](/docs/reference/ai-sdk-core/generate-object) and [`streamObject`](/docs/reference/ai-sdk-core/stream-object)) to generate structured data, allowing you to constrain model outputs to a specific schema.
```tsx
import { generateObject } from 'ai';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
import { z } from 'zod';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
const { object } = await generateObject({
model: groq('llama-3.1-70b-versatile'),
schema: z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
steps: z.array(z.string()),
}),
}),
prompt: 'Generate a lasagna recipe.',
});
```
This code snippet will generate a type-safe recipe that conforms to the specified Zod schema.
### Tools
While LLMs have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). The solution: tools, which are like programs that you provide to the model, which it can choose to call as necessary.
### Using Tools with the AI SDK
The AI SDK supports tool usage across several of its functions, including [`generateText`](/docs/reference/ai-sdk-core/generate-text) and [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui). By passing one or more tools to the `tools` parameter, you can extend the capabilities of LLMs, allowing them to perform discrete tasks and interact with external systems.
Here's an example of how you can use a tool with the AI SDK and Llama 3.1:
```tsx
import { generateText, tool } from 'ai';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
import { z } from 'zod';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
const { text } = await generateText({
model: groq('llama-3.1-70b-versatile'),
prompt: 'What is the weather like today?',
tools: {
weather: tool({
description: 'Get the weather in a location',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: 72 + Math.floor(Math.random() * 21) - 10,
}),
}),
},
});
```
In this example, the `weather` tool allows the model to fetch (simulated) real-time weather data, enhancing its ability to provide accurate and up-to-date information.
### Agents
Agents take your AI applications a step further by allowing models to execute multiple steps (i.e. tools) in a non-deterministic way, making decisions based on context and user input.
Agents use LLMs to choose the next step in a problem-solving process. They can reason at each step and make decisions based on the evolving context.
### Implementing Agents with the AI SDK
The AI SDK supports agent implementation through the `maxSteps` parameter. This allows the model to make multiple decisions and tool calls in a single interaction.
Here's an example of an agent that solves math problems:
```tsx
import { generateText, tool } from 'ai';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
import * as mathjs from 'mathjs';
import { z } from 'zod';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
const problem =
'Calculate the profit for a day if revenue is $5000 and expenses are $3500.';
const { text: answer } = await generateText({
model: groq('llama-3.1-70b-versatile'),
system:
'You are solving math problems. Reason step by step. Use the calculator when necessary.',
prompt: problem,
tools: {
calculate: tool({
description: 'A tool for evaluating mathematical expressions.',
parameters: z.object({ expression: z.string() }),
execute: async ({ expression }) => mathjs.evaluate(expression),
}),
},
maxSteps: 5,
});
```
In this example, the agent can use the calculator tool multiple times if needed, reasoning through the problem step by step.
### Building Interactive Interfaces
AI SDK Core can be paired with [AI SDK UI](/docs/ai-sdk-ui/overview), another powerful component of the AI SDK, to streamline the process of building chat, completion, and assistant interfaces with popular frameworks like Next.js, Nuxt, SvelteKit, and SolidStart.
AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently.
With four main hooks — [`useChat`](/docs/reference/ai-sdk-ui/use-chat), [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion), [`useObject`](/docs/reference/ai-sdk-ui/use-object), and [`useAssistant`](/docs/reference/ai-sdk-ui/use-assistant) — you can incorporate real-time chat capabilities, text completions, streamed JSON, and interactive assistant features into your app.
Let's explore building a chatbot with [Next.js](https://nextjs.org), the AI SDK, and Llama 3.1 (via [Groq](https://groq.com/)):
```tsx filename="app/api/chat/route.ts"
import { streamText } from 'ai';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: groq('llama-3.1-70b-versatile'),
system: 'You are a helpful assistant.',
messages,
});
return result.toDataStreamResponse();
}
```
```tsx filename="app/page.tsx"
'use client';
import { useChat } from 'ai/react';
export default function Page() {
const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input name="prompt" value={input} onChange={handleInputChange} />
        <button type="submit">Submit</button>
      </form>
    </>
  );
}
```
The `useChat` hook on your root page (`app/page.tsx`) will make a request to your AI provider endpoint (`app/api/chat/route.ts`) whenever the user submits a message. The messages are then streamed back in real-time and displayed in the chat UI.
This enables a seamless chat experience where the user can see the AI response as soon as it is available, without having to wait for the entire response to be received.
### Going Beyond Text
The AI SDK's React Server Components (RSC) API enables you to create rich, interactive interfaces that go beyond simple text generation. With the [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui) function, you can dynamically stream React components from the server to the client.
Let's dive into how you can leverage tools with [AI SDK RSC](/docs/ai-sdk-rsc/overview) to build a generative user interface with Next.js (App Router).
First, create a Server Action.
```tsx filename="app/actions.tsx"
'use server';
import { streamUI } from 'ai/rsc';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
import { z } from 'zod';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
export async function streamComponent() {
const result = await streamUI({
model: groq('llama-3.1-70b-versatile'),
prompt: 'Get the weather for San Francisco',
    text: ({ content }) => <div>{content}</div>,
    tools: {
      getWeather: {
        description: 'Get the weather for a location',
        parameters: z.object({ location: z.string() }),
        generate: async function* ({ location }) {
          yield <div>loading...</div>;
          const weather = '25c'; // static data in this example
          return (
            <div>
              The weather for {location} is {weather}.
            </div>
          );
        },
},
},
});
return result.value;
}
```
In this example, if the model decides to use the `getWeather` tool, it will first yield a `div` while fetching the weather data, then return a weather component with the fetched data (note: static data in this example). This allows for a more dynamic and responsive UI that can adapt based on the AI's decisions and external data.
On the frontend, you can call this Server Action like any other asynchronous function in your application. In this case, the function returns a regular React component.
```tsx filename="app/page.tsx"
'use client';
import { useState } from 'react';
import { streamComponent } from './actions';
export default function Page() {
  const [component, setComponent] = useState<React.ReactNode>();

  return (
    <div>
      <form
        onSubmit={async e => {
          e.preventDefault();
          setComponent(await streamComponent());
        }}
      >
        <button>Stream Component</button>
      </form>
      <div>{component}</div>
    </div>
  );
}
```
To see AI SDK RSC in action, check out our open-source [Next.js Gemini Chatbot](https://gemini.vercel.ai/).
## Migrate from OpenAI
One of the key advantages of the AI SDK is its unified API, which makes it incredibly easy to switch between different AI models and providers. This flexibility is particularly useful when you want to migrate from one model to another, such as moving from OpenAI's GPT models to Meta's Llama models hosted on Groq.
Here's how simple the migration process can be:
**OpenAI Example:**
```tsx
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
const { text } = await generateText({
model: openai('gpt-4-turbo'),
prompt: 'What is love?',
});
```
**Llama on Groq Example:**
```tsx
import { generateText } from 'ai';
import { createOpenAI as createGroq } from '@ai-sdk/openai';
const groq = createGroq({
baseURL: 'https://api.groq.com/openai/v1',
apiKey: process.env.GROQ_API_KEY,
});
const { text } = await generateText({
model: groq('llama-3.1-70b-versatile'),
prompt: 'What is love?',
});
```
Thanks to the unified API, the core structure of the code remains the same. The main differences are:
1. Creating a Groq client
2. Changing the model name from `openai("gpt-4-turbo")` to `groq("llama-3.1-70b-versatile")`.
With just these few changes, you've migrated from using OpenAI's GPT-4-Turbo to Meta's Llama 3.1 hosted on Groq. The `generateText` function and its usage remain identical, showcasing the power of the AI SDK's unified API.
This feature allows you to easily experiment with different models, compare their performance, and choose the best one for your specific use case without having to rewrite large portions of your codebase.
## Prompt Engineering and Fine-tuning
While the Llama 3.1 family of models is powerful out of the box, performance can be further enhanced through effective prompt engineering and fine-tuning techniques.
### Prompt Engineering
Prompt engineering is the practice of crafting input prompts to elicit desired outputs from language models. It involves structuring and phrasing prompts in ways that guide the model towards producing more accurate, relevant, and coherent responses.
For more information on prompt engineering techniques (specific to Llama models), check out these resources:
- [Official Llama 3.1 Prompt Guide](https://llama.meta.com/docs/how-to-guides/prompting)
- [Prompt Engineering with Llama 3](https://github.com/amitsangani/Llama/blob/main/Llama_3_Prompt_Engineering.ipynb)
- [How to prompt Llama 3](https://huggingface.co/blog/llama3#how-to-prompt-llama-3)
### Fine-tuning
Fine-tuning involves further training a pre-trained model on a specific dataset or task to customize its performance for particular use cases. This process allows you to adapt Llama 3.1 to your specific domain or application, potentially improving its accuracy and relevance for your needs.
To learn more about fine-tuning Llama models, check out these resources:
- [Official Fine-tuning Llama Guide](https://llama.meta.com/docs/how-to-guides/fine-tuning)
- [Fine-tuning and Inference with Llama 3](https://docs.inferless.com/how-to-guides/how-to-finetune--and-inference-llama3)
- [Fine-tuning Models with Fireworks AI](https://docs.fireworks.ai/fine-tuning/fine-tuning-models)
- [Fine-tuning Llama with Modal](https://modal.com/docs/examples/llm-finetuning)
## Conclusion
The AI SDK offers a powerful and flexible way to integrate cutting-edge AI models like Llama 3.1 into your applications. With AI SDK Core, you can seamlessly switch between different AI models and providers by changing just two lines of code. This flexibility allows for quick experimentation and adaptation, reducing the time required to change models from days to minutes.
The AI SDK ensures that your application remains clean and modular, accelerating development and future-proofing against the rapidly evolving landscape.
Ready to get started? Here's how you can dive in:
1. Explore the documentation at [sdk.vercel.ai/docs](/docs) to understand the full capabilities of the AI SDK.
2. Check out practical examples at [sdk.vercel.ai/examples](/examples) to see the SDK in action and get inspired for your own projects.
3. Dive deeper with advanced guides on topics like Retrieval-Augmented Generation (RAG) and multi-modal chat at [sdk.vercel.ai/docs/guides](/docs/guides).
4. Check out ready-to-deploy AI templates at [vercel.com/templates?type=ai](https://vercel.com/templates?type=ai).
---
title: Get started with OpenAI o1
description: Get started with OpenAI o1 using the AI SDK.
---
# Get started with OpenAI o1
With the [release of OpenAI's o1 series models](https://openai.com/index/introducing-openai-o1-preview/), there has never been a better time to start building AI applications, particularly those that require complex reasoning capabilities.
The [AI SDK](/) is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like OpenAI o1 alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more.
OpenAI o1 models are currently [in beta with limited
features](https://platform.openai.com/docs/guides/reasoning/beta-limitations).
Access is restricted to developers in tier 4 and tier 5, with low rate limits
(20 RPM). OpenAI is working on adding more features, increasing rate limits,
and expanding access to more developers in the coming weeks.
## OpenAI o1
OpenAI released a series of AI models designed to spend more time thinking before responding. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math. These models, named the o1 series, are trained with reinforcement learning and can "think before they answer". As a result, they are able to produce a long internal chain of thought before responding to a prompt.
There are two reasoning models available in the API:
1. [**o1-preview**](https://platform.openai.com/docs/models/o1): An early preview of the o1 model, designed to reason about hard problems using broad general knowledge about the world.
2. [**o1-mini**](https://platform.openai.com/docs/models/o1): A faster and cheaper version of o1, particularly adept at coding, math, and science tasks where extensive general knowledge isn't required.
### Benchmarks
OpenAI o1 models excel in scientific reasoning, with impressive performance across various domains:
- Ranking in the 89th percentile on competitive programming questions (Codeforces)
- Placing among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME)
- Exceeding human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA)
[Source](https://openai.com/index/learning-to-reason-with-llms/)
### Prompt Engineering for o1 Models
The o1 models perform best with straightforward prompts. Some prompt engineering techniques, like few-shot prompting or instructing the model to "think step by step," may not enhance performance and can sometimes hinder it. Here are some best practices:
1. Keep prompts simple and direct: The models excel at understanding and responding to brief, clear instructions without the need for extensive guidance.
2. Avoid chain-of-thought prompts: Since these models perform reasoning internally, prompting them to "think step by step" or "explain your reasoning" is unnecessary.
3. Use delimiters for clarity: Use delimiters like triple quotation marks, XML tags, or section titles to clearly indicate distinct parts of the input, helping the model interpret different sections appropriately.
4. Limit additional context in retrieval-augmented generation (RAG): When providing additional context or documents, include only the most relevant information to prevent the model from overcomplicating its response (see the sketch after this list).
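Here is a minimal sketch of points 3 and 4: a single retrieved document wrapped in XML-style delimiters (the variable names and document content are illustrative):
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Hypothetical retrieved context - keep it short and relevant
const retrievedDocument = '...';

const { text } = await generateText({
  model: openai('o1-mini'),
  prompt:
    'Answer the question using only the context below.\n\n' +
    `<context>\n${retrievedDocument}\n</context>\n\n` +
    '<question>What does the refund policy say about digital purchases?</question>',
});
```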
## Getting Started with the AI SDK
The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Integrating LLMs into applications is complicated and heavily dependent on the specific model provider you use.
The AI SDK abstracts away the differences between model providers, eliminates boilerplate code for building chatbots, and allows you to go beyond text output to generate rich, interactive components.
At the center of the AI SDK is [AI SDK Core](/docs/ai-sdk-core/overview), which provides a unified API to call any LLM. The code snippet below is all you need to call OpenAI o1-mini with the AI SDK:
```tsx
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
const { text } = await generateText({
model: openai('o1-mini'),
prompt: 'Explain the concept of quantum entanglement.',
});
```
To use the o1 series of models, you must either be using @ai-sdk/openai
version 0.0.59 or greater, or set `temperature: 1`.
AI SDK Core abstracts away the differences between model providers, allowing you to focus on building great applications. The unified interface also means that you can easily switch between models by changing just one line of code.
```tsx highlight="5"
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
const { text } = await generateText({
model: openai('o1-preview'),
prompt: 'Explain the concept of quantum entanglement.',
});
```
During the beta phase, access to most chat completions parameters is not
supported for o1 models. Features like function calling and image inputs are
currently unavailable, and streaming is simulated.
### Building Interactive Interfaces
AI SDK Core can be paired with [AI SDK UI](/docs/ai-sdk-ui/overview), another powerful component of the AI SDK, to streamline the process of building chat, completion, and assistant interfaces with popular frameworks like Next.js, Nuxt, SvelteKit, and SolidStart.
AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently.
With four main hooks — [`useChat`](/docs/reference/ai-sdk-ui/use-chat), [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion), [`useObject`](/docs/reference/ai-sdk-ui/use-object), and [`useAssistant`](/docs/reference/ai-sdk-ui/use-assistant) — you can incorporate real-time chat capabilities, text completions, streamed JSON, and interactive assistant features into your app.
Let's explore building a chatbot with [Next.js](https://nextjs.org), the AI SDK, and OpenAI o1:
```tsx filename="app/api/chat/route.ts"
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Allow responses up to 5 minutes
export const maxDuration = 300;
export async function POST(req: Request) {
const { messages } = await req.json();
const { text } = await generateText({
model: openai('o1-preview'),
messages,
});
return new Response(text);
}
```
```tsx filename="app/page.tsx"
'use client';
import { useChat } from 'ai/react';
export default function Page() {
  const { messages, input, handleInputChange, handleSubmit, error } = useChat({
    streamProtocol: 'text',
  });
  return (
    <>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}
      {error && <div>{error.message}</div>}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </>
  );
}
```
The useChat hook on your root page (`app/page.tsx`) will make a request to your AI provider endpoint (`app/api/chat/route.ts`) whenever the user submits a message. The messages are then displayed in the chat UI.
Due to the current limitations of o1 models during the beta phase, real-time
streaming is not supported. The response will be sent once the model completes
its reasoning and generates the full output.
## Get Started
Ready to get started? Here's how you can dive in:
1. Explore the documentation at [sdk.vercel.ai/docs](/docs) to understand the full capabilities of the AI SDK.
1. Check out our support for the o1 series of reasoning models in the [OpenAI Provider](/providers/ai-sdk-providers/openai#reasoning-models).
1. Check out practical examples at [sdk.vercel.ai/examples](/examples) to see the SDK in action and get inspired for your own projects.
1. Dive deeper with advanced guides on topics like Retrieval-Augmented Generation (RAG) and multi-modal chat at [sdk.vercel.ai/docs/guides](/docs/guides).
1. Check out ready-to-deploy AI templates at [vercel.com/templates?type=ai](https://vercel.com/templates?type=ai).
Remember that OpenAI o1 models are currently in beta with limited features and access. Stay tuned for updates as OpenAI expands access and adds more features to these powerful reasoning models.
---
title: Get started with Computer Use
description: Get started with Claude's Computer Use capabilities with the AI SDK
---
# Get started with Computer Use
With the [release of Computer Use in Claude 3.5 Sonnet](https://www.anthropic.com/news/3-5-models-and-computer-use), you can now direct AI models to interact with computers like humans do - moving cursors, clicking buttons, and typing text. This capability enables automation of complex tasks while leveraging Claude's advanced reasoning abilities.
The AI SDK is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like Anthropic's Claude alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more. In this guide, you will learn how to integrate Computer Use into your AI SDK applications.
Computer Use is currently in beta with some [limitations](https://docs.anthropic.com/en/docs/build-with-claude/computer-use#understand-computer-use-limitations).
The feature may be error-prone at times. Anthropic recommends starting with
low-risk tasks and implementing appropriate safety measures.
## Computer Use
Anthropic recently released a new version of the Claude 3.5 Sonnet model which is capable of 'Computer Use'. This allows the model to interact with computer interfaces through basic actions like:
- Moving the cursor
- Clicking buttons
- Typing text
- Taking screenshots
- Reading screen content
## How It Works
Computer Use enables the model to read and interact with on-screen content through a series of coordinated steps. Here's how the process works:
1. **Start with a prompt and tools**
Add Anthropic-defined Computer Use tools to your request and provide a task (prompt) for the model. For example: "save an image to your downloads folder."
2. **Select the right tool**
The model evaluates which computer tools can help accomplish the task. It then sends a formatted `tool_call` to use the appropriate tool.
3. **Execute the action and return results**
The AI SDK processes Claude's request by running the selected tool. The results can then be sent back to Claude through a `tool_result` message.
4. **Complete the task through iterations**
Claude analyzes each result to determine if more actions are needed. It continues requesting tool use and processing results until it completes your task or requires additional input.
### Available Tools
There are three main tools available in the Computer Use API:
1. **Computer Tool**: Enables basic computer control like mouse movement, clicking, and keyboard input
2. **Text Editor Tool**: Provides functionality for viewing and editing text files
3. **Bash Tool**: Allows execution of bash commands
### Implementation Considerations
Computer Use tools in the AI SDK are predefined interfaces that require your own implementation of the execution layer. While the SDK provides the type definitions and structure for these tools, you need to:
1. Set up a controlled environment for Computer Use execution
2. Implement core functionality like mouse control and keyboard input
3. Handle screenshot capture and processing
4. Set up rules and limits for how Claude can interact with your system
The recommended approach is to start with [Anthropic's reference implementation](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo), which provides:
- A containerized environment configured for safe Computer Use
- Ready-to-use (Python) implementations of Computer Use tools
- An agent loop for API interaction and tool execution
- A web interface for monitoring and control
This reference implementation serves as a foundation to understand the requirements before building your own custom solution.
## Getting Started with the AI SDK
If you have never used the AI SDK before, start by following the [Getting
Started guide](/docs/getting-started).
First, ensure you have the AI SDK and [Anthropic AI SDK provider](/providers/ai-sdk-providers/anthropic) installed:
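If they are not installed yet, you can add both packages with your package manager of choice, for example:
```bash
pnpm add ai @ai-sdk/anthropic
```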
You can add Computer Use to your AI SDK applications using provider-defined tools. These tools accept various input parameters (like display height and width in the case of the computer tool) and then require that you define an execute function.
Here's how you could set up the Computer Tool with the AI SDK:
```ts
import { anthropic } from '@ai-sdk/anthropic';
import { getScreenshot, executeComputerAction } from '@/utils/computer-use';
const computerTool = anthropic.tools.computer_20241022({
displayWidthPx: 1920,
displayHeightPx: 1080,
execute: async ({ action, coordinate, text }) => {
switch (action) {
case 'screenshot': {
return {
type: 'image',
data: getScreenshot(),
};
}
default: {
return executeComputerAction(action, coordinate, text);
}
}
},
experimental_toToolResultContent(result) {
return typeof result === 'string'
? [{ type: 'text', text: result }]
: [{ type: 'image', data: result.data, mimeType: 'image/png' }];
},
});
```
The `computerTool` handles two main actions: taking screenshots via `getScreenshot()` and executing computer actions like mouse movements and clicks through `executeComputerAction()`. Remember, you have to implement this execution logic (e.g. the `getScreenshot` and `executeComputerAction` functions) to handle the actual computer interactions. The `execute` function should handle all low-level interactions with the operating system.
Finally, to send tool results back to the model, use the [`experimental_toToolResultContent()`](/docs/foundations/prompts#multi-modal-tool-results) function to convert text and image responses into a format the model can process. The AI SDK includes experimental support for these multi-modal tool results when using Anthropic's models.
Computer Use requires appropriate safety measures like using virtual machines,
limiting access to sensitive data, and implementing human oversight for
critical actions.
### Using Computer Tools with Text Generation
Once your tool is defined, you can use it with both the [`generateText`](/docs/reference/ai-sdk-core/generate-text) and [`streamText`](/docs/reference/ai-sdk-core/stream-text) functions.
For one-shot text generation, use `generateText`:
```ts
const result = await generateText({
model: anthropic('claude-3-5-sonnet-20241022'),
prompt: 'Move the cursor to the center of the screen and take a screenshot',
tools: { computer: computerTool },
});
console.log(result.text);
```
For streaming responses, use `streamText` to receive updates in real-time:
```ts
const result = streamText({
model: anthropic('claude-3-5-sonnet-20241022'),
prompt: 'Open the browser and navigate to vercel.com',
tools: { computer: computerTool },
});
for await (const chunk of result.textStream) {
console.log(chunk);
}
```
### Configure Multi-Step (Agentic) Generations
To allow the model to perform multiple steps without user intervention, specify a `maxSteps` value. This will automatically send any tool results back to the model to trigger a subsequent generation:
```ts highlight="5"
const stream = streamText({
model: anthropic('claude-3-5-sonnet-20241022'),
prompt: 'Open the browser and navigate to vercel.com',
tools: { computer: computerTool },
maxSteps: 10, // experiment with this value based on your use case
});
```
### Combine Multiple Tools
You can combine multiple tools in a single request to enable more complex workflows. The AI SDK supports all three of Claude's Computer Use tools:
```ts
import { execSync } from 'node:child_process';

const computerTool = anthropic.tools.computer_20241022({
...
});
const bashTool = anthropic.tools.bash_20241022({
execute: async ({ command, restart }) => execSync(command).toString()
});
const textEditorTool = anthropic.tools.textEditor_20241022({
  execute: async ({
    command,
    path,
    file_text,
    insert_line,
    new_str,
    old_str,
    view_range,
  }) => {
    // Handle file operations based on command by delegating to your own implementation
    return executeTextEditorFunction({
      command,
      path,
      fileText: file_text,
      insertLine: insert_line,
      newStr: new_str,
      oldStr: old_str,
      viewRange: view_range,
    });
  },
});
const response = await generateText({
model: anthropic("claude-3-5-sonnet-20241022"),
prompt: "Create a new file called example.txt, write 'Hello World' to it, and run 'cat example.txt' in the terminal",
tools: {
computer: computerTool,
textEditor: textEditorTool,
bash: bashTool
},
});
```
Always implement appropriate [security measures](#security-measures) and
obtain user consent before enabling Computer Use in production applications.
### Best Practices for Computer Use
To get the best results when using Computer Use:
1. Specify simple, well-defined tasks with explicit instructions for each step
2. Prompt Claude to verify outcomes through screenshots
3. Use keyboard shortcuts when UI elements are difficult to manipulate
4. Include example screenshots for repeatable tasks
5. Provide explicit tips in system prompts for known tasks (see the sketch after this list)
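Here is a sketch that combines several of these practices, reusing the `computerTool` and imports from the earlier snippets (the system prompt content is illustrative):
```ts
const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  // Explicit tips and a verification step, per the best practices above
  system:
    'You are operating a desktop browser. ' +
    'After every action, take a screenshot and confirm the expected change before continuing. ' +
    'Prefer keyboard shortcuts when UI elements are hard to click.',
  prompt: 'Open vercel.com and take a screenshot of the homepage.',
  tools: { computer: computerTool },
  maxSteps: 10,
});
```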
## Security Measures
Remember, Computer Use is a beta feature. Please be aware that it poses unique risks that are distinct from standard API features or chat interfaces. These risks are heightened when using Computer Use to interact with the internet. To minimize risks, consider taking precautions such as:
1. Use a dedicated virtual machine or container with minimal privileges to prevent direct system attacks or accidents.
2. Avoid giving the model access to sensitive data, such as account login information, to prevent information theft.
3. Limit internet access to an allowlist of domains to reduce exposure to malicious content.
4. Ask a human to confirm decisions that may result in meaningful real-world consequences as well as any tasks requiring affirmative consent, such as accepting cookies, executing financial transactions, or agreeing to terms of service.
---
title: Natural Language Postgres
description: Learn how to build a Next.js app that lets you talk to a PostgreSQL database in natural language.
---
# Natural Language Postgres Guide
In this guide, you will learn how to build an app that uses AI to interact with a PostgreSQL database using natural language.
The application will:
- Generate SQL queries from a natural language input
- Explain query components in plain English
- Create a chart to visualize query results
You can find a completed version of this project at [natural-language-postgres.vercel.app](https://natural-language-postgres.vercel.app).
## Project setup
This project uses the following stack:
- [Next.js](https://nextjs.org) (App Router)
- [AI SDK](https://sdk.vercel.ai/docs)
- [OpenAI](https://openai.com)
- [Zod](https://zod.dev)
- [Postgres](https://www.postgresql.org/) with [Vercel Postgres](https://vercel.com/postgres)
- [shadcn-ui](https://ui.shadcn.com) and [TailwindCSS](https://tailwindcss.com) for styling
- [Recharts](https://recharts.org) for data visualization
### Clone repo
To focus on the AI-powered functionality rather than project setup and configuration, we've prepared a starter repository which includes a database schema and a few components.
Clone the starter repository and check out the `starter` branch:
### Project setup and data
Let's set up the project and seed the database with the dataset:
1. Install dependencies:
2. Copy the example environment variables file:
3. Add your environment variables to `.env`:
```bash filename=".env"
OPENAI_API_KEY="your_api_key_here"
POSTGRES_URL="..."
POSTGRES_PRISMA_URL="..."
POSTGRES_URL_NO_SSL="..."
POSTGRES_URL_NON_POOLING="..."
POSTGRES_USER="..."
POSTGRES_HOST="..."
POSTGRES_PASSWORD="..."
POSTGRES_DATABASE="..."
```
This project uses Vercel Postgres. You can learn more about how to set it up in
the [Vercel Postgres documentation](https://vercel.com/postgres).
4. This project uses CB Insights' Unicorn Companies dataset. You can download the dataset by following these instructions:
- Navigate to [CB Insights Unicorn Companies](https://www.cbinsights.com/research-unicorn-companies)
- Enter your email. You will receive a link to download the dataset.
- Save it as `unicorns.csv` in your project root
### About the dataset
The Unicorn List dataset contains the following information about unicorn startups (companies with a valuation above $1bn):
- Company name
- Valuation
- Date joined (unicorn status)
- Country
- City
- Industry
- Select investors
This dataset contains over 1000 rows of data across 7 columns, giving us plenty of structured data to analyze. This makes it perfect for exploring various SQL queries that can reveal interesting insights about the unicorn startup ecosystem.
5. Now that you have the dataset downloaded and added to your project, you can initialize the database with the following command:
Note: this step can take a little while. You should see a message indicating the Unicorns table has been created and then that the database has been seeded successfully.
Remember, the dataset should be named `unicorns.csv` and located in the root of
your project.
6. Start the development server:
Your application should now be running at [http://localhost:3000](http://localhost:3000).
## Project structure
The starter repository already includes everything that you will need, including:
- Database seed script (`lib/seed.ts`)
- Basic components built with shadcn/ui (`components/`)
- Function to run SQL queries (`app/actions.ts`)
- Type definitions for the database schema (`lib/types.ts`)
### Existing components
The application contains a single page in `app/page.tsx` that serves as the main interface.
At the top, you'll find a header (`header.tsx`) displaying the application title and description. Below that is an input field and search button (`search.tsx`) where you can enter natural language queries.
Initially, the page shows a collection of suggested example queries (`suggested-queries.tsx`) that you can click to quickly try out the functionality.
When you submit a query:
- The suggested queries section disappears and a loading state appears
- Once complete, a card appears with "TODO - IMPLEMENT ABOVE" (`query-viewer.tsx`) which will eventually show your generated SQL
- Below that is an empty results area with "No results found" (`results.tsx`)
After you implement the core functionality:
- The results section will display data in a table format
- A toggle button will allow switching between table and chart views
- The chart view will visualize your query results
Let's implement the AI-powered functionality to bring it all together.
## Building the application
As a reminder, this application will have three main features:
1. Generate SQL queries from natural language
2. Create a chart from the query results
3. Explain SQL queries in plain English
For each of these features, you'll use the AI SDK via [Server Actions](https://react.dev/reference/rsc/server-actions) to interact with OpenAI's GPT-4o and GPT-4o-mini models. Server Actions are a powerful React Server Component feature that allows you to call server-side functions directly from your frontend code.
Let's start with generating a SQL query from natural language.
## Generate SQL queries
### Providing context
For the model to generate accurate SQL queries, it needs context about your database schema, tables, and relationships. You will communicate this information through a prompt that should include:
1. Schema information
2. Example data formats
3. Available SQL operations
4. Best practices for query structure
5. Nuanced advice for specific fields
Let's write a prompt that includes all of this information:
```txt
You are a SQL (postgres) and data visualization expert. Your job is to help the user write a SQL query to retrieve the data they need. The table schema is as follows:
unicorns (
id SERIAL PRIMARY KEY,
company VARCHAR(255) NOT NULL UNIQUE,
valuation DECIMAL(10, 2) NOT NULL,
date_joined DATE,
country VARCHAR(255) NOT NULL,
city VARCHAR(255) NOT NULL,
industry VARCHAR(255) NOT NULL,
select_investors TEXT NOT NULL
);
Only retrieval queries are allowed.
For things like industry, company names and other string fields, use the ILIKE operator and convert both the search term and the field to lowercase using LOWER() function. For example: LOWER(industry) ILIKE LOWER('%search_term%').
Note: select_investors is a comma-separated list of investors. Trim whitespace to ensure you're grouping properly. Note, some fields may be null or have only one value.
When answering questions about a specific field, ensure you are selecting the identifying column (e.g. "what is Vercel's valuation" should select both company and valuation).
The industries available are:
- healthcare & life sciences
- consumer & retail
- financial services
- enterprise tech
- insurance
- media & entertainment
- industrials
- health
If the user asks for a category that is not in the list, infer based on the list above.
Note: valuation is in billions of dollars so 10b would be 10.0.
Note: if the user asks for a rate, return it as a decimal. For example, 0.1 would be 10%.
If the user asks for 'over time' data, return by year.
When searching for UK or USA, write out United Kingdom or United States respectively.
EVERY QUERY SHOULD RETURN QUANTITATIVE DATA THAT CAN BE PLOTTED ON A CHART! There should always be at least two columns. If the user asks for a single column, return the column and the count of the column. If the user asks for a rate, return the rate as a decimal. For example, 0.1 would be 10%.
```
There are several important elements of this prompt:
- Schema description helps the model understand exactly what data fields to work with
- Includes rules for handling queries based on common SQL patterns - for example, always using ILIKE for case-insensitive string matching
- Explains how to handle edge cases in the dataset, like dealing with the comma-separated investors field and ensuring whitespace is properly handled
- Instead of having the model guess at industry categories, it provides the exact list that exists in the data, helping avoid mismatches
- The prompt helps standardize data transformations - like knowing to interpret "10b" as "10.0" billion dollars, or that rates should be decimal values
- Clear rules ensure the query output will be chart-friendly by always including at least two columns of data that can be plotted
This prompt structure provides a strong foundation for query generation, but you should experiment and iterate based on your specific needs and the model you're using.
### Create a Server Action
With the prompt done, let's create a Server Action.
Open `app/actions.ts`. You should see one action already defined (`runGeneratedSQLQuery`).
Add a new action. This action should be asynchronous and take in one parameter - the natural language query.
```ts filename="app/actions.ts"
/* ...rest of the file... */
export const generateQuery = async (input: string) => {};
```
In this action, you'll use the `generateObject` function from the AI SDK which allows you to constrain the model's output to a pre-defined schema. This process, sometimes called structured output, ensures the model returns only the SQL query without any additional prefixes, explanations, or formatting that would require manual parsing.
```ts filename="app/actions.ts"
/* ...other imports... */
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
/* ...rest of the file... */
export const generateQuery = async (input: string) => {
'use server';
try {
const result = await generateObject({
model: openai('gpt-4o'),
system: `You are a SQL (postgres) ...`, // SYSTEM PROMPT AS ABOVE - OMITTED FOR BREVITY
prompt: `Generate the query necessary to retrieve the data the user wants: ${input}`,
schema: z.object({
query: z.string(),
}),
});
return result.object.query;
} catch (e) {
console.error(e);
throw new Error('Failed to generate query');
}
};
```
Note, you are constraining the output to a single string field called `query` using `zod`, a TypeScript schema validation library. This will ensure the model only returns the SQL query itself. The resulting generated query will then be returned.
### Update the frontend
With the Server Action in place, you can now update the frontend to call this action when the user submits a natural language query. In the root page (`app/page.tsx`), you should see a `handleSubmit` function that is called when the user submits a query.
Import the `generateQuery` function and call it with the user's input.
```typescript filename="app/page.tsx" highlight="21"
/* ...other imports... */
import { runGeneratedSQLQuery, generateQuery } from './actions';
/* ...rest of the file... */
const handleSubmit = async (suggestion?: string) => {
clearExistingData();
const question = suggestion ?? inputValue;
if (inputValue.length === 0 && !suggestion) return;
if (question.trim()) {
setSubmitted(true);
}
setLoading(true);
setLoadingStep(1);
setActiveQuery('');
try {
const query = await generateQuery(question);
if (query === undefined) {
toast.error('An error occurred. Please try again.');
setLoading(false);
return;
}
setActiveQuery(query);
setLoadingStep(2);
const companies = await runGeneratedSQLQuery(query);
const columns = companies.length > 0 ? Object.keys(companies[0]) : [];
setResults(companies);
setColumns(columns);
setLoading(false);
} catch (e) {
toast.error('An error occurred. Please try again.');
setLoading(false);
}
};
/* ...rest of the file... */
```
Now, when the user submits a natural language query (e.g. "how many unicorns are from San Francisco?"), that question will be sent to your newly created Server Action. The Server Action will call the model, passing in your system prompt and the user's query, and return the generated SQL query in a structured format. This query is then passed to the `runGeneratedSQLQuery` action to run the query against your database. The results are then saved in local state and displayed to the user.
Save the file, make sure the dev server is running, and then head to `localhost:3000` in your browser. Try submitting a natural language query and see the generated SQL query and results. You should see a SQL query generated and displayed under the input field. You should also see the results of the query displayed in a table below the input field.
Try clicking the SQL query to see the full query if it's too long to display in the input field. You should see a button on the right side of the input field with a question mark icon. Clicking this button currently does nothing, but you'll add the "explain query" functionality to it in the next step.
## Explain SQL Queries
Next, let's add the ability to explain SQL queries in plain English. This feature helps users understand how the generated SQL query works by breaking it down into logical sections.
As with the SQL query generation, you'll need a prompt to guide the model when explaining queries.
Let's craft a prompt for the explain query functionality:
```txt
You are a SQL (postgres) expert. Your job is to explain to the user the SQL query you wrote to retrieve the data they asked for. The table schema is as follows:
unicorns (
id SERIAL PRIMARY KEY,
company VARCHAR(255) NOT NULL UNIQUE,
valuation DECIMAL(10, 2) NOT NULL,
date_joined DATE,
country VARCHAR(255) NOT NULL,
city VARCHAR(255) NOT NULL,
industry VARCHAR(255) NOT NULL,
select_investors TEXT NOT NULL
);
When you explain you must take a section of the query, and then explain it. Each "section" should be unique. So in a query like: "SELECT * FROM unicorns limit 20", the sections could be "SELECT *", "FROM UNICORNS", "LIMIT 20".
If a section doesn't have any explanation, include it, but leave the explanation empty.
```
Like the prompt for generating SQL queries, you provide the model with the schema of the database. Additionally, you provide an example of what each section of the query might look like. This helps the model understand the structure of the query and how to break it down into logical sections.
### Create a Server Action
Add a new Server Action to generate explanations for SQL queries.
This action takes two parameters - the original natural language input and the generated SQL query.
```ts filename="app/actions.ts"
/* ...rest of the file... */
export const explainQuery = async (input: string, sqlQuery: string) => {
'use server';
try {
const result = await generateObject({
model: openai('gpt-4o'),
system: `You are a SQL (postgres) expert. ...`, // SYSTEM PROMPT AS ABOVE - OMITTED FOR BREVITY
prompt: `Explain the SQL query you generated to retrieve the data the user wanted. Assume the user is not an expert in SQL. Break down the query into steps. Be concise.
User Query:
${input}
Generated SQL Query:
${sqlQuery}`,
});
return result.object;
} catch (e) {
console.error(e);
throw new Error('Failed to generate query');
}
};
```
This action uses the `generateObject` function again. However, you haven't defined the schema yet. Let's define it in another file so it can also be used as a type in your components.
Update your `lib/types.ts` file to include the schema for the explanations:
```ts filename="lib/types.ts"
import { z } from 'zod';
/* ...rest of the file... */
export const explanationSchema = z.object({
section: z.string(),
explanation: z.string(),
});
export type QueryExplanation = z.infer<typeof explanationSchema>;
```
This schema defines the structure of the explanation that the model will generate. Each explanation will have a `section` and an `explanation`. The `section` is the part of the query being explained, and the `explanation` is the plain English explanation of that section. Go back to your `actions.ts` file and import and use the `explanationSchema`:
```ts filename="app/actions.ts" highlight="2,19,20"
// other imports
import { explanationSchema } from '@/lib/types';
/* ...rest of the file... */
export const explainQuery = async (input: string, sqlQuery: string) => {
'use server';
try {
const result = await generateObject({
model: openai('gpt-4o'),
system: `You are a SQL (postgres) expert. ...`, // SYSTEM PROMPT AS ABOVE - OMITTED FOR BREVITY
prompt: `Explain the SQL query you generated to retrieve the data the user wanted. Assume the user is not an expert in SQL. Break down the query into steps. Be concise.
User Query:
${input}
Generated SQL Query:
${sqlQuery}`,
schema: explanationSchema,
output: 'array',
});
return result.object;
} catch (e) {
console.error(e);
throw new Error('Failed to generate query');
}
};
```
You can use `output: "array"` to indicate to the model that you expect an
array of objects matching the schema to be returned.
### Update query viewer
Next, update the `query-viewer.tsx` component to display these explanations. The `handleExplainQuery` function is called every time the user clicks the question icon button on the right side of the query. Let's update this function to use the new `explainQuery` action:
```ts filename="components/query-viewer.tsx" highlight="2,10,11"
/* ...other imports... */
import { explainQuery } from '@/app/actions';
/* ...rest of the component... */
const handleExplainQuery = async () => {
setQueryExpanded(true);
setLoadingExplanation(true);
const explanations = await explainQuery(inputValue, activeQuery);
setQueryExplanations(explanations);
setLoadingExplanation(false);
};
/* ...rest of the component... */
```
Now when users click the explanation button (the question mark icon), the component will:
1. Show a loading state
2. Send the active SQL query and the user's natural language query to your Server Action
3. The model will generate an array of explanations
4. The explanations will be set in the component state and rendered in the UI
Submit a new query and then click the explanation button. Hover over different elements of the query. You should see the explanations for each section!
## Visualizing query results
Finally, let's render the query results visually in a chart. There are two approaches you could take:
1. Send both the query and data to the model and ask it to return the data in a visualization-ready format. While this provides complete control over the visualization, it requires the model to send back all of the data, which significantly increases latency and costs.
2. Send the query and data to the model and ask it to generate a chart configuration (fixed-size and not many tokens) that maps your data appropriately. This configuration specifies how to visualize the information while delivering the insights from your natural language query. Importantly, this is done without requiring the model to return the full dataset.
Since you don't know the SQL query or data shape beforehand, let's use the second approach to dynamically generate chart configurations based on the query results and user intent.
### Generate the chart configuration
For this feature, you'll create a Server Action that takes the query results and the user's original natural language query to determine the best visualization approach. Your application is already set up to use `shadcn` charts (which uses [`Recharts`](https://recharts.org/en-US/) under the hood) so the model will need to generate:
- Chart type (bar, line, area, or pie)
- Axis mappings
- Visual styling
Let's start by defining the schema for the chart configuration in `lib/types.ts`:
```ts filename="lib/types.ts"
/* ...rest of the file... */
export const configSchema = z
.object({
description: z
.string()
.describe(
'Describe the chart. What is it showing? What is interesting about the way the data is displayed?',
),
takeaway: z.string().describe('What is the main takeaway from the chart?'),
type: z.enum(['bar', 'line', 'area', 'pie']).describe('Type of chart'),
title: z.string(),
xKey: z.string().describe('Key for x-axis or category'),
yKeys: z
.array(z.string())
.describe(
'Key(s) for y-axis values this is typically the quantitative column',
),
multipleLines: z
.boolean()
.describe(
'For line charts only: whether the chart is comparing groups of data.',
)
.optional(),
measurementColumn: z
.string()
.describe(
'For line charts only: key for quantitative y-axis column to measure against (eg. values, counts etc.)',
)
.optional(),
lineCategories: z
.array(z.string())
.describe(
'For line charts only: Categories used to compare different lines or data series. Each category represents a distinct line in the chart.',
)
.optional(),
colors: z
.record(
z.string().describe('Any of the yKeys'),
z.string().describe('Color value in CSS format (e.g., hex, rgb, hsl)'),
)
.describe('Mapping of data keys to color values for chart elements')
.optional(),
legend: z.boolean().describe('Whether to show legend'),
})
.describe('Chart configuration object');
export type Config = z.infer<typeof configSchema>;
```
Replace the existing `export type Config = any;` type with the new one.
This schema makes extensive use of Zod's `.describe()` function to give the model extra context about each of the keys you are expecting in the chart configuration. This will help the model understand the purpose of each key and generate more accurate results.
Another important technique to note here is that you are defining `description` and `takeaway` fields. Not only are these useful for the user to quickly understand what the chart means and what they should take away from it, but they also force the model to generate a description of the data first, before it attempts to generate configuration attributes like axis and columns. This will help the model generate more accurate and relevant chart configurations.
### Create the Server Action
Create a new action in `app/actions.ts`:
```ts
/* ...other imports... */
import { Config, configSchema, explanationsSchema, Result } from '@/lib/types';
/* ...rest of the file... */
export const generateChartConfig = async (
results: Result[],
userQuery: string,
) => {
'use server';
try {
const { object: config } = await generateObject({
model: openai('gpt-4o'),
system: 'You are a data visualization expert.',
prompt: `Given the following data from a SQL query result, generate the chart config that best visualises the data and answers the users query.
For multiple groups use multi-lines.
Here is an example complete config:
export const chartConfig = {
type: "pie",
xKey: "month",
yKeys: ["sales", "profit", "expenses"],
colors: {
sales: "#4CAF50", // Green for sales
profit: "#2196F3", // Blue for profit
expenses: "#F44336" // Red for expenses
},
legend: true
}
User Query:
${userQuery}
Data:
${JSON.stringify(results, null, 2)}`,
schema: configSchema,
});
// Override with shadcn theme colors
const colors: Record<string, string> = {};
config.yKeys.forEach((key, index) => {
colors[key] = `hsl(var(--chart-${index + 1}))`;
});
const updatedConfig = { ...config, colors };
return { config: updatedConfig };
} catch (e) {
console.error(e);
throw new Error('Failed to generate chart suggestion');
}
};
```
### Update the chart component
With the action in place, you'll want to trigger it automatically after receiving query results. This ensures the visualization appears almost immediately after data loads.
Update the `handleSubmit` function in your root page (`app/page.tsx`) to generate and set the chart configuration after running the query:
```typescript filename="app/page.tsx" highlight="38,39"
/* ...other imports... */
import { runGeneratedSQLQuery, generateQuery, generateChartConfig } from './actions';
/* ...rest of the file... */
const handleSubmit = async (suggestion?: string) => {
clearExistingData();
const question = suggestion ?? inputValue;
if (inputValue.length === 0 && !suggestion) return;
if (question.trim()) {
setSubmitted(true);
}
setLoading(true);
setLoadingStep(1);
setActiveQuery('');
try {
const query = await generateQuery(question);
if (query === undefined) {
toast.error('An error occurred. Please try again.');
setLoading(false);
return;
}
setActiveQuery(query);
setLoadingStep(2);
const companies = await runGeneratedSQLQuery(query);
const columns = companies.length > 0 ? Object.keys(companies[0]) : [];
setResults(companies);
setColumns(columns);
setLoading(false);
const { config } = await generateChartConfig(companies, question);
setChartConfig(config);
} catch (e) {
toast.error('An error occurred. Please try again.');
setLoading(false);
}
};
/* ...rest of the file... */
```
Now when users submit queries, the application will:
1. Generate and run the SQL query
2. Display the table results
3. Generate a chart configuration for the results
4. Allow toggling between table and chart views
Head back to the browser and test the application with a few queries. You should see the chart visualization appear after the table results.
## Next steps
You've built an AI-powered SQL analysis tool that can convert natural language to SQL queries, visualize query results, and explain SQL queries in plain English.
You could, for example, extend the application to use your own data sources or add more advanced features like customizing the chart configuration schema to support more chart types and options. You could also add more complex SQL query generation capabilities.
---
title: Guides
description: Learn how to build AI applications with the AI SDK
---
# Guides
These use-case specific guides are intended to help you build real applications with the AI SDK.
---
title: Overview
description: An overview of AI SDK Core.
---
# AI SDK Core
Large Language Models (LLMs) are advanced programs that can understand, create, and engage with human language on a large scale.
They are trained on vast amounts of written material to recognize patterns in language and predict what might come next in a given piece of text.
AI SDK Core **simplifies working with LLMs by offering a standardized way of integrating them into your app** - so you can focus on building great AI applications for your users, not waste time on technical details.
For example, here’s how you can generate text with various models using the AI SDK:
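A minimal sketch using the OpenAI provider (swap the `model` line to use any other supported provider):
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is love?',
});
```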
## AI SDK Core Functions
AI SDK Core has various functions designed for [text generation](./generating-text), [structured data generation](./generating-structured-data), and [tool usage](./tools-and-tool-calling).
These functions take a standardized approach to setting up [prompts](./prompts) and [settings](./settings), making it easier to work with different models.
- [`generateText`](/docs/ai-sdk-core/generating-text): Generates text and [tool calls](./tools-and-tool-calling).
This function is ideal for non-interactive use cases such as automation tasks where you need to write text (e.g. drafting email or summarizing web pages) and for agents that use tools.
- [`streamText`](/docs/ai-sdk-core/generating-text): Stream text and tool calls.
You can use the `streamText` function for interactive use cases such as [chat bots](/docs/ai-sdk-ui/chatbot) and [content streaming](/docs/ai-sdk-ui/completion).
- [`generateObject`](/docs/ai-sdk-core/generating-structured-data): Generates a typed, structured object that matches a [Zod](https://zod.dev/) schema.
You can use this function to force the language model to return structured data, e.g. for information extraction, synthetic data generation, or classification tasks.
- [`streamObject`](/docs/ai-sdk-core/generating-structured-data): Stream a structured object that matches a Zod schema.
You can use this function to [stream generated UIs](/docs/ai-sdk-ui/object-generation).
## API Reference
Please check out the [AI SDK Core API Reference](/docs/reference/ai-sdk-core) for more details on each function.
---
title: Generating Text
description: Learn how to generate text with the AI SDK.
---
# Generating and Streaming Text
Large language models (LLMs) can generate text in response to a prompt, which can contain instructions and information to process.
For example, you can ask a model to come up with a recipe, draft an email, or summarize a document.
The AI SDK Core provides two functions to generate text and stream it from LLMs:
- [`generateText`](#generatetext): Generates text for a given prompt and model.
- [`streamText`](#streamtext): Streams text from a given prompt and model.
Advanced LLM features such as [tool calling](./tools-and-tool-calling) and [structured data generation](./generating-structured-data) are built on top of text generation.
## `generateText`
You can generate text using the [`generateText`](/docs/reference/ai-sdk-core/generate-text) function. This function is ideal for non-interactive use cases where you need to write text (e.g. drafting email or summarizing web pages) and for agents that use tools.
```tsx
import { generateText } from 'ai';
const { text } = await generateText({
model: yourModel,
prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
You can use more [advanced prompts](./prompts) to generate text with more complex instructions and content:
```tsx
import { generateText } from 'ai';
const { text } = await generateText({
model: yourModel,
system:
'You are a professional writer. ' +
'You write simple, clear, and concise content.',
prompt: `Summarize the following article in 3-5 sentences: ${article}`,
});
```
The result object of `generateText` contains several promises that resolve when all required data is available (see the example after this list):
- `result.text`: The generated text.
- `result.finishReason`: The reason the model finished generating text.
- `result.usage`: The usage of the model during text generation.
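For example, a short sketch that logs these fields after the call resolves:
```ts
import { generateText } from 'ai';

const result = await generateText({
  model: yourModel,
  prompt: 'Write a short product description for a reusable water bottle.',
});

console.log(result.text); // the generated text
console.log(result.finishReason); // e.g. 'stop' when the model finished naturally
console.log(result.usage); // prompt, completion, and total token counts
```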
## `streamText`
Depending on your model and prompt, it can take a large language model (LLM) up to a minute to finish generating its response. This delay can be unacceptable for interactive use cases such as chatbots or real-time applications, where users expect immediate responses.
AI SDK Core provides the [`streamText`](/docs/reference/ai-sdk-core/stream-text) function which simplifies streaming text from LLMs:
```ts
import { streamText } from 'ai';
const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
});
// example: use textStream as an async iterable
for await (const textPart of result.textStream) {
console.log(textPart);
}
```
`result.textStream` is both a `ReadableStream` and an `AsyncIterable`.
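Because it is also a `ReadableStream`, you could alternatively consume the same stream with a reader:
```ts
// example: use textStream as a ReadableStream
const reader = result.textStream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(value);
}
```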
You can use `streamText` on its own or in combination with [AI SDK
UI](/examples/next-pages/basics/streaming-text-generation) and [AI SDK
RSC](/examples/next-app/basics/streaming-text-generation).
The result object contains several helper functions to make the integration into [AI SDK UI](/docs/ai-sdk-ui) easier (see the route handler sketch after this list):
- `result.toDataStreamResponse()`: Creates a data stream HTTP response (with tool calls etc.) that can be used in a Next.js App Router API route.
- `result.pipeDataStreamToResponse()`: Writes data stream delta output to a Node.js response-like object.
- `result.toTextStreamResponse()`: Creates a simple text stream HTTP response.
- `result.pipeTextStreamToResponse()`: Writes text delta output to a Node.js response-like object.
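For example, a minimal Next.js App Router route handler that returns a data stream response (the route path is illustrative):
```ts filename="app/api/chat/route.ts"
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  // Stream the response (including tool calls) back to the client
  return result.toDataStreamResponse();
}
```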
`streamText` uses backpressure and only generates tokens as they are
requested. You need to consume the stream in order for it to finish.
It also provides several promises that resolve when the stream is finished:
- `result.text`: The generated text.
- `result.finishReason`: The reason the model finished generating text.
- `result.usage`: The usage of the model during text generation.
### `onChunk` callback
When using `streamText`, you can provide an `onChunk` callback that is triggered for each chunk of the stream.
It receives the following chunk types:
- `text-delta`
- `tool-call`
- `tool-result`
- `tool-call-streaming-start` (when `experimental_streamToolCalls` is enabled)
- `tool-call-delta` (when `experimental_streamToolCalls` is enabled)
```tsx highlight="6-11"
import { streamText } from 'ai';
const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
onChunk({ chunk }) {
// implement your own logic here, e.g.:
if (chunk.type === 'text-delta') {
console.log(chunk.textDelta);
}
},
});
```
### `onFinish` callback
When using `streamText`, you can provide an `onFinish` callback that is triggered when the stream is finished (
[API Reference](/docs/reference/ai-sdk-core/stream-text#on-finish)
).
It contains the text, usage information, finish reason, messages, and more:
```tsx highlight="6-8"
import { streamText } from 'ai';
const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
onFinish({ text, finishReason, usage, response }) {
// your own logic, e.g. for saving the chat history or recording usage
const messages = response.messages; // messages that were generated
},
});
```
### `fullStream` property
You can read a stream with all events using the `fullStream` property.
This can be useful if you want to implement your own UI or handle the stream in a different way.
Here is an example of how to use the `fullStream` property:
```tsx
import { streamText } from 'ai';
import { z } from 'zod';
const result = streamText({
model: yourModel,
tools: {
cityAttractions: {
parameters: z.object({ city: z.string() }),
execute: async ({ city }) => ({
attractions: ['attraction1', 'attraction2', 'attraction3'],
}),
},
},
prompt: 'What are some San Francisco tourist attractions?',
});
for await (const part of result.fullStream) {
switch (part.type) {
case 'text-delta': {
// handle text delta here
break;
}
case 'tool-call': {
switch (part.toolName) {
case 'cityAttractions': {
// handle tool call here
break;
}
}
break;
}
case 'tool-result': {
switch (part.toolName) {
case 'cityAttractions': {
// handle tool result here
break;
}
}
break;
}
case 'finish': {
// handle finish here
break;
}
case 'error': {
// handle error here
break;
}
}
}
```
## Generating Long Text
Most language models have an output limit that is much shorter than their context window.
This means that you cannot generate long text in one go,
but it is possible to add responses back to the input and continue generating
to create longer text.
`generateText` and `streamText` support such continuations for long text generation using the experimental `continueSteps` setting:
```tsx highlight="5-6,9-10"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
const {
text, // combined text
usage, // combined usage of all steps
} = await generateText({
model: openai('gpt-4o'), // 4096 output tokens
maxSteps: 5, // enable multi-step calls
experimental_continueSteps: true,
prompt:
'Write a book about Roman history, ' +
'from the founding of the city of Rome ' +
'to the fall of the Western Roman Empire. ' +
'Each chapter MUST HAVE at least 1000 words.',
});
```
When `experimental_continueSteps` is enabled, only full words are streamed in
`streamText`, and both `generateText` and `streamText` might drop the trailing
tokens of some calls to prevent whitespace issues.
Some models might not always stop correctly on their own and keep generating
until `maxSteps` is reached. You can hint the model to stop by e.g. using a
system message such as "Stop when sufficient information was provided."
## Examples
You can see `generateText` and `streamText` in action using various frameworks in the following examples:
### `generateText`
### `streamText`
---
title: Generating Structured Data
description: Learn how to generate structured data with the AI SDK.
---
# Generating Structured Data
While text generation can be useful, your use case will likely call for generating structured data.
For example, you might want to extract information from text, classify data, or generate synthetic data.
Many language models are capable of generating structured data, often exposed through features such as "JSON modes" or "tools".
However, you need to manually provide schemas and then validate the generated data as LLMs can produce incorrect or incomplete structured data.
The AI SDK standardizes structured object generation across model providers
with the [`generateObject`](/docs/reference/ai-sdk-core/generate-object)
and [`streamObject`](/docs/reference/ai-sdk-core/stream-object) functions.
You can use both functions with different output strategies, e.g. `array`, `object`, or `no-schema`,
and with different generation modes, e.g. `auto`, `tool`, or `json`.
You can use [Zod schemas](./schemas-and-zod) or [JSON schemas](/docs/reference/ai-sdk-core/json-schema) to specify the shape of the data that you want,
and the AI model will generate data that conforms to that structure.
## Generate Object
The `generateObject` function generates structured data from a prompt.
The schema is also used to validate the generated data, ensuring type safety and correctness.
```ts
import { generateObject } from 'ai';
import { z } from 'zod';
const { object } = await generateObject({
model: yourModel,
schema: z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
steps: z.array(z.string()),
}),
}),
prompt: 'Generate a lasagna recipe.',
});
```
## Stream Object
Given the added complexity of returning structured data, model response time can be unacceptable for your interactive use case.
With the [`streamObject`](/docs/reference/ai-sdk-core/stream-object) function, you can stream the model's response as it is generated.
```ts
import { streamObject } from 'ai';
const { partialObjectStream } = streamObject({
// ...
});
// use partialObjectStream as an async iterable
for await (const partialObject of partialObjectStream) {
console.log(partialObject);
}
```
You can use `streamObject` to stream generated UIs in combination with React Server Components (see [Generative UI](../ai-sdk-rsc)) or the [`useObject`](/docs/reference/ai-sdk-ui/use-object) hook.
## Output Strategy
You can use both functions with different output strategies, e.g. `array`, `object`, or `no-schema`.
### Object
The default output strategy is `object`, which returns the generated data as an object.
You don't need to specify the output strategy if you want to use the default.
### Array
If you want to generate an array of objects, you can set the output strategy to `array`.
When you use the `array` output strategy, the schema specifies the shape of an array element.
With `streamObject`, you can also stream the generated array elements using `elementStream`.
```ts highlight="7,18"
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';
const { elementStream } = streamObject({
model: openai('gpt-4-turbo'),
output: 'array',
schema: z.object({
name: z.string(),
class: z
.string()
.describe('Character class, e.g. warrior, mage, or thief.'),
description: z.string(),
}),
prompt: 'Generate 3 hero descriptions for a fantasy role playing game.',
});
for await (const hero of elementStream) {
console.log(hero);
}
```
### Enum
If you want to generate a specific enum value, e.g. for classification tasks,
you can set the output strategy to `enum`
and provide a list of possible values in the `enum` parameter.
Enum output is only available with `generateObject`.
```ts highlight="5-6"
import { generateObject } from 'ai';
const { object } = await generateObject({
model: yourModel,
output: 'enum',
enum: ['action', 'comedy', 'drama', 'horror', 'sci-fi'],
prompt:
'Classify the genre of this movie plot: ' +
'"A group of astronauts travel through a wormhole in search of a ' +
'new habitable planet for humanity."',
});
```
### No Schema
In some cases, you might not want to use a schema,
for example when the data is a dynamic user request.
You can use the `output` setting to set the output format to `no-schema` in those cases
and omit the schema parameter.
```ts highlight="6"
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
const { object } = await generateObject({
model: openai('gpt-4-turbo'),
output: 'no-schema',
prompt: 'Generate a lasagna recipe.',
});
```
## Generation Mode
While some models (like OpenAI) natively support object generation, others require alternative methods, like modified [tool calling](/docs/ai-sdk-core/tools-and-tool-calling). The `generateObject` function allows you to specify the method it will use to return structured data.
- `auto`: The provider will choose the best mode for the model. This recommended mode is used by default.
- `tool`: A tool with the JSON schema as parameters is provided and the provider is instructed to use it.
- `json`: The response format is set to JSON when supported by the provider, e.g. via json modes or grammar-guided generation. If grammar-guided generation is not supported, the JSON schema and instructions to generate JSON that conforms to the schema are injected into the system prompt.
Please note that not every provider supports all generation modes. Some
providers do not support object generation at all.
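For example, here is a sketch that forces JSON mode via the `mode` setting (assuming the chosen provider supports it; the schema and prompt are illustrative):
```ts
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: yourModel,
  mode: 'json', // use JSON mode instead of the default 'auto'
  schema: z.object({
    city: z.string(),
    country: z.string(),
  }),
  prompt: 'Generate an example city with its country.',
});
```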
## Schema Name and Description
You can optionally specify a name and description for the schema. These are used by some providers for additional LLM guidance, e.g. via tool or schema name.
```ts highlight="6-7"
import { generateObject } from 'ai';
import { z } from 'zod';
const { object } = await generateObject({
model: yourModel,
schemaName: 'Recipe',
schemaDescription: 'A recipe for a dish.',
schema: z.object({
name: z.string(),
ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
steps: z.array(z.string()),
}),
prompt: 'Generate a lasagna recipe.',
});
```
## Error Handling
When you use `generateObject`, errors are thrown when the model fails to generate proper JSON (`JSONParseError`)
or when the generated JSON does not match the schema (`TypeValidationError`).
Both error types contain additional information, e.g. the generated text or the invalid value.
You can use this information to e.g. design a function that safely processes the result object and returns a value even in error cases:
```ts
import { openai } from '@ai-sdk/openai';
import { JSONParseError, TypeValidationError, generateObject } from 'ai';
import { z } from 'zod';
const recipeSchema = z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
steps: z.array(z.string()),
}),
});
type Recipe = z.infer<typeof recipeSchema>;
async function generateRecipe(
food: string,
): Promise<
| { type: 'success'; recipe: Recipe }
| { type: 'parse-error'; text: string }
| { type: 'validation-error'; value: unknown }
| { type: 'unknown-error'; error: unknown }
> {
try {
const result = await generateObject({
model: openai('gpt-4-turbo'),
schema: recipeSchema,
prompt: `Generate a ${food} recipe.`,
});
return { type: 'success', recipe: result.object };
} catch (error) {
if (TypeValidationError.isTypeValidationError(error)) {
return { type: 'validation-error', value: error.value };
} else if (JSONParseError.isJSONParseError(error)) {
return { type: 'parse-error', text: error.text };
} else {
return { type: 'unknown-error', error };
}
}
}
```
## Structured output with `generateText`
Structured output with `generateText` is experimental and may change in the
future.
You can also generate structured data with `generateText` by using the `experimental_output` setting.
This enables you to use structured outputs together with tool calling (for models that support it - currently only OpenAI).
```ts highlight="1,3,4"
const { experimental_output } = await generateText({
// ...
experimental_output: Output.object({
schema: z.object({
name: z.string(),
age: z.number().nullable().describe('Age of the person.'),
contact: z.object({
type: z.literal('email'),
value: z.string(),
}),
occupation: z.object({
type: z.literal('employed'),
company: z.string(),
position: z.string(),
}),
}),
}),
prompt: 'Generate an example person for testing.',
});
```
## More Examples
You can see `generateObject` and `streamObject` in action using various frameworks in the following examples:
### `generateObject`
### `streamObject`
---
title: Tool Calling
description: Learn about tool calling with AI SDK Core.
---
# Tool Calling
As covered under Foundations, [tools](/docs/foundations/tools) are objects that can be called by the model to perform a specific task.
AI SDK Core tools contain three elements:
- **`description`**: An optional description of the tool that can influence when the tool is picked.
- **`parameters`**: A [Zod schema](/docs/foundations/tools#schemas) or a [JSON schema](/docs/reference/ai-sdk-core/json-schema) that defines the parameters. The schema is consumed by the LLM, and also used to validate the LLM tool calls.
- **`execute`**: An optional async function that is called with the arguments from the tool call. It produces a value of type `RESULT` (generic type). It is optional because you might want to forward tool calls to the client or to a queue instead of executing them in the same process.
You can use the [`tool`](/docs/reference/ai-sdk-core/tool) helper function to
infer the types of the `execute` parameters.
The `tools` parameter of `generateText` and `streamText` is an object that has the tool names as keys and the tools as values:
```ts highlight="6-17"
import { z } from 'zod';
import { generateText, tool } from 'ai';
const result = await generateText({
model: yourModel,
tools: {
weather: tool({
description: 'Get the weather in a location',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: 72 + Math.floor(Math.random() * 21) - 10,
}),
}),
},
prompt: 'What is the weather in San Francisco?',
});
```
When a model uses a tool, it is called a "tool call" and the output of the
tool is called a "tool result".
Tool calling is not restricted to only text generation.
You can also use it to render user interfaces (Generative UI).
## Multi-Step Calls
Large language models need to know the tool results before they can continue to generate text.
This requires sending the tool results back to the model.
You can enable this feature by setting the `maxSteps` setting to a number greater than 1.
When `maxSteps` is set to a number greater than 1, the language model will be called
in a loop when there are tool calls and for every tool call there is a tool result, until there
are no further tool calls or the maximum number of tool steps is reached.
### Example
In the following example, there are two steps:
1. **Step 1**
1. The prompt `'What is the weather in San Francisco?'` is sent to the model.
1. The model generates a tool call.
1. The tool call is executed.
1. **Step 2**
1. The tool result is sent to the model.
1. The model generates a response considering the tool result.
```ts highlight="18"
import { z } from 'zod';
import { generateText, tool } from 'ai';
const { text, steps } = await generateText({
model: yourModel,
tools: {
weather: tool({
description: 'Get the weather in a location',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: 72 + Math.floor(Math.random() * 21) - 10,
}),
}),
},
maxSteps: 5, // allow up to 5 steps
prompt: 'What is the weather in San Francisco?',
});
```
You can use `streamText` in a similar way.
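For example, here is a minimal multi-step sketch with `streamText` (the weather tool mirrors the one above; the values are illustrative):
```ts
import { z } from 'zod';
import { streamText, tool } from 'ai';

const result = streamText({
  model: yourModel,
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperature: 72 }),
    }),
  },
  maxSteps: 5, // tool results are fed back to the model for up to 5 steps
  prompt: 'What is the weather in San Francisco?',
});

// the final answer is streamed after the tool step completes:
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```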
### Steps
To access intermediate tool calls and results, you can use the `steps` property in the result object
or the `streamText` `onFinish` callback.
It contains all the text, tool calls, tool results, and more from each step.
#### Example: Extract tool results from all steps
```ts highlight="3,9-10"
import { generateText } from 'ai';
const { steps } = await generateText({
model: openai('gpt-4-turbo'),
maxSteps: 10,
// ...
});
// extract all tool calls from the steps:
const allToolCalls = steps.flatMap(step => step.toolCalls);
```
### `onStepFinish` callback
When using `generateText` or `streamText`, you can provide an `onStepFinish` callback that
is triggered when a step is finished,
i.e. all text deltas, tool calls, and tool results for the step are available.
When you have multiple steps, the callback is triggered for each step.
```tsx highlight="5-7"
import { generateText } from 'ai';
const result = await generateText({
// ...
onStepFinish({ text, toolCalls, toolResults, finishReason, usage }) {
// your own logic, e.g. for saving the chat history or recording usage
},
});
```
## Response Messages
Adding the generated assistant and tool messages to your conversation history is a common task,
especially if you are using multi-step tool calls.
Both `generateText` and `streamText` have a `responseMessages` property that you can use to
add the assistant and tool messages to your conversation history.
It is also available in the `onFinish` callback of `streamText`.
The `responseMessages` property contains an array of `CoreMessage` objects that you can add to your conversation history:
```ts
import { CoreMessage, generateText } from 'ai';
const messages: CoreMessage[] = [
// ...
];
const { responseMessages } = await generateText({
// ...
messages,
});
// add the response messages to your conversation history:
messages.push(...responseMessages); // streamText: ...(await responseMessages)
```
## Tool Choice
You can use the `toolChoice` setting to influence when a tool is selected.
It supports the following settings:
- `auto` (default): the model can choose whether and which tools to call.
- `required`: the model must call a tool. It can choose which tool to call.
- `none`: the model must not call tools.
- `{ type: 'tool', toolName: string (typed) }`: the model must call the specified tool.
```ts highlight="18"
import { z } from 'zod';
import { generateText, tool } from 'ai';
const result = await generateText({
model: yourModel,
tools: {
weather: tool({
description: 'Get the weather in a location',
parameters: z.object({
location: z.string().describe('The location to get the weather for'),
}),
execute: async ({ location }) => ({
location,
temperature: 72 + Math.floor(Math.random() * 21) - 10,
}),
}),
},
toolChoice: 'required', // force the model to call a tool
prompt: 'What is the weather in San Francisco?',
});
```
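Similarly, to force a specific tool, you can pass the object form of `toolChoice` (a sketch reusing the weather tool from above):
```ts
import { z } from 'zod';
import { generateText, tool } from 'ai';

const result = await generateText({
  model: yourModel,
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperature: 72 }),
    }),
  },
  // the model must call the `weather` tool:
  toolChoice: { type: 'tool', toolName: 'weather' },
  prompt: 'What is the weather in San Francisco?',
});
```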
## Tool Execution Options
When tools are called, they receive additional options as a second parameter.
### Tool Call ID
The ID of the tool call is forwarded to the tool execution.
You can use it e.g. when sending tool-call related information with stream data.
```ts highlight="14-20"
import { StreamData, streamText, tool } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();
const data = new StreamData();
const result = streamText({
// ...
messages,
tools: {
myTool: tool({
// ...
execute: async (args, { toolCallId }) => {
// return e.g. custom status for tool call
data.appendMessageAnnotation({
type: 'tool-status',
toolCallId,
status: 'in-progress',
});
// ...
},
}),
},
onFinish() {
data.close();
},
});
return result.toDataStreamResponse({ data });
}
```
### Messages
The messages that were sent to the language model to initiate the response that contained the tool call are forwarded to the tool execution.
You can access them in the second parameter of the `execute` function.
In multi-step calls, the messages contain the text, tool calls, and tool results from all previous steps.
```ts highlight="8-9"
import { generateText, tool } from 'ai';
const result = await generateText({
// ...
tools: {
myTool: tool({
// ...
execute: async (args, { messages }) => {
// use the message history in e.g. calls to other language models
return something;
},
}),
},
});
```
### Abort Signals
The abort signals from `generateText` and `streamText` are forwarded to the tool execution.
You can access them in the second parameter of the `execute` function and e.g. abort long-running computations or forward them to fetch calls inside tools.
```ts highlight="6,11,14"
import { z } from 'zod';
import { generateText, tool } from 'ai';
const result = await generateText({
model: yourModel,
abortSignal: myAbortSignal, // signal that will be forwarded to tools
tools: {
weather: tool({
description: 'Get the weather in a location',
parameters: z.object({ location: z.string() }),
execute: async ({ location }, { abortSignal }) => {
return fetch(
`https://api.weatherapi.com/v1/current.json?q=${location}`,
{ signal: abortSignal }, // forward the abort signal to fetch
);
},
}),
},
prompt: 'What is the weather in San Francisco?',
});
```
## Types
Modularizing your code often requires defining types to ensure type safety and reusability.
To enable this, the AI SDK provides several helper types for tools, tool calls, and tool results.
You can use them to strongly type your variables, function parameters, and return types
in parts of the code that are not directly related to `streamText` or `generateText`.
Each tool call is typed with `CoreToolCall<NAME extends string, ARGS>`, depending
on the tool that has been invoked.
Similarly, the tool results are typed with `CoreToolResult<NAME extends string, ARGS, RESULT>`.
The tools in `streamText` and `generateText` are defined as a `Record<string, CoreTool>`.
The type inference helpers `CoreToolCallUnion<TOOLS extends Record<string, CoreTool>>`
and `CoreToolResultUnion<TOOLS extends Record<string, CoreTool>>` can be used to
extract the tool call and tool result types from the tools.
```ts highlight="18-19,23-24"
import { openai } from '@ai-sdk/openai';
import { CoreToolCallUnion, CoreToolResultUnion, generateText, tool } from 'ai';
import { z } from 'zod';
const myToolSet = {
firstTool: tool({
description: 'Greets the user',
parameters: z.object({ name: z.string() }),
execute: async ({ name }) => `Hello, ${name}!`,
}),
secondTool: tool({
description: 'Tells the user their age',
parameters: z.object({ age: z.number() }),
execute: async ({ age }) => `You are ${age} years old!`,
}),
};
type MyToolCall = CoreToolCallUnion<typeof myToolSet>;
type MyToolResult = CoreToolResultUnion<typeof myToolSet>;
async function generateSomething(prompt: string): Promise<{
text: string;
toolCalls: Array<MyToolCall>; // typed tool calls
toolResults: Array<MyToolResult>; // typed tool results
}> {
return generateText({
model: openai('gpt-4o'),
tools: myToolSet,
prompt,
});
}
```
## Handling Errors
The AI SDK has the following tool-call related errors:
- [`NoSuchToolError`](/docs/reference/ai-sdk-errors/ai-no-such-tool-error): the model tries to call a tool that is not defined in the tools object
- [`InvalidToolArgumentsError`](/docs/reference/ai-sdk-errors/ai-invalid-tool-arguments-error): the model calls a tool with arguments that do not match the tool's parameters
- [`ToolExecutionError`](/docs/reference/ai-sdk-errors/ai-tool-execution-error): an error that occurred during tool execution
- [`ToolCallRepairError`](/docs/reference/ai-sdk-errors/ai-tool-call-repair-error): an error that occurred during tool call repair
### `generateText`
`generateText` throws errors and can be handled using a `try`/`catch` block:
```ts
import { generateText, NoSuchToolError, InvalidToolArgumentsError, ToolExecutionError } from 'ai';
try {
const result = await generateText({
//...
});
} catch (error) {
if (NoSuchToolError.isInstance(error)) {
// handle the no such tool error
} else if (InvalidToolArgumentsError.isInstance(error)) {
// handle the invalid tool arguments error
} else if (ToolExecutionError.isInstance(error)) {
// handle the tool execution error
} else {
// handle other errors
}
}
```
### `streamText`
`streamText` sends the errors as part of the full stream. The error parts contain the error object.
When using `toDataStreamResponse`, you can pass a `getErrorMessage` function to extract the error message from the error part and forward it as part of the data stream response:
```ts
const result = streamText({
// ...
});
return result.toDataStreamResponse({
getErrorMessage: error => {
if (NoSuchToolError.isInstance(error)) {
return 'The model tried to call an unknown tool.';
} else if (InvalidToolArgumentsError.isInstance(error)) {
return 'The model called a tool with invalid arguments.';
} else if (ToolExecutionError.isInstance(error)) {
return 'An error occurred during tool execution.';
} else {
return 'An unknown error occurred.';
}
},
});
```
## Tool Call Repair
The tool call repair feature is experimental and may change in the future.
Language models sometimes fail to generate valid tool calls,
especially when the parameters are complex or the model is smaller.
You can use the `experimental_repairToolCall` function to attempt to repair the tool call
with a custom function.
You can use different strategies to repair the tool call:
- Use a model with structured outputs to generate the arguments.
- Send the messages, system prompt, and tool schema to a stronger model to generate the arguments.
- Provide more specific repair instructions based on which tool was called.
```ts
import { openai } from '@ai-sdk/openai';
import { generateObject, generateText, NoSuchToolError, tool } from 'ai';
const result = await generateText({
model,
tools,
prompt,
// example approach: use a model with structured outputs for repair.
// (you can use other strategies as well)
experimental_repairToolCall: async ({
toolCall,
tools,
parameterSchema,
error,
messages,
system,
}) => {
if (NoSuchToolError.isInstance(error)) {
return null; // do not attempt to fix invalid tool names
}
const tool = tools[toolCall.toolName as keyof typeof tools];
const { object: repairedArgs } = await generateObject({
model: openai('gpt-4o', { structuredOutputs: true }),
schema: tool.parameters,
prompt: [
`The model tried to call the tool "${toolCall.toolName}"` +
` with the following arguments:`,
JSON.stringify(toolCall.args),
`The tool accepts the following schema:`,
JSON.stringify(parameterSchema(toolCall)),
'Please fix the arguments.',
].join('\n'),
});
return { ...toolCall, args: JSON.stringify(repairedArgs) };
},
});
```
## Active Tools
The `activeTools` property is experimental and may change in the future.
Language models can only handle a limited number of tools at a time, depending on the model.
To allow static typing over a large set of tools while limiting which tools are available to the model,
the AI SDK provides the `experimental_activeTools` property.
It is an array of tool names that are currently active.
By default, the value is `undefined` and all tools are active.
```ts highlight="7"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
const { text } = await generateText({
model: openai('gpt-4o'),
tools: myToolSet,
experimental_activeTools: ['firstTool'],
});
```
## Multi-modal Tool Results
Multi-modal tool results are experimental and only supported by Anthropic.
In order to send multi-modal tool results, e.g. screenshots, back to the model,
they need to be converted into a specific format.
AI SDK Core tools have an optional `experimental_toToolResultContent` function
that converts the tool result into a content part.
Here is an example for converting a screenshot into a content part:
```ts highlight="22-27"
const result = await generateText({
model: anthropic('claude-3-5-sonnet-20241022'),
tools: {
computer: anthropic.tools.computer_20241022({
// ...
async execute({ action, coordinate, text }) {
switch (action) {
case 'screenshot': {
return {
type: 'image',
data: fs
.readFileSync('./data/screenshot-editor.png')
.toString('base64'),
};
}
default: {
return `executed ${action}`;
}
}
},
// map to tool result content for LLM consumption:
experimental_toToolResultContent(result) {
return typeof result === 'string'
? [{ type: 'text', text: result }]
: [{ type: 'image', data: result.data, mimeType: 'image/png' }];
},
}),
},
// ...
});
```
## Examples
You can see tools in action using various frameworks in the following examples:
---
title: Agents
description: Learn about creating agents with AI SDK Core.
---
# Agents
AI agents let the language model execute a series of steps in a non-deterministic way.
The model can make tool calling decisions based on the context of the conversation, the user's input,
and previous tool calls and results.
One approach to implementing agents is to allow the LLM to choose the next step in a loop.
With `generateText`, you can combine [tools](/docs/ai-sdk-core/tools-and-tool-calling) with `maxSteps`.
This makes it possible to implement agents that reason at each step and make decisions based on the context.
### Example
This example demonstrates how to create an agent that solves math problems.
It has a calculator tool (using [math.js](https://mathjs.org/)) that it can call to evaluate mathematical expressions.
```ts file='main.ts'
import { openai } from '@ai-sdk/openai';
import { generateText, tool } from 'ai';
import * as mathjs from 'mathjs';
import { z } from 'zod';
const { text: answer } = await generateText({
model: openai('gpt-4o-2024-08-06', { structuredOutputs: true }),
tools: {
calculate: tool({
description:
'A tool for evaluating mathematical expressions. ' +
'Example expressions: ' +
"'1.2 * (2 + 4.5)', '12.7 cm to inch', 'sin(45 deg) ^ 2'.",
parameters: z.object({ expression: z.string() }),
execute: async ({ expression }) => mathjs.evaluate(expression),
}),
},
maxSteps: 10,
system:
'You are solving math problems. ' +
'Reason step by step. ' +
'Use the calculator when necessary. ' +
'When you give the final answer, ' +
'provide an explanation for how you arrived at it.',
prompt:
'A taxi driver earns $9461 per 1-hour of work. ' +
'If he works 12 hours a day and in 1 hour ' +
'he uses 12 liters of petrol with a price of $134 for 1 liter. ' +
'How much money does he earn in one day?',
});
console.log(`ANSWER: ${answer}`);
```
## Structured Answers
You can use an **answer tool** and the `toolChoice: 'required'` setting to force
the LLM to answer with a structured output that matches the schema of the answer tool.
The answer tool has no `execute` function, so invoking it will terminate the agent.
Alternatively, you can use the [`experimental_output`](/docs/ai-sdk-core/generating-structured-data#structured-output-with-generatetext) setting for `generateText` to generate structured outputs.
### Example
```ts highlight="6,16-29,31,45"
import { openai } from '@ai-sdk/openai';
import { generateText, tool } from 'ai';
import 'dotenv/config';
import * as mathjs from 'mathjs';
import { z } from 'zod';
const { toolCalls } = await generateText({
model: openai('gpt-4o-2024-08-06', { structuredOutputs: true }),
tools: {
calculate: tool({
description:
'A tool for evaluating mathematical expressions. Example expressions: ' +
"'1.2 * (2 + 4.5)', '12.7 cm to inch', 'sin(45 deg) ^ 2'.",
parameters: z.object({ expression: z.string() }),
execute: async ({ expression }) => mathjs.evaluate(expression),
}),
// answer tool: the LLM will provide a structured answer
answer: tool({
description: 'A tool for providing the final answer.',
parameters: z.object({
steps: z.array(
z.object({
calculation: z.string(),
reasoning: z.string(),
}),
),
answer: z.string(),
}),
// no execute function - invoking it will terminate the agent
}),
},
toolChoice: 'required',
maxSteps: 10,
system:
'You are solving math problems. ' +
'Reason step by step. ' +
'Use the calculator when necessary. ' +
'The calculator can only do simple additions, subtractions, multiplications, and divisions. ' +
'When you give the final answer, provide an explanation for how you got it.',
prompt:
'A taxi driver earns $9461 per 1-hour work. ' +
'If he works 12 hours a day and in 1 hour he uses 14-liters petrol with price $134 for 1-liter. ' +
'How much money does he earn in one day?',
});
console.log(`FINAL TOOL CALLS: ${JSON.stringify(toolCalls, null, 2)}`);
```
## Accessing all steps
Calling `generateText` with `maxSteps` can result in several calls to the LLM (steps).
You can access information from all steps by using the `steps` property of the response.
```ts highlight="3,9-10"
import { generateText } from 'ai';
const { steps } = await generateText({
model: openai('gpt-4-turbo'),
maxSteps: 10,
// ...
});
// extract all tool calls from the steps:
const allToolCalls = steps.flatMap(step => step.toolCalls);
```
## Getting notified on each completed step
You can use the `onStepFinish` callback to get notified on each completed step.
It is triggered when a step is finished,
i.e. all text deltas, tool calls, and tool results for the step are available.
```tsx highlight="6-8"
import { generateText } from 'ai';
const result = await generateText({
model: yourModel,
maxSteps: 10,
onStepFinish({ text, toolCalls, toolResults, finishReason, usage }) {
// your own logic, e.g. for saving the chat history or recording usage
},
// ...
});
```
---
title: Prompt Engineering
description: Learn how to develop prompts with AI SDK Core.
---
# Prompt Engineering
## Tips
### Prompts for Tools
When you create prompts that include tools, getting good results can be tricky as the number and complexity of your tools increases.
Here are a few tips to help you get the best results:
1. Use a model that is strong at tool calling, such as `gpt-4` or `gpt-4-turbo`. Weaker models will often struggle to call tools effectively and flawlessly.
1. Keep the number of tools low, e.g. to 5 or less.
1. Keep the complexity of the tool parameters low. Complex Zod schemas with many nested and optional elements, unions, etc. can be challenging for the model to work with.
1. Use semantically meaningful names for your tools, parameters, parameter properties, etc. The more information you pass to the model, the better it can understand what you want.
1. Add `.describe("...")` to your Zod schema properties to give the model hints about what a particular property is for.
1. When the output of a tool might be unclear to the model and there are dependencies between tools, use the `description` field of a tool to provide information about the output of the tool execution.
1. You can include example input/outputs of tool calls in your prompt to help the model understand how to use the tools. Keep in mind that the tools work with JSON objects, so the examples should use JSON.
In general, the goal should be to give the model all information it needs in a clear way.
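As an illustration of several of these tips, here is a sketch of a tool with a semantically meaningful name, few parameters, and `.describe()` hints. The tool itself and its backend are made up for this example:
```ts
import { tool } from 'ai';
import { z } from 'zod';

// hypothetical flight search tool with descriptive names and property hints:
const searchFlights = tool({
  description:
    'Search for available flights between two airports. ' +
    'Returns a list of flights with price and departure time.',
  parameters: z.object({
    from: z.string().describe('IATA code of the departure airport, e.g. "SFO".'),
    to: z.string().describe('IATA code of the arrival airport, e.g. "JFK".'),
    date: z.string().date().describe('Departure date in YYYY-MM-DD format.'),
  }),
  execute: async ({ from, to, date }) => {
    // placeholder result - call your own flight search backend here:
    return [{ flightNumber: 'XY123', from, to, date, priceInUsd: 199 }];
  },
});
```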
### Tool & Structured Data Schemas
The mapping from Zod schemas to LLM inputs (typically JSON schema) is not always straightforward, since the mapping is not one-to-one.
#### Zod Dates
Zod expects JavaScript Date objects, but models return dates as strings.
You can specify and validate the date format using `z.string().datetime()` or `z.string().date()`,
and then use a Zod transformer to convert the string to a Date object.
```ts highlight="7-10"
const result = await generateObject({
model: openai('gpt-4-turbo'),
schema: z.object({
events: z.array(
z.object({
event: z.string(),
date: z
.string()
.date()
.transform(value => new Date(value)),
}),
),
}),
prompt: 'List 5 important events from the year 2000.',
});
```
## Debugging
### Inspecting Warnings
Not all providers support all AI SDK features.
Providers either throw exceptions or return warnings when they do not support a feature.
To check if your prompt, tools, and settings are handled correctly by the provider, you can check the call warnings:
```ts
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Hello, world!',
});
console.log(result.warnings);
```
### HTTP Request Bodies
You can inspect the raw HTTP request bodies for models that expose them, e.g. [OpenAI](/providers/ai-sdk-providers/openai).
This allows you to inspect the exact payload that is sent to the model provider in the provider-specific way.
Request bodies are available via the `request.body` property of the response:
```ts highlight="6"
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Hello, world!',
});
console.log(result.request.body);
```
---
title: Settings
description: Learn how to configure the AI SDK.
---
# Settings
Large language models (LLMs) typically provide settings to augment their output.
All AI SDK functions support the following common settings in addition to the model, the [prompt](./prompts), and additional provider-specific settings:
```ts highlight="3-5"
const result = await generateText({
model: yourModel,
maxTokens: 512,
temperature: 0.3,
maxRetries: 5,
prompt: 'Invent a new holiday and describe its traditions.',
});
```
Some providers do not support all common settings. If you use a setting with a
provider that does not support it, a warning will be generated. You can check
the `warnings` property in the result object to see if any warnings were
generated.
### `maxTokens`
Maximum number of tokens to generate.
### `temperature`
Temperature setting.
The value is passed through to the provider. The range depends on the provider and model.
For most providers, `0` means almost deterministic results, and higher values mean more randomness.
It is recommended to set either `temperature` or `topP`, but not both.
### `topP`
Nucleus sampling.
The value is passed through to the provider. The range depends on the provider and model.
For most providers, nucleus sampling is a number between 0 and 1.
E.g. 0.1 would mean that only tokens with the top 10% probability mass are considered.
It is recommended to set either `temperature` or `topP`, but not both.
### `topK`
Only sample from the top K options for each subsequent token.
Used to remove "long tail" low probability responses.
Recommended for advanced use cases only. You usually only need to use `temperature`.
### `presencePenalty`
The presence penalty affects the likelihood of the model to repeat information that is already in the prompt.
The value is passed through to the provider. The range depends on the provider and model.
For most providers, `0` means no penalty.
### `frequencyPenalty`
The frequency penalty affects the likelihood of the model to repeatedly use the same words or phrases.
The value is passed through to the provider. The range depends on the provider and model.
For most providers, `0` means no penalty.
### `stopSequences`
The stop sequences to use for stopping the text generation.
If set, the model will stop generating text when one of the stop sequences is generated.
Providers may have limits on the number of stop sequences.
### `seed`
It is the seed (integer) to use for random sampling.
If set and supported by the model, calls will generate deterministic results.
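For example, here is a sketch that combines several of these sampling-related settings (the values are illustrative):
```ts
import { generateText } from 'ai';

const result = await generateText({
  model: yourModel,
  topP: 0.9, // nucleus sampling (instead of temperature)
  stopSequences: ['\n\n'], // stop at the first blank line
  seed: 42, // deterministic results where the model supports it
  prompt: 'Write a one-line slogan for a coffee shop.',
});
```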
### `maxRetries`
Maximum number of retries. Set to 0 to disable retries. Default: `2`.
### `abortSignal`
An optional abort signal that can be used to cancel the call.
The abort signal can e.g. be forwarded from a user interface to cancel the call,
or used to define a timeout.
#### Example: Timeout
```ts
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Invent a new holiday and describe its traditions.',
abortSignal: AbortSignal.timeout(5000), // 5 seconds
});
```
### `headers`
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
You can use the request headers to provide additional information to the provider,
depending on what the provider supports. For example, some observability providers support
headers such as `Prompt-Id`.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = await generateText({
model: openai('gpt-4o'),
prompt: 'Invent a new holiday and describe its traditions.',
headers: {
'Prompt-Id': 'my-prompt-id',
},
});
```
The `headers` setting is for request-specific headers. You can also set
`headers` in the provider configuration. These headers will be sent with every
request made by the provider.
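For example, here is a sketch of provider-level headers using the OpenAI provider's `createOpenAI` factory (the header name is illustrative):
```ts
import { createOpenAI } from '@ai-sdk/openai';

// every request made through this provider instance includes the header:
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  headers: {
    'Prompt-Id': 'my-prompt-id',
  },
});
```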
---
title: Embeddings
description: Learn how to embed values with the AI SDK.
---
# Embeddings
Embeddings are a way to represent words, phrases, or images as vectors in a high-dimensional space.
In this space, similar words are close to each other, and the distance between words can be used to measure their similarity.
## Embedding a Single Value
The AI SDK provides the [`embed`](/docs/reference/ai-sdk-core/embed) function to embed single values, which is useful for tasks such as finding similar words
or phrases or clustering text.
You can use it with embeddings models, e.g. `openai.embedding('text-embedding-3-large')` or `mistral.embedding('mistral-embed')`.
```tsx
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';
// 'embedding' is a single embedding object (number[])
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: 'sunny day at the beach',
});
```
## Embedding Many Values
When loading data, e.g. when preparing a data store for retrieval-augmented generation (RAG),
it is often useful to embed many values at once (batch embedding).
The AI SDK provides the [`embedMany`](/docs/reference/ai-sdk-core/embed-many) function for this purpose.
Similar to `embed`, you can use it with embeddings models,
e.g. `openai.embedding('text-embedding-3-large')` or `mistral.embedding('mistral-embed')`.
```tsx
import { openai } from '@ai-sdk/openai';
import { embedMany } from 'ai';
// 'embeddings' is an array of embedding objects (number[][]).
// It is sorted in the same order as the input values.
const { embeddings } = await embedMany({
model: openai.embedding('text-embedding-3-small'),
values: [
'sunny day at the beach',
'rainy afternoon in the city',
'snowy night in the mountains',
],
});
```
## Embedding Similarity
After embedding values, you can calculate the similarity between them using the [`cosineSimilarity`](/docs/reference/ai-sdk-core/cosine-similarity) function.
This is useful to e.g. find similar words or phrases in a dataset.
You can also rank and filter related items based on their similarity.
```ts highlight={"2,10"}
import { openai } from '@ai-sdk/openai';
import { cosineSimilarity, embedMany } from 'ai';
const { embeddings } = await embedMany({
model: openai.embedding('text-embedding-3-small'),
values: ['sunny day at the beach', 'rainy afternoon in the city'],
});
console.log(
`cosine similarity: ${cosineSimilarity(embeddings[0], embeddings[1])}`,
);
```
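Building on this, here is a sketch that ranks a set of documents by their similarity to a query (the documents and query are illustrative):
```ts
import { openai } from '@ai-sdk/openai';
import { cosineSimilarity, embed, embedMany } from 'ai';

const documents = [
  'sunny day at the beach',
  'rainy afternoon in the city',
  'snowy night in the mountains',
];

const { embeddings } = await embedMany({
  model: openai.embedding('text-embedding-3-small'),
  values: documents,
});

const { embedding: queryEmbedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'good weather for swimming',
});

// rank the documents by cosine similarity to the query (highest first):
const ranked = documents
  .map((document, index) => ({
    document,
    similarity: cosineSimilarity(queryEmbedding, embeddings[index]),
  }))
  .sort((a, b) => b.similarity - a.similarity);

console.log(ranked);
```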
## Token Usage
Many providers charge based on the number of tokens used to generate embeddings.
Both `embed` and `embedMany` provide token usage information in the `usage` property of the result object:
```ts highlight={"4,9"}
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';
const { embedding, usage } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: 'sunny day at the beach',
});
console.log(usage); // { tokens: 10 }
```
## Settings
### Retries
Both `embed` and `embedMany` accept an optional `maxRetries` parameter of type `number`
that you can use to set the maximum number of retries for the embedding process.
It defaults to `2` retries (3 attempts in total). You can set it to `0` to disable retries.
```ts highlight={"7"}
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: 'sunny day at the beach',
maxRetries: 0, // Disable retries
});
```
### Abort Signals and Timeouts
Both `embed` and `embedMany` accept an optional `abortSignal` parameter of
type [`AbortSignal`](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal)
that you can use to abort the embedding process or set a timeout.
```ts highlight={"7"}
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: 'sunny day at the beach',
abortSignal: AbortSignal.timeout(1000), // Abort after 1 second
});
```
### Custom Headers
Both `embed` and `embedMany` accept an optional `headers` parameter of type `Record<string, string>`
that you can use to add custom headers to the embedding request.
```ts highlight={"7"}
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: 'sunny day at the beach',
headers: { 'X-Custom-Header': 'custom-value' },
});
```
## Embedding Providers & Models
Several providers offer embedding models:
| Provider | Model | Embedding Dimensions |
| ----------------------------------------------------------------------------------------- | ------------------------------- | -------------------- |
| [OpenAI](/providers/ai-sdk-providers/openai#embedding-models) | `text-embedding-3-large` | 3072 |
| [OpenAI](/providers/ai-sdk-providers/openai#embedding-models) | `text-embedding-3-small` | 1536 |
| [OpenAI](/providers/ai-sdk-providers/openai#embedding-models) | `text-embedding-ada-002` | 1536 |
| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai#embedding-models) | `text-embedding-004` | 768 |
| [Mistral](/providers/ai-sdk-providers/mistral#embedding-models) | `mistral-embed` | 1024 |
| [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-english-v3.0` | 1024 |
| [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-multilingual-v3.0` | 1024 |
| [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-english-light-v3.0` | 384 |
| [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-multilingual-light-v3.0` | 384 |
| [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-english-v2.0` | 4096 |
| [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-english-light-v2.0` | 1024 |
| [Cohere](/providers/ai-sdk-providers/cohere#embedding-models) | `embed-multilingual-v2.0` | 768 |
| [Amazon Bedrock](/providers/ai-sdk-providers/amazon-bedrock#embedding-models) | `amazon.titan-embed-text-v1` | 1024 |
| [Amazon Bedrock](/providers/ai-sdk-providers/amazon-bedrock#embedding-models) | `amazon.titan-embed-text-v2:0` | 1024 |
---
title: Image Generation
description: Learn how to generate images with the AI SDK.
---
# Image Generation
Image generation is an experimental feature.
The AI SDK provides the [`generateImage`](/docs/reference/ai-sdk-core/generate-image)
function to generate images based on a given prompt using an image model.
```tsx
import { experimental_generateImage as generateImage } from 'ai';
import { openai } from '@ai-sdk/openai';
const { image } = await generateImage({
model: openai.image('dall-e-3'),
prompt: 'Santa Claus driving a Cadillac',
size: '1024x1024',
});
```
You can access the image data using the `base64` or `uint8Array` properties:
```tsx
const base64 = image.base64; // base64 image data
const uint8Array = image.uint8Array; // Uint8Array image data
```
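For example, here is a sketch that writes the generated image to disk using Node's `fs` module:
```ts
import fs from 'node:fs';

// 'image' is the result of the generateImage call above:
fs.writeFileSync('image.png', image.uint8Array);
```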
### Generating Multiple Images
`generateImage` also supports generating multiple images at once:
```tsx highlight={"4"}
const { images } = await generateImage({
model: openai.image('dall-e-3'),
prompt: 'Santa Claus driving a Cadillac',
n: 4, // number of images to generate
});
```
### Provider-specific Settings
Image models often have provider- or even model-specific settings.
You can pass such settings to the `generateImage` function
using the `providerOptions` parameter. The options for the provider
(`openai` in the example below) become request body properties.
```tsx highlight={"5-7"}
const { image } = await generateImage({
model: openai.image('dall-e-3'),
prompt: 'Santa Claus driving a Cadillac',
size: '1024x1024',
providerOptions: {
openai: { style: 'vivid', quality: 'hd' },
},
});
```
### Abort Signals and Timeouts
`generateImage` accepts an optional `abortSignal` parameter of
type [`AbortSignal`](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal)
that you can use to abort the image generation process or set a timeout.
```ts highlight={"7"}
import { openai } from '@ai-sdk/openai';
import { experimental_generateImage as generateImage } from 'ai';
const { image } = await generateImage({
model: openai.image('dall-e-3'),
prompt: 'Santa Claus driving a Cadillac',
abortSignal: AbortSignal.timeout(1000), // Abort after 1 second
});
```
### Custom Headers
`generateImage` accepts an optional `headers` parameter of type `Record<string, string>`
that you can use to add custom headers to the image generation request.
```ts highlight={"7"}
import { openai } from '@ai-sdk/openai';
import { experimental_generateImage as generateImage } from 'ai';
const { image } = await generateImage({
model: openai.image('dall-e-3'),
prompt: 'Santa Claus driving a Cadillac',
headers: { 'X-Custom-Header': 'custom-value' },
});
```
## Image Models
| Provider | Model | Supported Sizes |
| --------------------------------------------------------- | ---------- | ------------------------------- |
| [OpenAI](/providers/ai-sdk-providers/openai#image-models) | `dall-e-3` | 1024x1024, 1792x1024, 1024x1792 |
| [OpenAI](/providers/ai-sdk-providers/openai#image-models) | `dall-e-2` | 256x256, 512x512, 1024x1024 |
---
title: Provider Management
description: Learn how to work with multiple providers
---
# Provider Management
Provider management is an experimental feature.
When you work with multiple providers and models, it is often desirable to manage them in a central place
and access the models through simple string ids.
The AI SDK offers [custom providers](/docs/reference/ai-sdk-core/custom-provider) and
a [provider registry](/docs/reference/ai-sdk-core/provider-registry) for this purpose.
With custom providers, you can **pre-configure model settings**, **provide model name aliases**,
and **limit the available models**.
The provider registry lets you mix **multiple providers** and access them through simple string ids.
## Custom Providers
You can create a [custom provider](/docs/reference/ai-sdk-core/custom-provider) using `experimental_customProvider`.
### Example: custom model settings
You might want to override the default model settings for a provider or provide model name aliases
with pre-configured settings.
```ts
import { openai as originalOpenAI } from '@ai-sdk/openai';
import { experimental_customProvider as customProvider } from 'ai';
// custom provider with different model settings:
export const openai = customProvider({
languageModels: {
// replacement model with custom settings:
'gpt-4o': originalOpenAI('gpt-4o', { structuredOutputs: true }),
// alias model with custom settings:
'gpt-4o-mini-structured': originalOpenAI('gpt-4o-mini', {
structuredOutputs: true,
}),
},
fallbackProvider: originalOpenAI,
});
```
### Example: model name alias
You can also provide model name aliases, so you can update the model version in one place in the future:
```ts
import { anthropic as originalAnthropic } from '@ai-sdk/anthropic';
import { experimental_customProvider as customProvider } from 'ai';
// custom provider with alias names:
export const anthropic = customProvider({
languageModels: {
opus: originalAnthropic('claude-3-opus-20240229'),
sonnet: originalAnthropic('claude-3-5-sonnet-20240620'),
haiku: originalAnthropic('claude-3-haiku-20240307'),
},
fallbackProvider: originalAnthropic,
});
```
### Example: limit available models
You can limit the available models in the system, even if you have multiple providers.
```ts
import { anthropic } from '@ai-sdk/anthropic';
import { openai } from '@ai-sdk/openai';
import { experimental_customProvider as customProvider } from 'ai';
export const myProvider = customProvider({
languageModels: {
'text-medium': anthropic('claude-3-5-sonnet-20240620'),
'text-small': openai('gpt-4o-mini'),
'structure-medium': openai('gpt-4o', { structuredOutputs: true }),
'structure-fast': openai('gpt-4o-mini', { structuredOutputs: true }),
},
embeddingModels: {
embedding: openai.textEmbeddingModel('text-embedding-3-small'),
},
// no fallback provider
});
```
## Provider Registry
You can create a [provider registry](/docs/reference/ai-sdk-core/provider-registry) with multiple providers and models using `experimental_createProviderRegistry`.
### Example: Setup
```ts filename={"registry.ts"}
import { anthropic } from '@ai-sdk/anthropic';
import { createOpenAI } from '@ai-sdk/openai';
import { experimental_createProviderRegistry as createProviderRegistry } from 'ai';
export const registry = createProviderRegistry({
// register provider with prefix and default setup:
anthropic,
// register provider with prefix and custom setup:
openai: createOpenAI({
apiKey: process.env.OPENAI_API_KEY,
}),
});
```
### Example: Use language models
You can access language models by using the `languageModel` method on the registry.
The provider id will become the prefix of the model id: `providerId:modelId`.
```ts highlight={"5"}
import { generateText } from 'ai';
import { registry } from './registry';
const { text } = await generateText({
model: registry.languageModel('openai:gpt-4-turbo'),
prompt: 'Invent a new holiday and describe its traditions.',
});
```
### Example: Use text embedding models
You can access text embedding models by using the `textEmbeddingModel` method on the registry.
The provider id will become the prefix of the model id: `providerId:modelId`.
```ts highlight={"5"}
import { embed } from 'ai';
import { registry } from './registry';
const { embedding } = await embed({
model: registry.textEmbeddingModel('openai:text-embedding-3-small'),
value: 'sunny day at the beach',
});
```
---
title: Language Model Middleware
description: Learn how to use middleware to enhance the behavior of language models
---
# Language Model Middleware
Language model middleware is an experimental feature.
Language model middleware is a way to enhance the behavior of language models
by intercepting and modifying the calls to the language model.
It can be used to add features like guardrails, RAG, caching, and logging
in a language model agnostic way. Such middleware can be developed and
distributed independently from the language models that they are applied to.
## Using Language Model Middleware
You can use language model middleware with the `wrapLanguageModel` function.
It takes a language model and a language model middleware and returns a new
language model that incorporates the middleware.
```ts
import { experimental_wrapLanguageModel as wrapLanguageModel } from 'ai';
const wrappedLanguageModel = wrapLanguageModel({
model: yourModel,
middleware: yourLanguageModelMiddleware,
});
```
The wrapped language model can be used just like any other language model, e.g. in `streamText`:
```ts highlight="2"
const result = streamText({
model: wrappedLanguageModel,
prompt: 'What cities are in the United States?',
});
```
## Implementing Language Model Middleware
Implementing language model middleware is advanced functionality and requires
a solid understanding of the [language model
specification](https://github.com/vercel/ai/blob/main/packages/provider/src/language-model/v1/language-model-v1.ts).
You can implement any of the following three functions to modify the behavior of the language model:
1. `transformParams`: Transforms the parameters before they are passed to the language model, for both `doGenerate` and `doStream`.
2. `wrapGenerate`: Wraps the `doGenerate` method of the [language model](https://github.com/vercel/ai/blob/main/packages/provider/src/language-model/v1/language-model-v1.ts).
You can modify the parameters, call the language model, and modify the result.
3. `wrapStream`: Wraps the `doStream` method of the [language model](https://github.com/vercel/ai/blob/main/packages/provider/src/language-model/v1/language-model-v1.ts).
You can modify the parameters, call the language model, and modify the result.
Here are some examples of how to implement language model middleware:
## Examples
These examples are not meant to be used in production. They are just to show
how you can use middleware to enhance the behavior of language models.
### Logging
This example shows how to log the parameters and generated text of a language model call.
```ts
import type {
Experimental_LanguageModelV1Middleware as LanguageModelV1Middleware,
LanguageModelV1StreamPart,
} from 'ai';
export const yourLogMiddleware: LanguageModelV1Middleware = {
wrapGenerate: async ({ doGenerate, params }) => {
console.log('doGenerate called');
console.log(`params: ${JSON.stringify(params, null, 2)}`);
const result = await doGenerate();
console.log('doGenerate finished');
console.log(`generated text: ${result.text}`);
return result;
},
wrapStream: async ({ doStream, params }) => {
console.log('doStream called');
console.log(`params: ${JSON.stringify(params, null, 2)}`);
const { stream, ...rest } = await doStream();
let generatedText = '';
const transformStream = new TransformStream<
LanguageModelV1StreamPart,
LanguageModelV1StreamPart
>({
transform(chunk, controller) {
if (chunk.type === 'text-delta') {
generatedText += chunk.textDelta;
}
controller.enqueue(chunk);
},
flush() {
console.log('doStream finished');
console.log(`generated text: ${generatedText}`);
},
});
return {
stream: stream.pipeThrough(transformStream),
...rest,
};
},
};
```
### Caching
This example shows how to build a simple cache for the generated text of a language model call.
```ts
import type { Experimental_LanguageModelV1Middleware as LanguageModelV1Middleware } from 'ai';
const cache = new Map();
export const yourCacheMiddleware: LanguageModelV1Middleware = {
wrapGenerate: async ({ doGenerate, params }) => {
const cacheKey = JSON.stringify(params);
if (cache.has(cacheKey)) {
return cache.get(cacheKey);
}
const result = await doGenerate();
cache.set(cacheKey, result);
return result;
},
// here you would implement the caching logic for streaming
};
```
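For streaming, one possible approach (a rough sketch, not production code) is to record the stream parts on a cache miss and replay them on a hit, e.g. with the `simulateReadableStream` helper; `yourStreamCacheMiddleware` is a hypothetical name:
```ts
import type {
  Experimental_LanguageModelV1Middleware as LanguageModelV1Middleware,
  LanguageModelV1StreamPart,
} from 'ai';
import { simulateReadableStream } from 'ai/test';

const streamCache = new Map<string, LanguageModelV1StreamPart[]>();

export const yourStreamCacheMiddleware: LanguageModelV1Middleware = {
  wrapStream: async ({ doStream, params }) => {
    const cacheKey = JSON.stringify(params);

    // cache hit: replay the previously recorded stream parts
    const cached = streamCache.get(cacheKey);
    if (cached != null) {
      return {
        stream: simulateReadableStream({ chunks: cached }),
        rawCall: { rawPrompt: null, rawSettings: {} },
      };
    }

    // cache miss: record every stream part while passing it through unchanged
    const { stream, ...rest } = await doStream();
    const collectedParts: LanguageModelV1StreamPart[] = [];
    const recorder = new TransformStream<
      LanguageModelV1StreamPart,
      LanguageModelV1StreamPart
    >({
      transform(chunk, controller) {
        collectedParts.push(chunk);
        controller.enqueue(chunk);
      },
      flush() {
        streamCache.set(cacheKey, collectedParts);
      },
    });

    return { stream: stream.pipeThrough(recorder), ...rest };
  },
};
```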
### Retrieval Augmented Generation (RAG)
This example shows how to use RAG as middleware.
Helper functions like `getLastUserMessageText` and `findSources` are not part
of the AI SDK. They are just used in this example to illustrate the concept of
RAG.
```ts
import type { Experimental_LanguageModelV1Middleware as LanguageModelV1Middleware } from 'ai';
export const yourRagMiddleware: LanguageModelV1Middleware = {
transformParams: async ({ params }) => {
const lastUserMessageText = getLastUserMessageText({
prompt: params.prompt,
});
if (lastUserMessageText == null) {
return params; // do not use RAG (send unmodified parameters)
}
const instruction =
'Use the following information to answer the question:\n' +
findSources({ text: lastUserMessageText })
.map(chunk => JSON.stringify(chunk))
.join('\n');
return addToLastUserMessage({ params, text: instruction });
},
};
```
### Guardrails
Guardrails are a way to ensure that the generated text of a language model call
is safe and appropriate. This example shows how to use guardrails as middleware.
```ts
import type { Experimental_LanguageModelV1Middleware as LanguageModelV1Middleware } from 'ai';
export const yourGuardrailMiddleware: LanguageModelV1Middleware = {
wrapGenerate: async ({ doGenerate }) => {
const { text, ...rest } = await doGenerate();
// filtering approach, e.g. for PII or other sensitive information:
const cleanedText = text?.replace(/badword/g, '');
return { text: cleanedText, ...rest };
},
// here you would implement the guardrail logic for streaming
// Note: streaming guardrails are difficult to implement, because
// you do not know the full content of the stream until it's finished.
};
```
---
title: Error Handling
description: Learn how to handle errors in the AI SDK Core
---
# Error Handling
## Handling regular errors
Regular errors are thrown and can be handled using the `try/catch` block.
```ts highlight="3,8-10"
import { generateText } from 'ai';
try {
const { text } = await generateText({
model: yourModel,
prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
} catch (error) {
// handle error
}
```
See [Error Types](/docs/reference/ai-sdk-errors) for more information on the different types of errors that may be thrown.
## Handling streaming errors (simple streams)
When errors occur during streams that do not support error chunks,
the error is thrown as a regular error.
You can handle these errors using the `try/catch` block.
```ts highlight="3,12-14"
import { streamText } from 'ai';
try {
const { textStream } = streamText({
model: yourModel,
prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
for await (const textPart of textStream) {
process.stdout.write(textPart);
}
} catch (error) {
// handle error
}
```
## Handling streaming errors (streaming with `error` support)
Full streams support error parts.
You can handle those parts similar to other parts.
It is recommended to also add a try-catch block for errors that
happen outside of the streaming.
```ts highlight="13-17"
import { streamText } from 'ai';
try {
const { fullStream } = streamText({
model: yourModel,
prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
for await (const part of fullStream) {
switch (part.type) {
// ... handle other part types
case 'error': {
const error = part.error;
// handle error
break;
}
}
}
} catch (error) {
// handle error
}
```
---
title: Testing
description: Learn how to use AI SDK Core mock providers for testing.
---
# Testing
Testing language models can be challenging, because they are non-deterministic
and calling them is slow and expensive.
To enable you to unit test your code that uses the AI SDK, the AI SDK Core
includes mock providers and test helpers. You can import the following helpers from `ai/test`:
- `MockEmbeddingModelV1`: A mock embedding model using the [embedding model v1 specification](https://github.com/vercel/ai/blob/main/packages/provider/src/embedding-model/v1/embedding-model-v1.ts).
- `MockLanguageModelV1`: A mock language model using the [language model v1 specification](https://github.com/vercel/ai/blob/main/packages/provider/src/language-model/v1/language-model-v1.ts).
- `mockId`: Provides an incrementing integer ID.
- `mockValues`: Iterates over an array of values with each call. Returns the last value when the array is exhausted.
- `simulateReadableStream`: Simulates a readable stream with delays.
With mock providers and test helpers, you can control the output of the AI SDK
and test your code in a repeatable and deterministic way without actually calling
a language model provider.
## Examples
You can use the test helpers with the AI Core functions in your unit tests:
### generateText
```ts
import { generateText } from 'ai';
import { MockLanguageModelV1 } from 'ai/test';
const result = await generateText({
model: new MockLanguageModelV1({
doGenerate: async () => ({
rawCall: { rawPrompt: null, rawSettings: {} },
finishReason: 'stop',
usage: { promptTokens: 10, completionTokens: 20 },
text: `Hello, world!`,
}),
}),
prompt: 'Hello, test!',
});
```
### streamText
```ts
import { streamText } from 'ai';
import { simulateReadableStream, MockLanguageModelV1 } from 'ai/test';
const result = streamText({
model: new MockLanguageModelV1({
doStream: async () => ({
stream: simulateReadableStream({
chunks: [
{ type: 'text-delta', textDelta: 'Hello' },
{ type: 'text-delta', textDelta: ', ' },
{ type: 'text-delta', textDelta: `world!` },
{
type: 'finish',
finishReason: 'stop',
logprobs: undefined,
usage: { completionTokens: 10, promptTokens: 3 },
},
],
}),
rawCall: { rawPrompt: null, rawSettings: {} },
}),
}),
prompt: 'Hello, test!',
});
```
### generateObject
```ts
import { generateObject } from 'ai';
import { MockLanguageModelV1 } from 'ai/test';
import { z } from 'zod';
const result = await generateObject({
model: new MockLanguageModelV1({
defaultObjectGenerationMode: 'json',
doGenerate: async () => ({
rawCall: { rawPrompt: null, rawSettings: {} },
finishReason: 'stop',
usage: { promptTokens: 10, completionTokens: 20 },
text: `{"content":"Hello, world!"}`,
}),
}),
schema: z.object({ content: z.string() }),
prompt: 'Hello, test!',
});
```
### streamObject
```ts
import { streamObject } from 'ai';
import { simulateReadableStream, MockLanguageModelV1 } from 'ai/test';
import { z } from 'zod';
const result = streamObject({
model: new MockLanguageModelV1({
defaultObjectGenerationMode: 'json',
doStream: async () => ({
stream: simulateReadableStream({
chunks: [
{ type: 'text-delta', textDelta: '{ ' },
{ type: 'text-delta', textDelta: '"content": ' },
{ type: 'text-delta', textDelta: `"Hello, ` },
{ type: 'text-delta', textDelta: `world` },
{ type: 'text-delta', textDelta: `!"` },
{ type: 'text-delta', textDelta: ' }' },
{
type: 'finish',
finishReason: 'stop',
logprobs: undefined,
usage: { completionTokens: 10, promptTokens: 3 },
},
],
}),
rawCall: { rawPrompt: null, rawSettings: {} },
}),
}),
schema: z.object({ content: z.string() }),
prompt: 'Hello, test!',
});
```
### Simulate Data Stream Protocol Responses
You can also simulate [Data Stream Protocol](/docs/ai-sdk-ui/stream-protocol#data-stream-protocol) responses for testing,
debugging, or demonstration purposes.
Here is a Next.js example:
```ts filename="route.ts"
import { simulateReadableStream } from 'ai/test';
export async function POST(req: Request) {
return new Response(
simulateReadableStream({
initialDelayInMs: 1000, // Delay before the first chunk
chunkDelayInMs: 300, // Delay between chunks
chunks: [
`0:"This"\n`,
`0:" is an"\n`,
`0:"example."\n`,
`e:{"finishReason":"stop","usage":{"promptTokens":20,"completionTokens":50},"isContinued":false}\n`,
`d:{"finishReason":"stop","usage":{"promptTokens":20,"completionTokens":50}}\n`,
],
}).pipeThrough(new TextEncoderStream()),
{
status: 200,
headers: {
'X-Vercel-AI-Data-Stream': 'v1',
'Content-Type': 'text/plain; charset=utf-8',
},
},
);
}
```
---
title: Telemetry
description: Using OpenTelemetry with AI SDK Core
---
# Telemetry
AI SDK Telemetry is experimental and may change in the future.
The AI SDK uses [OpenTelemetry](https://opentelemetry.io/) to collect telemetry data.
OpenTelemetry is an open-source observability framework designed to provide
standardized instrumentation for collecting telemetry data.
Check out the [AI SDK Observability Integrations](/providers/observability)
to see providers that offer monitoring and tracing for AI SDK applications.
## Enabling telemetry
For Next.js applications, please follow the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry) to enable telemetry first.
You can then use the `experimental_telemetry` option to enable telemetry on specific function calls while the feature is experimental:
```ts highlight="4"
const result = await generateText({
model: openai('gpt-4-turbo'),
prompt: 'Write a short story about a cat.',
experimental_telemetry: { isEnabled: true },
});
```
When telemetry is enabled, you can also control whether the input values and the output values of the function are recorded.
By default, both are enabled. You can disable them by setting the `recordInputs` and `recordOutputs` options to `false`.
Disabling the recording of inputs and outputs can be useful for privacy, data transfer, and performance reasons.
You might for example want to disable recording inputs if they contain sensitive information.
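For example, a call that keeps telemetry enabled but skips recording the prompt might look like this (a minimal sketch; the prompt text is only illustrative):
```ts
const result = await generateText({
  model: openai('gpt-4-turbo'),
  prompt: 'Summarize this confidential report: ...',
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // do not record the prompt
    recordOutputs: true, // still record the generated text
  },
});
```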
## Telemetry Metadata
You can provide a `functionId` to identify the function that the telemetry data is for,
and `metadata` to include additional information in the telemetry data.
```ts highlight="6-10"
const result = await generateText({
model: openai('gpt-4-turbo'),
prompt: 'Write a short story about a cat.',
experimental_telemetry: {
isEnabled: true,
functionId: 'my-awesome-function',
metadata: {
something: 'custom',
someOtherThing: 'other-value',
},
},
});
```
## Custom Tracer
You may provide a `tracer` which must return an OpenTelemetry `Tracer`. This is useful in situations where
you want your traces to use a `TracerProvider` other than the one provided by the `@opentelemetry/api` singleton.
```ts highlight="7"
const tracerProvider = new NodeTracerProvider();
const result = await generateText({
model: openai('gpt-4-turbo'),
prompt: 'Write a short story about a cat.',
experimental_telemetry: {
isEnabled: true,
tracer: tracerProvider.getTracer('ai'),
},
});
```
## Collected Data
### generateText function
`generateText` records 3 types of spans:
- `ai.generateText` (span): the full length of the generateText call. It contains 1 or more `ai.generateText.doGenerate` spans.
It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
- `operation.name`: `ai.generateText` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.generateText"`
- `ai.prompt`: the prompt that was used when calling `generateText`
- `ai.response.text`: the text that was generated
- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
- `ai.response.finishReason`: the reason why the generation finished
- `ai.settings.maxSteps`: the maximum number of steps that were set
- `ai.generateText.doGenerate` (span): a provider doGenerate call. It can contain `ai.toolCall` spans.
It contains the [call LLM span information](#call-llm-span-information) and the following attributes:
- `operation.name`: `ai.generateText.doGenerate` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.generateText.doGenerate"`
- `ai.prompt.format`: the format of the prompt
- `ai.prompt.messages`: the messages that were passed into the provider
- `ai.prompt.tools`: array of stringified tool definitions. The tools can be of type `function` or `provider-defined`.
Function tools have a `name`, `description` (optional), and `parameters` (JSON schema).
Provider-defined tools have a `name`, `id`, and `args` (Record).
- `ai.prompt.toolChoice`: the stringified tool choice setting (JSON). It has a `type` property
(`auto`, `none`, `required`, `tool`), and if the type is `tool`, a `toolName` property with the specific tool.
- `ai.response.text`: the text that was generated
- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
- `ai.response.finishReason`: the reason why the generation finished
- `ai.toolCall` (span): a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.
### streamText function
`streamText` records 3 types of spans and 2 types of events:
- `ai.streamText` (span): the full length of the streamText call. It contains an `ai.streamText.doStream` span.
It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
- `operation.name`: `ai.streamText` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.streamText"`
- `ai.prompt`: the prompt that was used when calling `streamText`
- `ai.response.text`: the text that was generated
- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
- `ai.response.finishReason`: the reason why the generation finished
- `ai.settings.maxSteps`: the maximum number of steps that were set
- `ai.streamText.doStream` (span): a provider doStream call.
This span contains an `ai.stream.firstChunk` event and `ai.toolCall` spans.
It contains the [call LLM span information](#call-llm-span-information) and the following attributes:
- `operation.name`: `ai.streamText.doStream` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.streamText.doStream"`
- `ai.prompt.format`: the format of the prompt
- `ai.prompt.messages`: the messages that were passed into the provider
- `ai.prompt.tools`: array of stringified tool definitions. The tools can be of type `function` or `provider-defined`.
Function tools have a `name`, `description` (optional), and `parameters` (JSON schema).
Provider-defined tools have a `name`, `id`, and `args` (Record).
- `ai.prompt.toolChoice`: the stringified tool choice setting (JSON). It has a `type` property
(`auto`, `none`, `required`, `tool`), and if the type is `tool`, a `toolName` property with the specific tool.
- `ai.response.text`: the text that was generated
- `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
- `ai.response.msToFirstChunk`: the time it took to receive the first chunk in milliseconds
- `ai.response.msToFinish`: the time it took to receive the finish part of the LLM stream in milliseconds
- `ai.response.avgCompletionTokensPerSecond`: the average number of completion tokens per second
- `ai.response.finishReason`: the reason why the generation finished
- `ai.toolCall` (span): a tool call that is made as part of the streamText call. See [Tool call spans](#tool-call-spans) for more details.
- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
- `ai.response.msToFirstChunk`: the time it took to receive the first chunk
- `ai.stream.finish` (event): an event that is emitted when the finish part of the LLM stream is received.
It also records an `ai.stream.firstChunk` event when the first chunk of the stream is received.
### generateObject function
`generateObject` records 2 types of spans:
- `ai.generateObject` (span): the full length of the generateObject call. It contains 1 or more `ai.generateObject.doGenerate` spans.
It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
- `operation.name`: `ai.generateObject` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.generateObject"`
- `ai.prompt`: the prompt that was used when calling `generateObject`
- `ai.schema`: Stringified JSON schema version of the schema that was passed into the `generateObject` function
- `ai.schema.name`: the name of the schema that was passed into the `generateObject` function
- `ai.schema.description`: the description of the schema that was passed into the `generateObject` function
- `ai.response.object`: the object that was generated (stringified JSON)
- `ai.settings.mode`: the object generation mode, e.g. `json`
- `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
- `ai.generateObject.doGenerate` (span): a provider doGenerate call.
It contains the [call LLM span information](#call-llm-span-information) and the following attributes:
- `operation.name`: `ai.generateObject.doGenerate` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.generateObject.doGenerate"`
- `ai.prompt.format`: the format of the prompt
- `ai.prompt.messages`: the messages that were passed into the provider
- `ai.response.object`: the object that was generated (stringified JSON)
- `ai.settings.mode`: the object generation mode
- `ai.response.finishReason`: the reason why the generation finished
### streamObject function
`streamObject` records 2 types of spans and 1 type of event:
- `ai.streamObject` (span): the full length of the streamObject call. It contains 1 or more `ai.streamObject.doStream` spans.
It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
- `operation.name`: `ai.streamObject` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.streamObject"`
- `ai.prompt`: the prompt that was used when calling `streamObject`
- `ai.schema`: Stringified JSON schema version of the schema that was passed into the `streamObject` function
- `ai.schema.name`: the name of the schema that was passed into the `streamObject` function
- `ai.schema.description`: the description of the schema that was passed into the `streamObject` function
- `ai.response.object`: the object that was generated (stringified JSON)
- `ai.settings.mode`: the object generation mode, e.g. `json`
- `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
- `ai.streamObject.doStream` (span): a provider doStream call.
This span contains an `ai.stream.firstChunk` event.
It contains the [call LLM span information](#call-llm-span-information) and the following attributes:
- `operation.name`: `ai.streamObject.doStream` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.streamObject.doStream"`
- `ai.prompt.format`: the format of the prompt
- `ai.prompt.messages`: the messages that were passed into the provider
- `ai.settings.mode`: the object generation mode
- `ai.response.object`: the object that was generated (stringified JSON)
- `ai.response.msToFirstChunk`: the time it took to receive the first chunk
- `ai.response.finishReason`: the reason why the generation finished
- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
- `ai.response.msToFirstChunk`: the time it took to receive the first chunk
### embed function
`embed` records 2 types of spans:
- `ai.embed` (span): the full length of the embed call. It contains exactly one `ai.embed.doEmbed` span.
It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:
- `operation.name`: `ai.embed` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.embed"`
- `ai.value`: the value that was passed into the `embed` function
- `ai.embedding`: a JSON-stringified embedding
- `ai.embed.doEmbed` (span): a provider doEmbed call.
It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:
- `operation.name`: `ai.embed.doEmbed` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.embed.doEmbed"`
- `ai.values`: the values that were passed into the provider (array)
- `ai.embeddings`: an array of JSON-stringified embeddings
### embedMany function
`embedMany` records 2 types of spans:
- `ai.embedMany` (span): the full length of the embedMany call. It contains 1 or more `ai.embedMany.doEmbed` spans.
It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:
- `operation.name`: `ai.embedMany` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.embedMany"`
- `ai.values`: the values that were passed into the `embedMany` function
- `ai.embeddings`: an array of JSON-stringified embeddings
- `ai.embedMany.doEmbed` (span): a provider doEmbed call.
It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:
- `operation.name`: `ai.embedMany.doEmbed` and the functionId that was set through `telemetry.functionId`
- `ai.operationId`: `"ai.embedMany.doEmbed"`
- `ai.values`: the values that were sent to the provider
- `ai.embeddings`: an array of JSON-stringified embeddings for each value
## Span Details
### Basic LLM span information
Many spans that use LLMs (`ai.generateText`, `ai.generateText.doGenerate`, `ai.streamText`, `ai.streamText.doStream`,
`ai.generateObject`, `ai.generateObject.doGenerate`, `ai.streamObject`, `ai.streamObject.doStream`) contain the following attributes:
- `resource.name`: the functionId that was set through `telemetry.functionId`
- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.settings.maxRetries`: the maximum number of retries that were set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.completionTokens`: the number of completion tokens that were used
- `ai.usage.promptTokens`: the number of prompt tokens that were used
### Call LLM span information
Spans that correspond to individual LLM calls (`ai.generateText.doGenerate`, `ai.streamText.doStream`, `ai.generateObject.doGenerate`, `ai.streamObject.doStream`) contain
[basic LLM span information](#basic-llm-span-information) and the following attributes:
- `ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
- `ai.response.id`: the id of the response. Uses the ID from the provider when available.
- `ai.response.timestamp`: the timestamp of the response. Uses the timestamp from the provider when available.
- [Semantic Conventions for GenAI operations](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/)
- `gen_ai.system`: the provider that was used
- `gen_ai.request.model`: the model that was requested
- `gen_ai.request.temperature`: the temperature that was set
- `gen_ai.request.max_tokens`: the maximum number of tokens that were set
- `gen_ai.request.frequency_penalty`: the frequency penalty that was set
- `gen_ai.request.presence_penalty`: the presence penalty that was set
- `gen_ai.request.top_k`: the topK parameter value that was set
- `gen_ai.request.top_p`: the topP parameter value that was set
- `gen_ai.request.stop_sequences`: the stop sequences
- `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
- `gen_ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
- `gen_ai.response.id`: the id of the response. Uses the ID from the provider when available.
- `gen_ai.usage.input_tokens`: the number of prompt tokens that were used
- `gen_ai.usage.output_tokens`: the number of completion tokens that were used
### Basic embedding span information
Many spans that use embedding models (`ai.embed`, `ai.embed.doEmbed`, `ai.embedMany`, `ai.embedMany.doEmbed`) contain the following attributes:
- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.settings.maxRetries`: the maximum number of retries that were set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.tokens`: the number of tokens that were used
- `resource.name`: the functionId that was set through `telemetry.functionId`
### Tool call spans
Tool call spans (`ai.toolCall`) contain the following attributes:
- `operation.name`: `"ai.toolCall"`
- `ai.operationId`: `"ai.toolCall"`
- `ai.toolCall.name`: the name of the tool
- `ai.toolCall.id`: the id of the tool call
- `ai.toolCall.args`: the parameters of the tool call
- `ai.toolCall.result`: the result of the tool call. Only available if the tool call is successful and the result is serializable.
---
title: AI SDK Core
description: Learn about AI SDK Core.
---
# AI SDK Core
---
title: Overview
description: An overview of AI SDK UI.
---
# AI SDK UI
AI SDK UI is designed to help you build interactive chat, completion, and assistant applications with ease. It is a **framework-agnostic toolkit**, streamlining the integration of advanced AI functionalities into your applications.
AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently. With four main hooks — **`useChat`**, **`useCompletion`**, **`useObject`**, and **`useAssistant`** — you can incorporate real-time chat capabilities, text completions, streamed JSON, and interactive assistant features into your app.
- **[`useChat`](/docs/ai-sdk-ui/chatbot)** offers real-time streaming of chat messages, abstracting state management for inputs, messages, loading, and errors, allowing for seamless integration into any UI design.
- **[`useCompletion`](/docs/ai-sdk-ui/completion)** enables you to handle text completions in your applications, managing the prompt input and automatically updating the UI as new completions are streamed.
- **[`useObject`](/docs/ai-sdk-ui/object-generation)** is a hook that allows you to consume streamed JSON objects, providing a simple way to handle and display structured data in your application.
- **[`useAssistant`](/docs/ai-sdk-ui/openai-assistants)** is designed to facilitate interaction with OpenAI-compatible assistant APIs, managing UI state and updating it automatically as responses are streamed.
These hooks are designed to reduce the complexity and time required to implement AI interactions, letting you focus on creating exceptional user experiences.
## UI Framework Support
AI SDK UI supports the following frameworks: [React](https://react.dev/), [Svelte](https://svelte.dev/), [Vue.js](https://vuejs.org/), and [SolidJS](https://www.solidjs.com/).
Here is a comparison of the supported functions across these frameworks:
| Function                                                  | React     | Svelte    | Vue.js    | SolidJS   |
| --------------------------------------------------------- | --------- | --------- | --------- | --------- |
| [useChat](/docs/reference/ai-sdk-ui/use-chat)             | Supported | Supported | Supported | Supported |
| [useChat](/docs/reference/ai-sdk-ui/use-chat) attachments | Supported |           | Supported |           |
| [useCompletion](/docs/reference/ai-sdk-ui/use-completion) | Supported | Supported | Supported | Supported |
| [useObject](/docs/reference/ai-sdk-ui/use-object)         | Supported |           |           |           |
| [useAssistant](/docs/reference/ai-sdk-ui/use-assistant)   | Supported | Supported | Supported |           |
[Contributions](https://github.com/vercel/ai/blob/main/CONTRIBUTING.md) are
welcome to implement missing features for non-React frameworks.
## API Reference
Please check out the [AI SDK UI API Reference](/docs/reference/ai-sdk-ui) for more details on each function.
---
title: Chatbot
description: Learn how to use the useChat hook.
---
# Chatbot
The `useChat` hook makes it effortless to create a conversational user interface for your chatbot application. It enables the streaming of chat messages from your AI provider, manages the chat state, and updates the UI automatically as new messages arrive.
To summarize, the `useChat` hook provides the following features:
- **Message Streaming**: All the messages from the AI provider are streamed to the chat UI in real-time.
- **Managed States**: The hook manages the states for input, messages, loading, error and more for you.
- **Seamless Integration**: Easily integrate your chat AI into any design or layout with minimal effort.
In this guide, you will learn how to use the `useChat` hook to create a chatbot application with real-time message streaming.
Check out our [chatbot with tools guide](/docs/ai-sdk-ui/chatbot-with-tool-calling) to learn how to use tools in your chatbot.
Let's start with the following example first.
## Example
```tsx filename='app/page.tsx'
'use client';

import { useChat } from 'ai/react';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({});

  return (
    <>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input name="prompt" value={input} onChange={handleInputChange} />
        <button type="submit">Submit</button>
      </form>
    </>
  );
}
```
```ts filename='app/api/chat/route.ts'
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4-turbo'),
system: 'You are a helpful assistant.',
messages,
});
return result.toDataStreamResponse();
}
```
In the `Page` component, the `useChat` hook will send a request to your AI provider endpoint whenever the user submits a message.
The messages are then streamed back in real-time and displayed in the chat UI.
This enables a seamless chat experience where the user can see the AI response as soon as it is available,
without having to wait for the entire response to be received.
## Customized UI
`useChat` also provides ways to manage the chat message and input states via code, show loading and error states, and update messages without being triggered by user interactions.
### Loading State
The `isLoading` state returned by the `useChat` hook can be used for several
purposes:
- To show a loading spinner while the chatbot is processing the user's message.
- To show a "Stop" button to abort the current message.
- To disable the submit button.
```tsx filename='app/page.tsx' highlight="5,13-20,22-23"
'use client';
import { useChat } from 'ai/react';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit, isLoading, stop } =
    useChat({});

  return (
    <>
      {messages.map(message => (
        <div key={message.id}>{message.role}: {message.content}</div>
      ))}
      {isLoading && (
        <div>
          Loading...
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} disabled={isLoading} />
        <button type="submit" disabled={isLoading}>Submit</button>
      </form>
    </>
  );
}
```
### Error State
Similarly, the `error` state reflects the error object thrown during the fetch request.
It can be used to display an error message, disable the submit button, or show a retry button:
We recommend showing a generic error message to the user, such as "Something
went wrong." This is a good practice to avoid leaking information from the
server.
```tsx file="app/page.tsx" highlight="5,15-22,24"
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, error, reload } =
    useChat({});

  return (
    <>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      {error && (
        <>
          <div>An error occurred.</div>
          <button type="button" onClick={() => reload()}>
            Retry
          </button>
        </>
      )}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} disabled={error != null} />
      </form>
    </>
  );
}
```
Please also see the [error handling](/docs/ai-sdk-ui/error-handling) guide for more information.
### Modify messages
Sometimes, you may want to directly modify some existing messages. For example, a delete button can be added to each message to allow users to remove them from the chat history.
The `setMessages` function can help you achieve these tasks:
```tsx
const { messages, setMessages, ... } = useChat()

const handleDelete = (id) => {
  setMessages(messages.filter(message => message.id !== id))
}

return <>
  {messages.map(message => (
    <div key={message.id}>
      {message.content}
      <button onClick={() => handleDelete(message.id)}>Delete</button>
    </div>
  ))}
  ...
</>
```
You can think of `messages` and `setMessages` as a pair of `state` and `setState` in React.
### Controlled input
In the initial example, we have `handleSubmit` and `handleInputChange` callbacks that manage the input changes and form submissions. These are handy for common use cases, but you can also use uncontrolled APIs for more advanced scenarios such as form validation or customized components.
The following example demonstrates how to use more granular APIs like `setInput` and `append` with your custom input and submit button components:
```tsx
const { input, setInput, append } = useChat()

return <>
  <MyCustomInput value={input} onChange={value => setInput(value)} />
  <MySubmitButton onClick={() => {
    // Send a new message to the AI provider
    append({
      role: 'user',
      content: input,
    })
  }}/>
  ...
</>
```
### Cancelation and regeneration
It's also a common use case to abort the response message while it's still streaming back from the AI provider. You can do this by calling the `stop` function returned by the `useChat` hook.
```tsx
const { stop, isLoading, ... } = useChat()

return <>
  <button onClick={stop} disabled={!isLoading}>Stop</button>
  ...
</>
```
When the user clicks the "Stop" button, the fetch request will be aborted. This avoids consuming unnecessary resources and improves the UX of your chatbot application.
Similarly, you can also request the AI provider to reprocess the last message by calling the `reload` function returned by the `useChat` hook:
```tsx
const { reload, isLoading, ... } = useChat()

return <>
  <button onClick={reload} disabled={isLoading}>Regenerate</button>
  ...
</>
```
When the user clicks the "Regenerate" button, the AI provider will regenerate the last message and replace the current one correspondingly.
### Throttling UI Updates
This feature is currently only available for React.
By default, the `useChat` hook will trigger a render every time a new chunk is received.
You can throttle the UI updates with the `experimental_throttle` option.
```tsx filename="page.tsx" highlight="2-3"
const { messages, ... } = useChat({
// Throttle the messages and data updates to 50ms:
experimental_throttle: 50
})
```
## Event Callbacks
`useChat` provides optional event callbacks that you can use to handle different stages of the chatbot lifecycle:
- `onFinish`: Called when the assistant message is completed
- `onError`: Called when an error occurs during the fetch request.
- `onResponse`: Called when the response from the API is received.
These callbacks can be used to trigger additional actions, such as logging, analytics, or custom UI updates.
```tsx
import { Message } from 'ai/react';
const {
/* ... */
} = useChat({
onFinish: (message, { usage, finishReason }) => {
console.log('Finished streaming message:', message);
console.log('Token usage:', usage);
console.log('Finish reason:', finishReason);
},
onError: error => {
console.error('An error occurred:', error);
},
onResponse: response => {
console.log('Received HTTP response from server:', response);
},
});
```
It's worth noting that you can abort the processing by throwing an error in the `onResponse` callback. This will trigger the `onError` callback and stop the message from being appended to the chat UI. This can be useful for handling unexpected responses from the AI provider.
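For example, a minimal sketch of rejecting unexpected responses by throwing inside `onResponse` (the status check is only illustrative):
```tsx
const { messages } = useChat({
  onResponse: response => {
    // Throwing here aborts processing, triggers onError, and prevents
    // the message from being appended to the chat UI.
    if (!response.ok) {
      throw new Error(`Unexpected response status: ${response.status}`);
    }
  },
  onError: error => {
    console.error('An error occurred:', error);
  },
});
```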
## Request Configuration
### Custom headers, body, and credentials
By default, the `useChat` hook sends an HTTP POST request to the `/api/chat` endpoint with the message list as the request body. You can customize the request by passing additional options to the `useChat` hook:
```tsx
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: '/api/custom-chat',
headers: {
Authorization: 'your_token',
},
body: {
user_id: '123',
},
credentials: 'same-origin',
});
```
In this example, the `useChat` hook sends a POST request to the `/api/custom-chat` endpoint with the specified headers, additional body fields, and credentials for that fetch request. On your server side, you can handle the request with this additional information.
### Setting custom body fields per request
You can configure custom `body` fields on a per-request basis using the `body` option of the `handleSubmit` function.
This is useful if you want to pass in additional information to your backend that is not part of the message list.
```tsx filename="app/page.tsx" highlight="13-17"
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.role}: {m.content}</div>
      ))}
      <form
        onSubmit={event => {
          handleSubmit(event, {
            body: { customKey: 'customValue' },
          });
        }}
      >
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
You can retrieve these custom fields on your server side by destructuring the request body:
```ts filename="app/api/chat/route.ts" highlight="3"
export async function POST(req: Request) {
  // Extract additional information ("customKey") from the body of the request:
  const { messages, customKey } = await req.json();
  //...
}
```
## Controlling the response stream
With `streamText`, you can control how error messages and usage information are sent back to the client.
### Error Messages
By default, the error message is masked for security reasons.
The default error message is "An error occurred."
You can forward error messages or send your own error message by providing a `getErrorMessage` function:
```ts filename="app/api/chat/route.ts" highlight="13-27"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse({
getErrorMessage: error => {
if (error == null) {
return 'unknown error';
}
if (typeof error === 'string') {
return error;
}
if (error instanceof Error) {
return error.message;
}
return JSON.stringify(error);
},
});
}
```
### Usage Information
By default, the usage information is sent back to the client. You can disable it by setting the `sendUsage` option to `false`:
```ts filename="app/api/chat/route.ts" highlight="13"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
messages,
});
return result.toDataStreamResponse({
sendUsage: false,
});
}
```
### Text Streams
`useChat` can handle plain text streams by setting the `streamProtocol` option to `text`:
```tsx filename="app/page.tsx" highlight="7"
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages } = useChat({
    streamProtocol: 'text',
  });

  return <>...</>;
}
```
This configuration also works with other backend servers that stream plain text.
Check out the [stream protocol guide](/docs/ai-sdk-ui/stream-protocol) for more information.
When using `streamProtocol: 'text'`, tool calls, usage information and finish
reasons are not available.
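For example, a server route for this setup could stream plain text with `toTextStreamResponse` (a minimal sketch):
```ts filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  // Returns a plain text stream that matches `streamProtocol: 'text'` on the client.
  return result.toTextStreamResponse();
}
```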
## Empty Submissions
You can configure the `useChat` hook to allow empty submissions by setting the `allowEmptySubmit` option to `true`.
```tsx filename="app/page.tsx" highlight="13-17"
'use client';
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.role}: {m.content}</div>
      ))}
      <form
        onSubmit={event => {
          handleSubmit(event, {
            allowEmptySubmit: true,
          });
        }}
      >
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
## Attachments (Experimental)
The attachments feature is currently only available for React and Vue.js.
The `useChat` hook supports sending attachments along with a message as well as rendering them on the client. This can be useful for building applications that involve sending images, files, or other media content to the AI provider.
There are two ways to send attachments with a message, either by providing a `FileList` object or a list of URLs to the `handleSubmit` function:
### FileList
By using `FileList`, you can send multiple files as attachments along with a message using the file input element. The `useChat` hook will automatically convert them into data URLs and send them to the AI provider.
Currently, only `image/*` and `text/*` content types get automatically
converted into [multi-modal content
parts](https://sdk.vercel.ai/docs/foundations/prompts#multi-modal-messages).
You will need to handle other content types manually.
```tsx filename="app/page.tsx"
'use client';
import { useChat } from 'ai/react';
import { useRef, useState } from 'react';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading } =
    useChat();
  const [files, setFiles] = useState<FileList | undefined>(undefined);
  const fileInputRef = useRef<HTMLInputElement>(null);

  return (
    <form
      onSubmit={event => {
        // send the selected files as attachments with the message:
        handleSubmit(event, { experimental_attachments: files });
        setFiles(undefined);
        if (fileInputRef.current) fileInputRef.current.value = '';
      }}
    >
      <input type="file" multiple ref={fileInputRef}
        onChange={event => setFiles(event.target.files ?? undefined)} />
      <input value={input} onChange={handleInputChange} disabled={isLoading} />
      <button type="submit">Send</button>
    </form>
  );
}
```
### URLs
You can also send URLs as attachments along with a message. This can be useful for sending links to external resources or media content.
> **Note:** The URL can also be a data URL, which is a base64-encoded string that represents the content of a file. Currently, only `image/*` content types get automatically converted into [multi-modal content parts](https://sdk.vercel.ai/docs/foundations/prompts#multi-modal-messages). You will need to handle other content types manually.
```tsx filename="app/page.tsx"
'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';
import { Attachment } from '@ai-sdk/ui-utils';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading } =
    useChat();

  const [attachments] = useState<Attachment[]>([
    {
      name: 'earth.png',
      contentType: 'image/png',
      url: 'https://example.com/earth.png',
    },
    {
      name: 'moon.png',
      contentType: 'image/png',
      url: 'data:image/png;base64,iVBORw0KGgo...',
    },
  ]);

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {message.role}: {message.content}
        </div>
      ))}

      <form
        onSubmit={event => {
          handleSubmit(event, { experimental_attachments: attachments });
        }}
      >
        <input value={input} onChange={handleInputChange} disabled={isLoading} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
---
title: Chatbot with Tools
description: Learn how to use tools with the useChat hook.
---
# Chatbot with Tools
With [`useChat`](/docs/reference/ai-sdk-ui/use-chat) and [`streamText`](/docs/reference/ai-sdk-core/stream-text), you can use tools in your chatbot application.
The AI SDK supports three types of tools in this context:
1. Automatically executed server-side tools
2. Automatically executed client-side tools
3. Tools that require user interaction, such as confirmation dialogs
The flow is as follows:
1. The user enters a message in the chat UI.
1. The message is sent to the API route.
1. In your server side route, the language model generates tool calls during the `streamText` call.
1. All tool calls are forwarded to the client.
1. Server-side tools are executed using their `execute` method and their results are forwarded to the client.
1. Client-side tools that should be automatically executed are handled with the `onToolCall` callback.
You can return the tool result from the callback.
1. Client-side tools that require user interaction can be displayed in the UI.
The tool calls and results are available in the `toolInvocations` property of the last assistant message.
1. When the user interaction is done, `addToolResult` can be used to add the tool result to the chat.
1. When there are tool calls in the last assistant message and all tool results are available, the client sends the updated messages back to the server.
This triggers another iteration of this flow.
The tool call and tool executions are integrated into the assistant message as `toolInvocations`.
A tool invocation is at first a tool call, and then it becomes a tool result when the tool is executed.
The tool result contains all information about the tool call as well as the result of the tool execution.
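As an illustration (a simplified sketch, not the exhaustive type), a tool invocation starts out as a call and later carries the result:
```ts
// while the tool is running, the invocation describes the call:
const pendingInvocation = {
  state: 'call',
  toolCallId: 'call-123', // illustrative id
  toolName: 'getWeatherInformation',
  args: { city: 'Berlin' },
};

// once the tool has been executed, the invocation also contains the result:
const completedInvocation = {
  ...pendingInvocation,
  state: 'result',
  result: 'sunny',
};
```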
In order to automatically send another request to the server when all tool
calls are server-side, you need to set
[`maxSteps`](/docs/reference/ai-sdk-ui/use-chat#max-steps) to a value greater
than 1 in the `useChat` options. It is disabled by default for backward
compatibility.
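For example, enabling automatic follow-up requests could look like this (a minimal sketch):
```tsx
const { messages, input, handleInputChange, handleSubmit } = useChat({
  // allow up to 5 automatic steps when all tool results are available:
  maxSteps: 5,
});
```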
## Example
In this example, we'll use three tools:
- `getWeatherInformation`: An automatically executed server-side tool that returns the weather in a given city.
- `askForConfirmation`: A user-interaction client-side tool that asks the user for confirmation.
- `getLocation`: An automatically executed client-side tool that returns a random city.
### API route
```tsx filename='app/api/chat/route.ts'
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { z } from 'zod';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
// server-side tool with execute function:
getWeatherInformation: {
description: 'show the weather in a given city to the user',
parameters: z.object({ city: z.string() }),
execute: async ({}: { city: string }) => {
const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy'];
return weatherOptions[
Math.floor(Math.random() * weatherOptions.length)
];
},
},
// client-side tool that starts user interaction:
askForConfirmation: {
description: 'Ask the user for confirmation.',
parameters: z.object({
message: z.string().describe('The message to ask for confirmation.'),
}),
},
// client-side tool that is automatically executed on the client:
getLocation: {
description:
'Get the user location. Always ask for confirmation before using this tool.',
parameters: z.object({}),
},
},
});
return result.toDataStreamResponse();
}
```
### Client-side page
The client-side page uses the `useChat` hook to create a chatbot application with real-time message streaming.
Tool invocations are displayed in the chat UI.
There are three things worth mentioning:
1. The [`onToolCall`](/docs/reference/ai-sdk-ui/use-chat#on-tool-call) callback is used to handle client-side tools that should be automatically executed.
In this example, the `getLocation` tool is a client-side tool that returns a random city.
2. The `toolInvocations` property of the last assistant message contains all tool calls and results.
The client-side tool `askForConfirmation` is displayed in the UI.
It asks the user for confirmation and displays the result once the user confirms or denies the execution.
The result is added to the chat using `addToolResult`.
3. The [`maxSteps`](/docs/reference/ai-sdk-ui/use-chat#max-steps) option is set to 5.
This enables several tool use iterations between the client and the server.
```tsx filename='app/page.tsx' highlight="9,12,33"
'use client';

import { ToolInvocation } from 'ai';
import { Message, useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, addToolResult } =
    useChat({
      maxSteps: 5,

      // run client-side tools that are automatically executed:
      async onToolCall({ toolCall }) {
        if (toolCall.toolName === 'getLocation') {
          const cities = [
            'New York',
            'Los Angeles',
            'Chicago',
            'San Francisco',
          ];
          return cities[Math.floor(Math.random() * cities.length)];
        }
      },
    });

  return (
    <>
      {messages?.map((m: Message) => (
        <div key={m.id}>
          <strong>{m.role}:</strong> {m.content}
          {m.toolInvocations?.map((toolInvocation: ToolInvocation) => {
            const toolCallId = toolInvocation.toolCallId;
            const addResult = (result: string) =>
              addToolResult({ toolCallId, result });

            // render confirmation tool (client-side tool with user interaction)
            if (toolInvocation.toolName === 'askForConfirmation') {
              return (
                <div key={toolCallId}>
                  {toolInvocation.args.message}
                  {'result' in toolInvocation ? (
                    <b>{toolInvocation.result}</b>
                  ) : (
                    <>
                      <button onClick={() => addResult('Yes')}>Yes</button>
                      <button onClick={() => addResult('No')}>No</button>
                    </>
                  )}
                </div>
              );
            }

            // other tools: render the tool call and its result (if available)
            return 'result' in toolInvocation ? (
              <div key={toolCallId}>
                {toolInvocation.toolName}: {toolInvocation.result}
              </div>
            ) : (
              <div key={toolCallId}>Calling {toolInvocation.toolName}...</div>
            );
          })}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </>
  );
}
```
## Tool call streaming
This feature is experimental.
You can stream tool calls while they are being generated by enabling the
`experimental_toolCallStreaming` option in `streamText`.
```tsx filename='app/api/chat/route.ts' highlight="5"
export async function POST(req: Request) {
// ...
const result = streamText({
experimental_toolCallStreaming: true,
// ...
});
return result.toDataStreamResponse();
}
```
When the flag is enabled, partial tool calls will be streamed as part of the data stream.
They are available through the `useChat` hook.
The `toolInvocations` property of assistant messages will also contain partial tool calls.
You can use the `state` property of the tool invocation to render the correct UI.
```tsx filename='app/page.tsx' highlight="9,10"
export default function Chat() {
  // ...
  return (
    <>
      {messages?.map((m: Message) => (
        <div key={m.id}>
          {m.toolInvocations?.map((toolInvocation: ToolInvocation) => {
            switch (toolInvocation.state) {
              case 'partial-call':
                return <>render partial tool call</>;
              case 'call':
                return <>render full tool call</>;
              case 'result':
                return <>render tool result</>;
            }
          })}
        </div>
      ))}
    </>
  );
}
```
## Server-side Multi-Step Calls
You can also use multi-step calls on the server-side with `streamText`.
This works when all invoked tools have an `execute` function on the server side.
```tsx filename='app/api/chat/route.ts' highlight="15-21,24"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { z } from 'zod';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
tools: {
getWeatherInformation: {
description: 'show the weather in a given city to the user',
parameters: z.object({ city: z.string() }),
// tool has execute function:
execute: async ({}: { city: string }) => {
const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy'];
return weatherOptions[
Math.floor(Math.random() * weatherOptions.length)
];
},
},
},
maxSteps: 5,
});
return result.toDataStreamResponse();
}
```
---
title: Generative User Interfaces
description: Learn how to build Generative UI with AI SDK UI.
---
# Generative User Interfaces
Generative user interfaces (generative UI) describe the process of allowing a large language model (LLM) to go beyond text and "generate UI". This creates a more engaging and AI-native experience for users.
At the core of generative UI are [tools](/docs/ai-sdk-core/tools-and-tool-calling), which are functions you provide to the model to perform specialized tasks like getting the weather in a location. The model can decide when and how to use these tools based on the context of the conversation.
Generative UI is the process of connecting the results of a tool call to a React component. Here's how it works:
1. You provide the model with a prompt or conversation history, along with a set of tools.
2. Based on the context, the model may decide to call a tool.
3. If a tool is called, it will execute and return data.
4. This data can then be passed to a React component for rendering.
By passing the tool results to React components, you can create a generative UI experience that's more engaging and adaptive to your needs.
## Build a Generative UI Chat Interface
Let's create a chat interface that handles text-based conversations and incorporates dynamic UI elements based on model responses.
### Basic Chat Implementation
Start with a basic chat implementation using the `useChat` hook:
```tsx filename="app/page.tsx"
'use client';
import { useChat } from 'ai/react';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
To handle the chat requests and model responses, set up an API route:
```ts filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
export async function POST(request: Request) {
const { messages } = await request.json();
const result = streamText({
model: openai('gpt-4o'),
system: 'You are a friendly assistant!',
messages,
maxSteps: 5,
});
return result.toDataStreamResponse();
}
```
This API route uses the `streamText` function to process chat messages and stream the model's responses back to the client.
### Create a Tool
Before enhancing your chat interface with dynamic UI elements, you need to create a tool and corresponding React component. A tool will allow the model to perform a specific action, such as fetching weather information.
Create a new file called `ai/tools.ts` with the following content:
```ts filename="ai/tools.ts"
import { tool as createTool } from 'ai';
import { z } from 'zod';
export const weatherTool = createTool({
description: 'Display the weather for a location',
parameters: z.object({
location: z.string(),
}),
execute: async function ({ location }) {
await new Promise(resolve => setTimeout(resolve, 2000));
return { weather: 'Sunny', temperature: 75, location };
},
});
export const tools = {
displayWeather: weatherTool,
};
```
In this file, you've created a tool called `weatherTool`. It simulates fetching weather information for a given location and returns the simulated data after a 2-second delay. In a real-world application, you would replace this simulation with an actual API call to a weather service.
### Update the API Route
Update the API route to include the tool you've defined:
```ts filename="app/api/chat/route.ts" highlight="3,13"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { tools } from '@/ai/tools';
export async function POST(request: Request) {
const { messages } = await request.json();
const result = streamText({
model: openai('gpt-4o'),
system: 'You are a friendly assistant!',
messages,
maxSteps: 5,
tools,
});
return result.toDataStreamResponse();
}
```
Now that you've defined the tool and added it to your `streamText` call, let's build a React component to display the weather information it returns.
### Create UI Components
Create a new file called `components/weather.tsx`:
```tsx filename="components/weather.tsx"
type WeatherProps = {
  temperature: number;
  weather: string;
  location: string;
};

export const Weather = ({ temperature, weather, location }: WeatherProps) => {
  return (
    <div>
      <h2>Current Weather for {location}</h2>
      <p>Condition: {weather}</p>
      <p>Temperature: {temperature}°C</p>
    </div>
  );
};
```
This component will display the weather information for a given location. It takes three props: `temperature`, `weather`, and `location` (exactly what the `weatherTool` returns).
### Render the Weather Component
Now that you have your tool and corresponding React component, let's integrate them into your chat interface. You'll render the Weather component when the model calls the weather tool.
To check if the model has called a tool, you can use the `toolInvocations` property of the message object. This property contains information about any tools that were invoked in that generation, including `toolCallId`, `toolName`, `args`, `state`, and `result`.
Update your `page.tsx` file:
```tsx filename="app/page.tsx" highlight="3,15-36"
'use client';
import { useChat } from 'ai/react';
import { Weather } from '@/components/weather';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          <div>{message.role === 'user' ? 'User: ' : 'AI: '}</div>
          <div>{message.content}</div>
          <div>
            {message.toolInvocations?.map(toolInvocation => {
              const { toolName, toolCallId, state } = toolInvocation;

              if (state === 'result') {
                if (toolName === 'displayWeather') {
                  const { result } = toolInvocation;
                  return (
                    <div key={toolCallId}>
                      <Weather {...result} />
                    </div>
                  );
                }
              } else {
                return (
                  <div key={toolCallId}>
                    {toolName === 'displayWeather' ? (
                      <div>Loading weather...</div>
                    ) : null}
                  </div>
                );
              }
            })}
          </div>
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
In this updated code snippet, you:
1. Check if the message has `toolInvocations`.
2. Check if the tool invocation state is 'result'.
3. If it's a result and the tool name is 'displayWeather', render the Weather component.
4. If the tool invocation state is not 'result', show a loading message.
This approach allows you to dynamically render UI components based on the model's responses, creating a more interactive and context-aware chat experience.
## Expanding Your Generative UI Application
You can enhance your chat application by adding more tools and components, creating a richer and more versatile user experience. Here's how you can expand your application:
### Adding More Tools
To add more tools, simply define them in your `ai/tools.ts` file:
```ts
// Add a new stock tool
export const stockTool = createTool({
description: 'Get price for a stock',
parameters: z.object({
symbol: z.string(),
}),
execute: async function ({ symbol }) {
// Simulated API call
await new Promise(resolve => setTimeout(resolve, 2000));
return { symbol, price: 100 };
},
});
// Update the tools object
export const tools = {
displayWeather: weatherTool,
getStockPrice: stockTool,
};
```
Now, create a new file called `components/stock.tsx`:
```tsx
type StockProps = {
  price: number;
  symbol: string;
};

export const Stock = ({ price, symbol }: StockProps) => {
  return (
    <div>
      <h2>Stock Information</h2>
      <p>Symbol: {symbol}</p>
      <p>Price: ${price}</p>
    </div>
  );
};
```
Finally, update your `page.tsx` file to include the new Stock component:
```tsx
'use client';
import { useChat } from 'ai/react';
import { Weather } from '@/components/weather';
import { Stock } from '@/components/stock';

export default function Page() {
  const { messages, input, setInput, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          <div>{message.role}</div>
          <div>{message.content}</div>
          {message.toolInvocations?.map(toolInvocation => {
            const { toolName, toolCallId, state } = toolInvocation;

            if (state === 'result') {
              if (toolName === 'displayWeather') {
                const { result } = toolInvocation;
                return <Weather key={toolCallId} {...result} />;
              }
              if (toolName === 'getStockPrice') {
                const { result } = toolInvocation;
                return <Stock key={toolCallId} {...result} />;
              }
            }
            return <div key={toolCallId}>Loading {toolName}...</div>;
          })}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={e => setInput(e.target.value)} />
      </form>
    </div>
  );
}
```
By following this pattern, you can continue to add more tools and components, expanding the capabilities of your Generative UI application.
---
title: Completion
description: Learn how to use the useCompletion hook.
---
# Completion
The `useCompletion` hook allows you to create a user interface to handle text completions in your application. It enables the streaming of text completions from your AI provider, manages the state for chat input, and updates the UI automatically as new messages are received.
In this guide, you will learn how to use the `useCompletion` hook in your application to generate text completions and stream them in real-time to your users.
## Example
```tsx filename='app/page.tsx'
'use client';
import { useCompletion } from 'ai/react';

export default function Page() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/completion',
  });

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Submit</button>
      </form>
      <div>{completion}</div>
    </div>
  );
}
```
```ts filename='app/api/completion/route.ts'
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { prompt }: { prompt: string } = await req.json();
const result = streamText({
model: openai('gpt-3.5-turbo'),
prompt,
});
return result.toDataStreamResponse();
}
```
In the `Page` component, the `useCompletion` hook will send a request to your AI provider endpoint whenever the user submits a message. The completion is then streamed back in real-time and displayed in the UI.
This enables a seamless text completion experience where the user can see the AI response as soon as it is available, without having to wait for the entire response to be received.
## Customized UI
`useCompletion` also provides ways to manage the prompt via code, show loading and error states, and update messages without being triggered by user interactions.
### Loading and error states
To show a loading spinner while the chatbot is processing the user's message, you can use the `isLoading` state returned by the `useCompletion` hook:
```tsx
const { isLoading, ... } = useCompletion()

return (
  <>
    {isLoading ? <Spinner /> : null}
  </>
)
```
Similarly, the `error` state reflects the error object thrown during the fetch request. It can be used to display an error message, or show a toast notification:
```tsx
const { error, ... } = useCompletion()

useEffect(() => {
  if (error) {
    toast.error(error.message)
  }
}, [error])

// Or display the error message in the UI:
return (
  <>
    {error ? <div>{error.message}</div> : null}
  </>
)
```
### Controlled input
In the initial example, we have `handleSubmit` and `handleInputChange` callbacks that manage the input changes and form submissions. These are handy for common use cases, but you can also use uncontrolled APIs for more advanced scenarios such as form validation or customized components.
The following example demonstrates how to use more granular APIs like `setInput` with your custom input and submit button components:
```tsx
const { input, setInput } = useCompletion();

return (
  <>
    <MyCustomInput value={input} onChange={value => setInput(value)} />
  </>
);
```
### Cancelation
It's also a common use case to abort the response message while it's still streaming back from the AI provider. You can do this by calling the `stop` function returned by the `useCompletion` hook.
```tsx
const { stop, isLoading, ... } = useCompletion()

return (
  <>
    <button onClick={stop} disabled={!isLoading}>Stop</button>
  </>
)
```
When the user clicks the "Stop" button, the fetch request will be aborted. This avoids consuming unnecessary resources and improves the UX of your application.
### Throttling UI Updates
This feature is currently only available for React.
By default, the `useCompletion` hook will trigger a render every time a new chunk is received.
You can throttle the UI updates with the `experimental_throttle` option.
```tsx filename="page.tsx" highlight="2-3"
const { completion, ... } = useCompletion({
// Throttle the completion and data updates to 50ms:
experimental_throttle: 50
})
```
## Event Callbacks
`useCompletion` also provides optional event callbacks that you can use to handle different stages of the chatbot lifecycle. These callbacks can be used to trigger additional actions, such as logging, analytics, or custom UI updates.
```tsx
const { ... } = useCompletion({
onResponse: (response: Response) => {
console.log('Received response from server:', response)
},
onFinish: (message: Message) => {
console.log('Finished streaming message:', message)
},
onError: (error: Error) => {
console.error('An error occurred:', error)
},
})
```
It's worth noting that you can abort the processing by throwing an error in the `onResponse` callback. This will trigger the `onError` callback and stop the message from being appended to the chat UI. This can be useful for handling unexpected responses from the AI provider.
## Configure Request Options
By default, the `useCompletion` hook sends an HTTP POST request to the `/api/completion` endpoint with the prompt as part of the request body. You can customize the request by passing additional options to the `useCompletion` hook:
```tsx
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
api: '/api/custom-completion',
headers: {
Authorization: 'your_token',
},
body: {
user_id: '123',
},
credentials: 'same-origin',
});
```
In this example, the `useCompletion` hook sends a POST request to the `/api/custom-completion` endpoint with the specified headers, additional body fields, and credentials for that fetch request. On your server side, you can handle the request with this additional information.
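On the server, these extra fields arrive in the request body alongside the prompt; a minimal sketch (reusing the `user_id` field from the example above):
```ts
export async function POST(req: Request) {
  // `prompt` is sent by useCompletion; `user_id` is the custom body field:
  const { prompt, user_id } = await req.json();
  console.log('Request from user:', user_id);
  // ...
}
```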
---
title: Object Generation
description: Learn how to use the useObject hook.
---
# Object Generation
`useObject` is an experimental feature and only available in React.
The [`useObject`](/docs/reference/ai-sdk-ui/use-object) hook allows you to create interfaces that represent a structured JSON object that is being streamed.
In this guide, you will learn how to use the `useObject` hook in your application to generate UIs for structured data on the fly.
## Example
The example shows a small notifications demo app that generates fake notifications in real-time.
### Schema
It is helpful to set up the schema in a separate file that is imported on both the client and server.
```ts filename='app/api/notifications/schema.ts'
import { z } from 'zod';
// define a schema for the notifications
export const notificationSchema = z.object({
notifications: z.array(
z.object({
name: z.string().describe('Name of a fictional person.'),
message: z.string().describe('Message. Do not use emojis or links.'),
}),
),
});
```
### Client
The client uses [`useObject`](/docs/reference/ai-sdk-ui/use-object) to stream the object generation process.
The results are partial and are displayed as they are received.
Please note the code for handling `undefined` values in the JSX.
```tsx filename='app/page.tsx'
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/notifications/schema';

export default function Page() {
  const { object, submit } = useObject({
    api: '/api/notifications',
    schema: notificationSchema,
  });

  return (
    <>
      <button onClick={() => submit('Messages during finals week.')}>
        Generate notifications
      </button>
      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </>
  );
}
```
```
### Server
On the server, we use [`streamObject`](/docs/reference/ai-sdk-core/stream-object) to stream the object generation process.
```typescript filename='app/api/notifications/route.ts'
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const context = await req.json();
const result = streamObject({
model: openai('gpt-4-turbo'),
schema: notificationSchema,
prompt:
`Generate 3 notifications for a messages app in this context:` + context,
});
return result.toTextStreamResponse();
}
```
## Customized UI
`useObject` also provides ways to show loading and error states:
### Loading State
The `isLoading` state returned by the `useObject` hook can be used for several
purposes:
- To show a loading spinner while the object is generated.
- To show a "Stop" button to abort the current message.
- To disable the submit button.
```tsx filename='app/page.tsx' highlight="6,13-20,24"
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/notifications/schema';
export default function Page() {
  const { isLoading, stop, object, submit } = useObject({
    api: '/api/notifications',
    schema: notificationSchema,
  });
  return (
    <>
      {isLoading && (
        <div>
          {/* Spinner is a placeholder for your own loading indicator */}
          <Spinner />
          <button type="button" onClick={() => stop()}>Stop</button>
        </div>
      )}
      <button onClick={() => submit('Messages during finals week.')} disabled={isLoading}>
        Generate notifications
      </button>
      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </>
  );
}
```
### Error State
Similarly, the `error` state reflects the error object thrown during the fetch request.
It can be used to display an error message, or to disable the submit button:
We recommend showing a generic error message to the user, such as "Something
went wrong." This is a good practice to avoid leaking information from the
server.
```tsx file="app/page.tsx" highlight="6,13"
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/notifications/schema';
export default function Page() {
  const { error, object, submit } = useObject({
    api: '/api/notifications',
    schema: notificationSchema,
  });
  return (
    <>
      {error && <div>An error occurred.</div>}
      <button onClick={() => submit('Messages during finals week.')}>
        Generate notifications
      </button>
      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </>
  );
}
```
## Event Callbacks
`useObject` provides optional event callbacks that you can use to handle life-cycle events.
- `onFinish`: Called when the object generation is completed.
- `onError`: Called when an error occurs during the fetch request.
These callbacks can be used to trigger additional actions, such as logging, analytics, or custom UI updates.
```tsx filename='app/page.tsx' highlight="10-20"
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/notifications/schema';
export default function Page() {
const { object, submit } = useObject({
api: '/api/notifications',
schema: notificationSchema,
onFinish({ object, error }) {
// typed object, undefined if schema validation fails:
console.log('Object generation completed:', object);
// error, undefined if schema validation succeeds:
console.log('Schema validation error:', error);
},
onError(error) {
// error during fetch request:
console.error('An error occurred:', error);
},
});
  return (
    <div>
      <button onClick={() => submit('Messages during finals week.')}>
        Generate notifications
      </button>
      <pre>{JSON.stringify(object, null, 2)}</pre>
    </div>
  );
}
```
## Configure Request Options
You can configure the API endpoint and optional headers using the `api` and `headers` settings.
```tsx highlight="2-5"
const { submit, object } = useObject({
api: '/api/use-object',
headers: {
'X-Custom-Header': 'CustomValue',
},
schema: yourSchema,
});
```
---
title: OpenAI Assistants
description: Learn how to use the useAssistant hook.
---
# OpenAI Assistants
The `useAssistant` hook allows you to handle the client state when interacting with an OpenAI compatible assistant API.
This hook is useful when you want to integrate assistant capabilities into your application,
with the UI updated automatically as the assistant is streaming its execution.
The `useAssistant` hook is supported in `ai/react`, `ai/svelte`, and `ai/vue`.
## Example
```tsx filename='app/page.tsx'
'use client';
import { Message, useAssistant } from 'ai/react';
export default function Chat() {
  const { status, messages, input, submitMessage, handleInputChange } =
    useAssistant({ api: '/api/assistant' });
  return (
    <div>
      {messages.map((m: Message) => (
        <div key={m.id}>{`${m.role}: ${m.content}`}</div>
      ))}
      <form onSubmit={submitMessage}>
        <input disabled={status !== 'awaiting_message'} value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
```tsx filename='app/api/assistant/route.ts'
import { AssistantResponse } from 'ai';
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY || '',
});
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
// Parse the request body
const input: {
threadId: string | null;
message: string;
} = await req.json();
// Create a thread if needed
const threadId = input.threadId ?? (await openai.beta.threads.create({})).id;
// Add a message to the thread
const createdMessage = await openai.beta.threads.messages.create(threadId, {
role: 'user',
content: input.message,
});
return AssistantResponse(
{ threadId, messageId: createdMessage.id },
async ({ forwardStream, sendDataMessage }) => {
// Run the assistant on the thread
const runStream = openai.beta.threads.runs.stream(threadId, {
assistant_id:
process.env.ASSISTANT_ID ??
(() => {
throw new Error('ASSISTANT_ID is not set');
})(),
});
// forward run status would stream message deltas
let runResult = await forwardStream(runStream);
// status can be: queued, in_progress, requires_action, cancelling, cancelled, failed, completed, or expired
while (
runResult?.status === 'requires_action' &&
runResult.required_action?.type === 'submit_tool_outputs'
) {
const tool_outputs =
runResult.required_action.submit_tool_outputs.tool_calls.map(
(toolCall: any) => {
const parameters = JSON.parse(toolCall.function.arguments);
switch (toolCall.function.name) {
// configure your tool calls here
default:
throw new Error(
`Unknown tool call function: ${toolCall.function.name}`,
);
}
},
);
runResult = await forwardStream(
openai.beta.threads.runs.submitToolOutputsStream(
threadId,
runResult.id,
{ tool_outputs },
),
);
}
},
);
}
```
## Customized UI
`useAssistant` also provides ways to manage the chat message and input states via code and show loading and error states.
### Loading and error states
To show a loading spinner while the assistant is running the thread, you can use the `status` state returned by the `useAssistant` hook:
```tsx
const { status, ... } = useAssistant()
return (
  <>
    {/* Spinner is a placeholder for your own loading indicator component */}
    {status === "in_progress" ? <Spinner /> : null}
  </>
)
```
Similarly, the `error` state reflects the error object thrown during the fetch request. It can be used to display an error message, or show a toast notification:
```tsx
const { error, ... } = useAssistant()
useEffect(() => {
if (error) {
toast.error(error.message)
}
}, [error])
// Or display the error message in the UI:
return (
<>
    {error ? <div>{error.message}</div> : null}
  </>
)
```
### Controlled input
In the initial example, we have `handleSubmit` and `handleInputChange` callbacks that manage the input changes and form submissions. These are handy for common use cases, but you can also use uncontrolled APIs for more advanced scenarios such as form validation or customized components.
The following example demonstrates how to use more granular APIs like `append` with your custom input and submit button components:
```tsx
const { append } = useAssistant();
return (
  <>
    {/* MySubmitButton is a placeholder for your own submit button component;
        `input` comes from your own controlled input state */}
    <MySubmitButton
      onClick={() => {
        // Send a new message to the AI provider
        append({
          role: 'user',
          content: input,
        });
      }}
    />
  </>
);
```
## Configure Request Options
By default, the `useAssistant` hook sends an HTTP POST request to the `/api/assistant` endpoint with the message and thread ID as part of the request body. You can customize the request by passing additional options to the `useAssistant` hook:
```tsx
const { messages, input, handleInputChange, handleSubmit } = useAssistant({
api: '/api/custom-completion',
headers: {
Authorization: 'your_token',
},
body: {
user_id: '123',
},
credentials: 'same-origin',
});
```
In this example, the `useAssistant` hook sends a POST request to the `/api/custom-completion` endpoint with the specified headers, additional body fields, and credentials for that fetch request. On the server side, you can then handle the request using this additional information.
---
title: Storing Messages
description: Learn how to store chat message history with the AI SDK.
---
# Storing Messages
The ability to store message history is essential for chatbot use cases.
The AI SDK simplifies the process of storing chat history through the `onFinish` callback of the `streamText` function.
`onFinish` is called after the model's response and all tool executions have completed.
It provides the final text, tool calls, tool results, and usage information,
making it an ideal place to e.g. store the chat history in a database.
## Implementing Persistent Chat History
To implement persistent chat storage, you can utilize the `onFinish` callback on the `streamText` function.
This callback is triggered upon the completion of the model's response and all tool executions,
making it an ideal place to handle the storage of each interaction.
### API Route Example
```tsx highlight="13-16"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
async onFinish({ text, toolCalls, toolResults, usage, finishReason }) {
// implement your own storage logic:
await saveChat({ text, toolCalls, toolResults });
},
});
return result.toDataStreamResponse();
}
```
### Server Action Example
```tsx highlight="10-13"
'use server';
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText } from 'ai';
export async function continueConversation(messages: CoreMessage[]) {
const result = streamText({
model: openai('gpt-4-turbo'),
messages,
async onFinish({ text, toolCalls, toolResults, finishReason, usage }) {
// implement your own storage logic:
await saveChat({ text, toolCalls, toolResults });
},
});
return result.toDataStreamResponse();
}
```
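In both examples, `saveChat` is a placeholder for your own persistence logic. A minimal sketch (the `db` client and its API are hypothetical) could look like this:
```ts
import { db } from '@/lib/db'; // hypothetical database client
export async function saveChat({
  text,
  toolCalls,
  toolResults,
}: {
  text: string;
  toolCalls: unknown[];
  toolResults: unknown[];
}) {
  // persist one completed interaction; adapt the shape to your own schema
  await db.insert('chats', {
    text,
    toolCalls: JSON.stringify(toolCalls),
    toolResults: JSON.stringify(toolResults),
    createdAt: new Date().toISOString(),
  });
}
```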
---
title: Streaming Custom Data
description: Learn how to stream custom data to the client.
---
# Streaming Custom Data
It is often useful to send additional data alongside the model's response.
For example, you may want to send status information, the message ids after storing them,
or references to content that the language model is referring to.
The AI SDK provides several helpers that allow you to stream additional data to the client
and attach it either to the `Message` or to the `data` object of the `useChat` hook:
- `createDataStream`: creates a data stream
- `createDataStreamResponse`: creates a response object that streams data
- `pipeDataStreamToResponse`: pipes a data stream to a server response object
The data is streamed as part of the response stream.
## Sending Custom Data from the Server
In your server-side route handler, you can use `createDataStreamResponse` and `pipeDataStreamToResponse` in combination with `streamText`.
You need to:
1. Call `createDataStreamResponse` or `pipeDataStreamToResponse` to get a callback function with a `DataStreamWriter`.
2. Write to the `DataStreamWriter` to stream additional data.
3. Merge the `streamText` result into the `DataStreamWriter`.
4. Return the response from `createDataStreamResponse` (if that method is used).
Here is an example:
```tsx highlight="7-10,16,19-23,25-26,30"
import { openai } from '@ai-sdk/openai';
import { generateId, createDataStreamResponse, streamText } from 'ai';
export async function POST(req: Request) {
const { messages } = await req.json();
// immediately start streaming (solves RAG issues with status, etc.)
return createDataStreamResponse({
execute: dataStream => {
dataStream.writeData('initialized call');
const result = streamText({
model: openai('gpt-4o'),
messages,
onChunk() {
dataStream.writeMessageAnnotation({ chunk: '123' });
},
onFinish() {
// message annotation:
dataStream.writeMessageAnnotation({
id: generateId(), // e.g. id from saved DB record
other: 'information',
});
// call annotation:
dataStream.writeData('call completed');
},
});
result.mergeIntoDataStream(dataStream);
},
onError: error => {
// Error messages are masked by default for security reasons.
// If you want to expose the error message to the client, you can do so here:
return error instanceof Error ? error.message : String(error);
},
});
}
```
You can also send stream data from custom backends, e.g. Python / FastAPI,
using the [Data Stream
Protocol](/docs/ai-sdk-ui/stream-protocol#data-stream-protocol).
## Processing Custom Data in `useChat`
The `useChat` hook automatically processes the streamed data and makes it available to you.
### Accessing Data
On the client, you can destructure `data` from the `useChat` hook which stores all `StreamData`
as a `JSONValue[]`.
```tsx
import { useChat } from 'ai/react';
const { data } = useChat();
```
### Accessing Message Annotations
Each message from the `useChat` hook has an optional `annotations` property that contains
the message annotations sent from the server.
Since the shape of the annotations depends on what you send from the server,
you have to destructure them in a type-safe way on the client side.
Here we just show the annotations as a JSON string:
```tsx highlight="9"
import { Message, useChat } from 'ai/react';
const { messages } = useChat();
const result = (
<>
    {messages?.map((m: Message) => (
      <div key={m.id}>{JSON.stringify(m.annotations)}</div>
    ))}
  </>
);
```
### Updating and Clearing Data
You can update and clear the `data` object of the `useChat` hook using the `setData` function.
```tsx
const { setData } = useChat();
// clear existing data
setData(undefined);
// set new data
setData([{ test: 'value' }]);
// transform existing data, e.g. adding additional values:
setData(currentData => [...currentData, { test: 'value' }]);
```
#### Example: Clear on Submit
```tsx highlight="18-21"
'use client';
import { Message, useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit, data, setData } =
useChat();
  return (
    <>
      {data && <pre>{JSON.stringify(data, null, 2)}</pre>}
      {messages?.map((m: Message) => (
        <div key={m.id}>{`${m.role}: ${m.content}`}</div>
      ))}
      <form
        onSubmit={e => {
          setData(undefined); // clear stream data before submitting
          handleSubmit(e);
        }}
      >
        <input value={input} onChange={handleInputChange} />
      </form>
    </>
  );
}
```
---
title: Error Handling
description: Learn how to handle errors in the AI SDK UI
---
# Error Handling
### Error Helper Object
Each AI SDK UI hook also returns an [error](/docs/reference/ai-sdk-ui/use-chat#error) object that you can use to render the error in your UI.
You can use the error object to show an error message, disable the submit button, or show a retry button.
We recommend showing a generic error message to the user, such as "Something
went wrong." This is a good practice to avoid leaking information from the
server.
```tsx file="app/page.tsx" highlight="7,17-24,30"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit, error, reload } =
useChat({});
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      {error && (
        <>
          <div>An error occurred.</div>
          <button type="button" onClick={() => reload()}>
            Retry
          </button>
        </>
      )}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} disabled={error != null} />
      </form>
    </div>
  );
}
```
#### Alternative: replace last message
Alternatively you can write a custom submit handler that replaces the last message when an error is present.
```tsx file="app/page.tsx" highlight="15-21,33"
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const {
handleInputChange,
handleSubmit,
error,
input,
messages,
setMessages,
} = useChat({});
function customSubmit(event: React.FormEvent) {
if (error != null) {
setMessages(messages.slice(0, -1)); // remove last message
}
handleSubmit(event);
}
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      {error && <div>An error occurred.</div>}
      <form onSubmit={customSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
### Error Handling Callback
Errors can be processed by passing an [`onError`](/docs/reference/ai-sdk-ui/use-chat#on-error) callback function as an option to the [`useChat`](/docs/reference/ai-sdk-ui/use-chat), [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion) or [`useAssistant`](/docs/reference/ai-sdk-ui/use-assistant) hooks.
The callback function receives an error object as an argument.
```tsx file="app/page.tsx" highlight="8-11"
import { useChat } from 'ai/react';
export default function Page() {
const {
/* ... */
} = useChat({
// handle error:
onError: error => {
console.error(error);
},
});
}
```
### Injecting Errors for Testing
You might want to create errors for testing.
You can easily do so by throwing an error in your route handler:
```ts file="app/api/chat/route.ts"
export async function POST(req: Request) {
throw new Error('This is a test error');
}
```
---
title: Stream Protocols
description: Learn more about the supported stream protocols in the AI SDK.
---
# Stream Protocols
AI SDK UI functions such as `useChat` and `useCompletion` support both text streams and data streams.
The stream protocol defines how the data is streamed to the frontend on top of the HTTP protocol.
This page describes both protocols and how to use them in the backend and frontend.
You can use this information to develop custom backends and frontends for your use case, e.g.,
to provide compatible API endpoints that are implemented in a different language such as Python.
For instance, here's an example using [FastAPI](https://github.com/vercel/ai/tree/main/examples/next-fastapi) as a backend.
## Text Stream Protocol
A text stream contains plain text chunks that are streamed to the frontend.
Each chunk is then appended together to form a full text response.
Text streams are supported by `useChat`, `useCompletion`, and `useObject`.
When you use `useChat` or `useCompletion`, you need to enable text streaming
by setting the `streamProtocol` option to `text`.
You can generate text streams with `streamText` in the backend.
When you call `toTextStreamResponse()` on the result object,
a streaming HTTP response is returned.
Text streams only support basic text data. If you need to stream other types
of data such as tool calls, use data streams.
### Text Stream Example
Here is a Next.js example that uses the text stream protocol:
```tsx filename='app/page.tsx' highlight="7"
'use client';
import { useCompletion } from 'ai/react';
export default function Page() {
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
streamProtocol: 'text',
});
  return (
    <form onSubmit={handleSubmit}>
      <input name="prompt" value={input} onChange={handleInputChange} />
      <button type="submit">Submit</button>
      <div>{completion}</div>
    </form>
  );
}
```
```ts filename='app/api/completion/route.ts' highlight="15"
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { prompt }: { prompt: string } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
prompt,
});
return result.toTextStreamResponse();
}
```
## Data Stream Protocol
A data stream follows a special protocol that the AI SDK provides to send information to the frontend.
Each stream part has the format `TYPE_ID:CONTENT_JSON\n`.
When you provide data streams from a custom backend, you need to set the
`x-vercel-ai-data-stream` header to `v1`.
The following stream parts are currently supported:
### Text Part
The text parts are appended to the message as they are received.
Format: `0:string\n`
Example: `0:"example"\n`
### Data Part
The data parts are parsed as JSON and appended to the message as they are received. The data part is available through `data`.
Format: `2:Array\n`
Example: `2:[{"key":"object1"},{"anotherKey":"object2"}]\n`
### Message Annotation Part
The message annotation parts are appended to the message as they are received. The annotation part is available through `annotations`.
Format: `8:Array\n`
Example: `8:[{"id":"message-123","other":"annotation"}]\n`
### Error Part
The error parts are appended to the message as they are received.
Format: `3:string\n`
Example: `3:"error message"\n`
### Tool Call Streaming Start Part
A part indicating the start of a streaming tool call. This part needs to be sent before any tool call delta for that tool call. Tool call streaming is optional; you can use tool call and tool result parts without it.
Format: `b:{toolCallId:string; toolName:string}\n`
Example: `b:{"toolCallId":"call-456","toolName":"streaming-tool"}\n`
### Tool Call Delta Part
A part representing a delta update for a streaming tool call.
Format: `c:{toolCallId:string; argsTextDelta:string}\n`
Example: `c:{"toolCallId":"call-456","argsTextDelta":"partial arg"}\n`
### Tool Call Part
A part representing a tool call. When there are streamed tool calls, the tool call part needs to come after the tool call streaming is finished.
Format: `9:{toolCallId:string; toolName:string; args:object}\n`
Example: `9:{"toolCallId":"call-123","toolName":"my-tool","args":{"some":"argument"}}\n`
### Tool Result Part
A part representing a tool result. The result part needs to be sent after the tool call part for that tool call.
Format: `a:{toolCallId:string; result:object}\n`
Example: `a:{"toolCallId":"call-123","result":"tool output"}\n`
### Finish Step Part
A part indicating that a step (i.e., one LLM API call in the backend) has been completed.
This part is necessary to correctly process multiple stitched assistant calls, e.g. when calling tools in the backend, and using steps in `useChat` at the same time.
It includes the following metadata:
- [`FinishReason`](/docs/reference/ai-sdk-ui/use-chat#on-finish-finish-reason)
- [`Usage`](/docs/reference/ai-sdk-ui/use-chat#on-finish-usage) for that step.
- `isContinued` to indicate if the step text will be continued in the next step.
The finish step part needs to come at the end of a step.
Format: `e:{finishReason:'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other' | 'unknown';usage:{promptTokens:number; completionTokens:number;},isContinued:boolean}\n`
Example: `e:{"finishReason":"stop","usage":{"promptTokens":10,"completionTokens":20},"isContinued":false}\n`
### Finish Message Part
A part indicating the completion of a message with additional metadata, such as [`FinishReason`](/docs/reference/ai-sdk-ui/use-chat#on-finish-finish-reason) and [`Usage`](/docs/reference/ai-sdk-ui/use-chat#on-finish-usage). This part needs to be the last part in the stream.
Format: `d:{finishReason:'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other' | 'unknown';usage:{promptTokens:number; completionTokens:number;}}\n`
Example: `d:{"finishReason":"stop","usage":{"promptTokens":10,"completionTokens":20}}\n`
The data stream protocol is supported
by `useChat` and `useCompletion` on the frontend and used by default.
`useCompletion` only supports the `text` and `data` stream parts.
On the backend, you can use `toDataStreamResponse()` from the `streamText` result object to return a streaming HTTP response.
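If you implement the protocol in a custom backend instead, you can emit the stream parts by hand. Here is a minimal sketch of a route handler that does so (the emitted text and token counts are illustrative):
```ts
export async function POST(req: Request) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    start(controller) {
      // text parts are appended to the message text
      controller.enqueue(encoder.encode('0:"Hello, "\n'));
      controller.enqueue(encoder.encode('0:"world!"\n'));
      // the finish message part must be the last part in the stream
      controller.enqueue(
        encoder.encode(
          'd:{"finishReason":"stop","usage":{"promptTokens":10,"completionTokens":20}}\n',
        ),
      );
      controller.close();
    },
  });
  return new Response(stream, {
    headers: {
      'Content-Type': 'text/plain; charset=utf-8',
      // required so the AI SDK UI hooks treat this as a data stream
      'x-vercel-ai-data-stream': 'v1',
    },
  });
}
```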
### Data Stream Example
Here is a Next.js example that uses the data stream protocol:
```tsx filename='app/page.tsx' highlight="7"
'use client';
import { useCompletion } from 'ai/react';
export default function Page() {
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
streamProtocol: 'data', // optional, this is the default
});
  return (
    <form onSubmit={handleSubmit}>
      <input name="prompt" value={input} onChange={handleInputChange} />
      <button type="submit">Submit</button>
      <div>{completion}</div>
    </form>
  );
}
```
```ts filename='app/api/completion/route.ts' highlight="15"
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const { prompt }: { prompt: string } = await req.json();
const result = streamText({
model: openai('gpt-4o'),
prompt,
});
return result.toDataStreamResponse();
}
```
---
title: AI SDK UI
description: Learn about the AI SDK UI.
---
# AI SDK UI
---
title: Overview
description: An overview of AI SDK RSC.
---
# AI SDK RSC
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
The `ai/rsc` package is compatible with frameworks that support React Server
Components.
[React Server Components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) (RSC) allow you to write UI that can be rendered on the server and streamed to the client. RSCs enable [ Server Actions ](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations#with-client-components), a new way to call server-side code directly from the client just like any other function with end-to-end type-safety. This combination opens the door to a new way of building AI applications, allowing the large language model (LLM) to generate and stream UI directly from the server to the client.
## AI SDK RSC Functions
AI SDK RSC has various functions designed to help you build AI-native applications with React Server Components. These functions:
1. Provide abstractions for building Generative UI applications.
- [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui): calls a model and allows it to respond with React Server Components.
- [`useUIState`](/docs/reference/ai-sdk-rsc/use-ui-state): returns the current UI state and a function to update the UI State (like React's `useState`). UI State is the visual representation of the AI state.
- [`useAIState`](/docs/reference/ai-sdk-rsc/use-ai-state): returns the current AI state and a function to update the AI State (like React's `useState`). The AI state is intended to contain context and information shared with the AI model, such as system messages, function responses, and other relevant data.
- [`useActions`](/docs/reference/ai-sdk-rsc/use-actions): provides access to your Server Actions from the client. This is particularly useful for building interfaces that require user interactions with the server.
- [`createAI`](/docs/reference/ai-sdk-rsc/create-ai): creates a client-server context provider that can be used to wrap parts of your application tree to easily manage both UI and AI states of your application.
2. Make it simple to work with streamable values between the server and client.
- [`createStreamableValue`](/docs/reference/ai-sdk-rsc/create-streamable-value): creates a stream that sends values from the server to the client. The value can be any serializable data.
- [`readStreamableValue`](/docs/reference/ai-sdk-rsc/read-streamable-value): reads a streamable value from the client that was originally created using `createStreamableValue`.
- [`createStreamableUI`](/docs/reference/ai-sdk-rsc/create-streamable-ui): creates a stream that sends UI from the server to the client.
- [`useStreamableValue`](/docs/reference/ai-sdk-rsc/use-streamable-value): accepts a streamable value created using `createStreamableValue` and returns the current value, error, and pending state.
## Templates
Check out the following templates to see AI SDK RSC in action.
## API Reference
Please check out the [AI SDK RSC API Reference](/docs/reference/ai-sdk-rsc) for more details on each function.
---
title: Streaming React Components
description: Overview of streaming RSCs
---
# Streaming React Components
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
The RSC API allows you to stream React components from the server to the client with the [`streamUI`](/docs/reference/ai-sdk-rsc/stream-ui) function. This is useful when you want to go beyond raw text and stream components to the client in real-time.
Similar to [ AI SDK Core ](/docs/ai-sdk-core/overview) APIs (like [ `streamText` ](/docs/reference/ai-sdk-core/stream-text) and [ `streamObject` ](/docs/reference/ai-sdk-core/stream-object)), `streamUI` provides a single function to call a model and allow it to respond with React Server Components.
It supports the same model interfaces as AI SDK Core APIs.
### Concepts
To give the model the ability to respond to a user's prompt with a React component, you can leverage [tools](/docs/ai-sdk-core/tools-and-tool-calling).
Remember, tools are like programs you can give to the model, and the model can
decide if and when to use them based on the context of the conversation.
With the `streamUI` function, **you provide tools that return React components**. With the ability to stream components, the model is akin to a dynamic router that is able to understand the user's intention and display relevant UI.
At a high level, the `streamUI` works like other AI SDK Core functions: you can provide the model with a prompt or some conversation history and, optionally, some tools. If the model decides, based on the context of the conversation, to call a tool, it will generate a tool call. The `streamUI` function will then run the respective tool, returning a React component. If the model doesn't have a relevant tool to use, it will return a text generation, which will be passed to the `text` function, for you to handle (render and return as a React component).
Remember, the `streamUI` function must return a React component.
```tsx
const result = await streamUI({
model: openai('gpt-4o'),
prompt: 'Get the weather for San Francisco',
  text: ({ content }) => <div>{content}</div>,
tools: {},
});
```
This example calls the `streamUI` function using OpenAI's `gpt-4o` model, passes a prompt, specifies how the model's plain text response (`content`) should be rendered, and then provides an empty object for tools. Even though this example does not define any tools, it will stream the model's response as a `div` rather than plain text.
### Adding A Tool
Using tools with `streamUI` is similar to how you use tools with `generateText` and `streamText`.
A tool is an object that has:
- `description`: a string telling the model what the tool does and when to use it
- `parameters`: a Zod schema describing what the tool needs in order to run
- `generate`: an asynchronous function that will be run if the model calls the tool. This must return a React component
Let's expand the previous example to add a tool.
```tsx highlight="6-14"
const result = await streamUI({
model: openai('gpt-4o'),
prompt: 'Get the weather for San Francisco',
  text: ({ content }) => <div>{content}</div>,
tools: {
getWeather: {
description: 'Get the weather for a location',
parameters: z.object({ location: z.string() }),
generate: async function* ({ location }) {
        yield <LoadingComponent />;
        const weather = await getWeather(location);
        return <WeatherComponent weather={weather} location={location} />;
},
},
},
});
```
This tool would be run if the user asks for the weather for their location. If the user hasn't specified a location, the model will ask for it before calling the tool. When the model calls the tool, the generate function will initially return a loading component. This component will show until the awaited call to `getWeather` is resolved, at which point the model will stream the `<WeatherComponent />` to the user.
Note: This example uses a [ generator function
](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function*)
(`function*`), which allows you to pause its execution and return a value,
then resume from where it left off on the next call. This is useful for
handling data streams, as you can fetch and return data from an asynchronous
source like an API, then resume the function to fetch the next chunk when
needed. By yielding values one at a time, generator functions enable efficient
processing of streaming data without blocking the main thread.
## Using `streamUI` with Next.js
Let's see how you can use the example above in a Next.js application.
To use `streamUI` in a Next.js application, you will need two things:
1. A Server Action (where you will call `streamUI`)
2. A page to call the Server Action and render the resulting components
### Step 1: Create a Server Action
Server Actions are server-side functions that you can call directly from the
frontend. For more info, see [the
documentation](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations#with-client-components).
Create a Server Action at `app/actions.tsx` and add the following code:
```tsx filename="app/actions.tsx"
'use server';
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const LoadingComponent = () => (
  <div className="animate-pulse p-4">getting weather...</div>
);
const getWeather = async (location: string) => {
  // simulate fetching the weather from an external source
  await new Promise(resolve => setTimeout(resolve, 2000));
  return '82°F and sunny'; // placeholder weather data
};
interface WeatherProps {
  location: string;
  weather: string;
}
const WeatherComponent = (props: WeatherProps) => (
  <div className="border border-neutral-200 p-4 rounded-lg max-w-fit">
    The weather in {props.location} is {props.weather}
  </div>
);
export async function streamComponent() {
  const result = await streamUI({
    model: openai('gpt-4o'),
    prompt: 'Get the weather for San Francisco',
    text: ({ content }) => <div>{content}</div>,
    tools: {
      getWeather: {
        description: 'Get the weather for a location',
        parameters: z.object({
          location: z.string(),
        }),
        generate: async function* ({ location }) {
          yield <LoadingComponent />;
          const weather = await getWeather(location);
          return <WeatherComponent weather={weather} location={location} />;
        },
      },
    },
  });
  return result.value;
}
```
The `getWeather` tool should look familiar as it is identical to the example in the previous section. In order for this tool to work:
1. First define a `LoadingComponent`, which renders a pulsing `div` that will show some loading text.
2. Next, define a `getWeather` function that will timeout for 2 seconds (to simulate fetching the weather externally) before returning the "weather" for a `location`. Note: you could run any asynchronous TypeScript code here.
3. Finally, define a `WeatherComponent` which takes in `location` and `weather` as props, which are then rendered within a `div`.
Your Server Action is an asynchronous function called `streamComponent` that takes no inputs, and returns a `ReactNode`. Within the action, you call the `streamUI` function, specifying the model (`gpt-4o`), the prompt, the component that should be rendered if the model chooses to return text, and finally, your `getWeather` tool. Last but not least, you return the resulting component generated by the model with `result.value`.
To call this Server Action and display the resulting React Component, you will need a page.
### Step 2: Create a Page
Create or update your root page (`app/page.tsx`) with the following code:
```tsx filename="app/page.tsx"
'use client';
import { useState } from 'react';
import { Button } from '@/components/ui/button';
import { streamComponent } from './actions';
export default function Page() {
  const [component, setComponent] = useState<React.ReactNode>();
  return (
    <div>
      <form
        onSubmit={async e => {
          e.preventDefault();
          setComponent(await streamComponent());
        }}
      >
        <Button>Stream Component</Button>
      </form>
      <div>{component}</div>
    </div>
  );
}
}
```
This page is first marked as a client component with the `"use client";` directive, given that it will be using hooks and interactivity. On the page, you render a form. When that form is submitted, you call the `streamComponent` action created in the previous step (just like any other function). The `streamComponent` action returns a `ReactNode` that you can then render on the page using React state (`setComponent`).
## Going beyond a single prompt
You can now allow the model to respond to your prompt with a React component. However, this example is limited to a static prompt that is set within your Server Action. You could make this example interactive by turning it into a chatbot.
Learn how to stream React components with the Next.js App Router using `streamUI` with this [example](https://sdk.vercel.ai/examples/next-app/interface/route-components).
---
title: Managing Generative UI State
description: Overview of the AI and UI states
---
# Managing Generative UI State
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
State is an essential part of any application. State is particularly important in AI applications as it is passed to large language models (LLMs) on each request to ensure they have the necessary context to produce a great generation. Traditional chatbots are text-based and have a structure that mirrors that of any chat application.
For example, in a chatbot, state is an array of `messages` where each `message` has:
- `id`: a unique identifier
- `role`: who sent the message (user/assistant/system/tool)
- `content`: the content of the message
This state can be rendered in the UI and sent to the model without any modifications.
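As a sketch, that state could be typed like this (the type names are illustrative):
```ts
// Illustrative types for the chat state described above
type ChatMessage = {
  id: string; // unique identifier
  role: 'user' | 'assistant' | 'system' | 'tool'; // who sent the message
  content: string; // the content of the message
};
type ChatState = ChatMessage[];
```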
With Generative UI, the model can now return a React component, rather than a plain text message. The client can render that component without issue, but that state can't be sent back to the model because React components aren't serialisable. So, what can you do?
**The solution is to split the state in two, where one (AI State) becomes a proxy for the other (UI State)**.
One way to understand this concept is through a Lego analogy. Imagine a 10,000 piece Lego model that, once built, cannot be easily transported because it is fragile. By taking the model apart, it can be easily transported, and then rebuilt following the steps outlined in the instructions pamphlet. In this way, the instructions pamphlet is a proxy to the physical structure. Similarly, AI State provides a serialisable (JSON) representation of your UI that can be passed back and forth to the model.
## What is AI and UI State?
The RSC API simplifies how you manage AI State and UI State, providing a robust way to keep them in sync between your database, server and client.
### AI State
AI State refers to the state of your application in a serialisable format that will be used on the server and can be shared with the language model.
For a chat app, the AI State is the conversation history (messages) between the user and the assistant. Components generated by the model would be represented in a JSON format as a tool alongside any necessary props. AI State can also be used to store other values and meta information such as `createdAt` for each message and `chatId` for each conversation. The LLM reads this history so it can generate the next message. This state serves as the source of truth for the current application state.
**Note**: AI state can be accessed/modified from both the server and the
client.
### UI State
UI State refers to the state of your application that is rendered on the client. It is a fully client-side state (similar to `useState`) that can store anything from JavaScript values to React elements. UI state is a list of actual UI elements that are rendered on the client.
**Note**: UI State can only be accessed client-side.
## Using AI / UI State
### Creating the AI Context
AI SDK RSC simplifies managing AI and UI state across your application by providing several hooks. These hooks are powered by [ React context ](https://react.dev/reference/react/hooks#context-hooks) under the hood.
Notably, this means you do not have to pass the message history to the server explicitly for each request. You also can access and update your application state in any child component of the context provider. As you begin building [multistep generative interfaces](/docs/ai-sdk-rsc/multistep-interfaces), this will be particularly helpful.
To use `ai/rsc` to manage AI and UI State in your application, you can create a React context using [`createAI`](/docs/reference/ai-sdk-rsc/create-ai):
```tsx filename='app/actions.tsx'
// Define the AI state and UI state types
export type ServerMessage = {
role: 'user' | 'assistant';
content: string;
};
export type ClientMessage = {
id: string;
role: 'user' | 'assistant';
display: ReactNode;
};
export const sendMessage = async (input: string): Promise<ClientMessage> => {
"use server"
...
}
```
```tsx filename='app/ai.ts'
import { createAI } from 'ai/rsc';
import { ClientMessage, ServerMessage, sendMessage } from './actions';
export type AIState = ServerMessage[];
export type UIState = ClientMessage[];
// Create the AI provider with the initial states and allowed actions
export const AI = createAI({
initialAIState: [],
initialUIState: [],
actions: {
sendMessage,
},
});
```
You must pass Server Actions to the `actions` object.
In this example, you define types for AI State and UI State, respectively.
Next, wrap your application with your newly created context. With that, you can get and set AI and UI State across your entire application.
```tsx filename='app/layout.tsx'
import { type ReactNode } from 'react';
import { AI } from './ai';
export default function RootLayout({
children,
}: Readonly<{ children: ReactNode }>) {
  return (
    <AI>
      <html lang="en">
        <body>{children}</body>
      </html>
    </AI>
  );
}
```
## Reading UI State in Client
The UI state can be accessed in Client Components using the [`useUIState`](/docs/reference/ai-sdk-rsc/use-ui-state) hook provided by the RSC API. The hook returns the current UI state and a function to update the UI state like React's `useState`.
```tsx filename='app/page.tsx'
'use client';
import { useUIState } from 'ai/rsc';
export default function Page() {
const [messages, setMessages] = useUIState();
  return (
    <ul>
      {messages.map(message => (
        <li key={message.id}>{message.display}</li>
      ))}
    </ul>
  );
}
```
## Reading AI State in Client
The AI state can be accessed in Client Components using the [`useAIState`](/docs/reference/ai-sdk-rsc/use-ai-state) hook provided by the RSC API. The hook returns the current AI state.
```tsx filename='app/page.tsx'
'use client';
import { useAIState } from 'ai/rsc';
export default function Page() {
const [messages, setMessages] = useAIState();
  return (
    <ul>
      {messages.map((message, index) => (
        <li key={index}>{message.content}</li>
      ))}
    </ul>
  );
}
```
## Reading AI State on Server
The AI State can be accessed within any Server Action provided to the `createAI` context using the [`getAIState`](/docs/reference/ai-sdk-rsc/get-ai-state) function. It returns the current AI state as a read-only value:
```tsx filename='app/actions.ts'
import { getAIState } from 'ai/rsc';
export async function sendMessage(message: string) {
'use server';
const history = getAIState();
const response = await generateText({
model: openai('gpt-3.5-turbo'),
messages: [...history, { role: 'user', content: message }],
});
return response;
}
```
Remember, you can only access state within actions that have been passed to
the `createAI` context within the `actions` key.
## Updating AI State on Server
The AI State can also be updated from within your Server Action with the [`getMutableAIState`](/docs/reference/ai-sdk-rsc/get-mutable-ai-state) function. This function is similar to `getAIState`, but it returns the state with methods to read and update it:
```tsx filename='app/actions.ts'
import { getMutableAIState } from 'ai/rsc';
export async function sendMessage(message: string) {
'use server';
const history = getMutableAIState();
// Update the AI state with the new user message.
history.update([...history.get(), { role: 'user', content: message }]);
const response = await generateText({
model: openai('gpt-3.5-turbo'),
messages: history.get(),
});
// Update the AI state again with the response from the model.
history.done([...history.get(), { role: 'assistant', content: response }]);
return response;
}
```
It is important to update the AI State with new responses using `.update()`
and `.done()` to keep the conversation history in sync.
## Calling Server Actions from the Client
To call the `sendMessage` action from the client, you can use the [`useActions`](/docs/reference/ai-sdk-rsc/use-actions) hook. The hook returns all the available Actions that were provided to `createAI`:
```tsx filename='app/page.tsx'
'use client';
import { useActions, useUIState } from 'ai/rsc';
import { AI } from './ai';
export default function Page() {
const { sendMessage } = useActions();
const [messages, setMessages] = useUIState();
const handleSubmit = async event => {
event.preventDefault();
setMessages([
...messages,
{ id: Date.now(), role: 'user', display: event.target.message.value },
]);
const response = await sendMessage(event.target.message.value);
setMessages([
...messages,
{ id: Date.now(), role: 'assistant', display: response },
]);
};
  return (
    <>
      <ul>
        {messages.map(message => (
          <li key={message.id}>{message.display}</li>
        ))}
      </ul>
      <form onSubmit={handleSubmit}>
        <input type="text" name="message" />
        <button type="submit">Send</button>
      </form>
    </>
  );
}
```
When the user submits a message, the `sendMessage` action is called with the message content. The response from the action is then added to the UI state, updating the displayed messages.
Important! Don't forget to update the UI State after you call your Server Action; otherwise, the streamed component will not show in the UI.
To learn more, check out this [example](/examples/next-app/state-management/ai-ui-states) on managing AI and UI state using `ai/rsc`.
---
Next, you will learn how you can save and restore state with `ai/rsc`.
---
title: Saving and Restoring States
description: Saving and restoring AI and UI states with onGetUIState and onSetAIState
---
# Saving and Restoring States
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
AI SDK RSC provides convenient methods for saving and restoring AI and UI state. This is useful for saving the state of your application after every model generation, and restoring it when the user revisits the generations.
## AI State
### Saving AI state
The AI state can be saved using the [`onSetAIState`](/docs/reference/ai-sdk-rsc/create-ai#on-set-ai-state) callback, which gets called whenever the AI state is updated. In the following example, you save the chat history to a database whenever the generation is marked as done.
```tsx filename='app/ai.ts'
export const AI = createAI({
actions: {
continueConversation,
},
onSetAIState: async ({ state, done }) => {
'use server';
if (done) {
saveChatToDB(state);
}
},
});
```
### Restoring AI state
The AI state can be restored using the [`initialAIState`](/docs/reference/ai-sdk-rsc/create-ai#initial-ai-state) prop passed to the context provider created by the [`createAI`](/docs/reference/ai-sdk-rsc/create-ai) function. In the following example, you restore the chat history from a database when the component is mounted.
```tsx file='app/layout.tsx'
import { ReactNode } from 'react';
import { AI } from './ai';
export default async function RootLayout({
children,
}: Readonly<{ children: ReactNode }>) {
const chat = await loadChatFromDB();
  return (
    <AI initialAIState={chat}>
      {children}
    </AI>
  );
}
```
## UI State
### Saving UI state
The UI state cannot be saved directly, since the contents aren't yet serializable. Instead, you can use the AI state as a proxy to store details about the UI state and use it to restore the UI state when needed.
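For example, instead of storing the rendered React element itself, you can record a serializable description of it in the AI state (a sketch; the message shape and component name are illustrative):
```tsx
// Inside a Server Action, after streaming a component to the client:
// store a serializable record of what was rendered so the UI can be
// rebuilt from it later.
history.done([
  ...history.get(),
  {
    role: 'assistant',
    content: JSON.stringify({
      component: 'WeatherComponent', // which component was rendered
      props: { location, weather }, // the props needed to re-render it
    }),
  },
]);
```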
### Restoring UI state
The UI state can be restored using the AI state as a proxy. In the following example, you restore the chat history from the AI state when the component is mounted. You use the [`onGetUIState`](/docs/reference/ai-sdk-rsc/create-ai#on-get-ui-state) callback to listen for SSR events and restore the UI state.
```tsx filename='app/ai.ts'
export const AI = createAI({
actions: {
continueConversation,
},
onGetUIState: async () => {
'use server';
const historyFromDB: ServerMessage[] = await loadChatFromDB();
const historyFromApp: ServerMessage[] = getAIState();
// If the history from the database is different from the
// history in the app, they're not in sync so return the UIState
// based on the history from the database
if (historyFromDB.length !== historyFromApp.length) {
return historyFromDB.map(({ role, content }) => ({
id: generateId(),
role,
        display:
          role === 'function' ? (
            // placeholder: render the component that matches this tool result
            <Component {...JSON.parse(content)} />
          ) : (
            content
          ),
}));
}
},
});
```
To learn more, check out this [example](/examples/next-app/state-management/save-and-restore-states) that persists and restores states in your Next.js application.
---
Next, you will learn how you can use `ai/rsc` functions like `useActions` and `useUIState` to create interactive, multistep interfaces.
---
title: Multistep Interfaces
description: Overview of Building Multistep Interfaces with AI SDK RSC
---
# Designing Multistep Interfaces
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
Multistep interfaces refer to user interfaces that require multiple independent steps to be executed in order to complete a specific task.
For example, if you wanted to build a Generative UI chatbot capable of booking flights, it could have three steps:
- Search all flights
- Pick flight
- Check availability
To build this kind of application you will leverage two concepts, **tool composition** and **application context**.
**Tool composition** is the process of combining multiple [tools](/docs/ai-sdk-core/tools-and-tool-calling) to create a new tool. This is a powerful concept that allows you to break down complex tasks into smaller, more manageable steps. In the example above, _"search all flights"_, _"pick flight"_, and _"check availability"_ come together to create a holistic _"book flight"_ tool.
**Application context** refers to the state of the application at any given point in time. This includes the user's input, the output of the language model, and any other relevant information. In the example above, the flight selected in _"pick flight"_ would be used as context necessary to complete the _"check availability"_ task.
## Overview
In order to build a multistep interface with `ai/rsc`, you will need a few things:
- A Server Action that calls and returns the result from the `streamUI` function
- Tool(s) (sub-tasks necessary to complete your overall task)
- React component(s) that should be rendered when the tool is called
- A page to render your chatbot
The general flow that you will follow is:
- User sends a message (calls your Server Action with `useActions`, passing the message as an input)
- Message is appended to the AI State and then passed to the model alongside a number of tools
- Model can decide to call a tool, which will render a React component (e.g. a list of flights) in the UI
- Within that component, you can add interactivity by using `useActions` to call the model with your Server Action and `useUIState` to append the model's response (the next component) to the UI State
- And so on...
## Implementation
The turn-by-turn implementation is the simplest form of multistep interfaces. In this implementation, the user and the model take turns during the conversation. For every user input, the model generates a response, and the conversation continues in this turn-by-turn fashion.
In the following example, you specify two tools (`searchFlights` and `lookupFlight`) that the model can use to search for flights and lookup details for a specific flight.
```tsx filename="app/actions.tsx"
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const searchFlights = async (
source: string,
destination: string,
date: string,
) => {
return [
{
id: '1',
flightNumber: 'AA123',
},
{
id: '2',
flightNumber: 'AA456',
},
];
};
const lookupFlight = async (flightNumber: string) => {
return {
flightNumber: flightNumber,
departureTime: '10:00 AM',
arrivalTime: '12:00 PM',
};
};
export async function submitUserMessage(input: string) {
'use server';
const ui = await streamUI({
model: openai('gpt-4o'),
system: 'you are a flight booking assistant',
prompt: input,
    text: async ({ content }) => <div>{content}</div>,
tools: {
searchFlights: {
description: 'search for flights',
parameters: z.object({
source: z.string().describe('The origin of the flight'),
destination: z.string().describe('The destination of the flight'),
date: z.string().describe('The date of the flight'),
}),
generate: async function* ({ source, destination, date }) {
yield `Searching for flights from ${source} to ${destination} on ${date}...`;
const results = await searchFlights(source, destination, date);
        return (
          <div>
            {results.map(result => (
              <div key={result.id}>{result.flightNumber}</div>
            ))}
          </div>
        );
},
},
lookupFlight: {
description: 'lookup details for a flight',
parameters: z.object({
flightNumber: z.string().describe('The flight number'),
}),
generate: async function* ({ flightNumber }) {
yield `Looking up details for flight ${flightNumber}...`;
const details = await lookupFlight(flightNumber);
        return (
          <div>
            <div>Flight Number: {details.flightNumber}</div>
            <div>Departure Time: {details.departureTime}</div>
            <div>Arrival Time: {details.arrivalTime}</div>
          </div>
        );
},
},
},
});
return ui.value;
}
```
Next, create an AI context that will hold the UI State and AI State.
```ts filename='app/ai.ts'
import { createAI } from 'ai/rsc';
import { submitUserMessage } from './actions';
export const AI = createAI({
initialUIState: [],
initialAIState: [],
actions: {
submitUserMessage,
},
});
```
Next, wrap your application with your newly created context.
```tsx filename='app/layout.tsx'
import { type ReactNode } from 'react';
import { AI } from './ai';
export default function RootLayout({
children,
}: Readonly<{ children: ReactNode }>) {
  return (
    <AI>
      <html lang="en">
        <body>{children}</body>
      </html>
    </AI>
  );
}
```
To call your Server Action, update your root page with the following:
```tsx filename="app/page.tsx"
'use client';
import { useState } from 'react';
import { AI } from './ai';
import { useActions, useUIState } from 'ai/rsc';
export default function Page() {
const [input, setInput] = useState('');
const [conversation, setConversation] = useUIState();
const { submitUserMessage } = useActions();
  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    setInput('');
    setConversation(currentConversation => [
      ...currentConversation,
      <div key={currentConversation.length}>{input}</div>,
    ]);
    const message = await submitUserMessage(input);
    setConversation(currentConversation => [...currentConversation, message]);
  };
  return (
    <div>
      {conversation.map((message: React.ReactNode, i: number) => (
        <div key={i}>{message}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={e => setInput(e.target.value)} />
        <button type="submit">Send Message</button>
      </form>
    </div>
  );
}
```
This page pulls in the current UI State using the `useUIState` hook, which is then mapped over and rendered in the UI. To access the Server Action, you use the `useActions` hook which will return all actions that were passed to the `actions` key of the `createAI` function in your `actions.tsx` file. Finally, you call the `submitUserMessage` function like any other TypeScript function. This function returns a React component (`message`) that is then rendered in the UI by updating the UI State with `setConversation`.
In this example, to call the next tool, the user must respond with plain text. **Given you are streaming a React component, you can add a button to trigger the next step in the conversation**.
To add user interaction, you will have to convert the component into a client component and use the `useActions` hook to trigger the next step in the conversation.
```tsx filename="components/flights.tsx"
'use client';
import { useActions, useUIState } from 'ai/rsc';
import { ReactNode } from 'react';
interface FlightsProps {
flights: { id: string; flightNumber: string }[];
}
export const Flights = ({ flights }: FlightsProps) => {
const { submitUserMessage } = useActions();
const [_, setMessages] = useUIState();
  return (
    <div>
      {flights.map(flight => (
        <div
          key={flight.id}
          onClick={async () => {
            const display = await submitUserMessage(`lookup flight ${flight.flightNumber}`);
            setMessages((messages: ReactNode[]) => [...messages, display]);
          }}
        >
          {flight.flightNumber}
        </div>
      ))}
    </div>
  );
};
```
Now, update your `searchFlights` tool to render the new `<Flights />` component.
```tsx filename="actions.tsx"
...
searchFlights: {
description: 'search for flights',
parameters: z.object({
source: z.string().describe('The origin of the flight'),
destination: z.string().describe('The destination of the flight'),
date: z.string().describe('The date of the flight'),
}),
generate: async function* ({ source, destination, date }) {
yield `Searching for flights from ${source} to ${destination} on ${date}...`;
const results = await searchFlights(source, destination, date);
      return <Flights flights={results} />;
},
}
...
```
In the above example, the `Flights` component is used to display the search results. When the user clicks on a flight number, the `submitUserMessage` action is called with a message asking the model to look up that flight. The model then calls the `lookupFlight` tool with the flight number as a parameter, triggering the next step in the conversation.
Learn more about tool calling in Next.js App Router by checking out examples [here](/examples/next-app/tools).
---
title: Streaming Values
description: Overview of streaming RSCs
---
# Streaming Values
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
The RSC API provides several utility functions to allow you to stream values from the server to the client. This is useful when you need more granular control over what you are streaming and how you are streaming it.
These utilities can also be paired with [AI SDK Core](/docs/ai-sdk-core)
functions like [`streamText`](/docs/reference/ai-sdk-core/stream-text) and
[`streamObject`](/docs/reference/ai-sdk-core/stream-object) to easily stream
LLM generations from the server to the client.
There are two functions provided by the RSC API that allow you to create streamable values:
- [`createStreamableValue`](/docs/reference/ai-sdk-rsc/create-streamable-value) - creates a streamable (serializable) value, with full control over how you create, update, and close the stream.
- [`createStreamableUI`](/docs/reference/ai-sdk-rsc/create-streamable-ui) - creates a streamable React component, with full control over how you create, update, and close the stream.
## `createStreamableValue`
The RSC API allows you to stream serializable JavaScript values from the server to the client using [`createStreamableValue`](/docs/reference/ai-sdk-rsc/create-streamable-value), such as strings, numbers, objects, and arrays.
This is useful when you want to stream:
- Text generations from the language model in real-time.
- Buffer values of image and audio generations from multi-modal models.
- Progress updates from multi-step agent runs.
## Creating a Streamable Value
You can import `createStreamableValue` from `ai/rsc` and use it to create a streamable value.
```tsx file='app/actions.ts'
'use server';
import { createStreamableValue } from 'ai/rsc';
export const runThread = async () => {
const streamableStatus = createStreamableValue('thread.init');
setTimeout(() => {
streamableStatus.update('thread.run.create');
streamableStatus.update('thread.run.update');
streamableStatus.update('thread.run.end');
streamableStatus.done('thread.end');
}, 1000);
return {
status: streamableStatus.value,
};
};
```
## Reading a Streamable Value
You can read streamable values on the client using `readStreamableValue`. It returns an async iterator that yields the value of the streamable as it is updated:
```tsx file='app/page.tsx'
'use client';

import { readStreamableValue } from 'ai/rsc';
import { runThread } from '@/actions';

export default function Page() {
  return (
    <button
      onClick={async () => {
        const { status } = await runThread();

        // The async iterator yields each value of the streamable as it is updated
        for await (const value of readStreamableValue(status)) {
          console.log(value);
        }
      }}
    >
      Ask
    </button>
  );
}
```
Learn how to stream a text generation (with `streamText`) using the Next.js App Router and `createStreamableValue` in this [example](/examples/next-app/basics/streaming-text-generation).
## `createStreamableUI`
`createStreamableUI` creates a stream that holds a React component. Unlike AI SDK Core APIs, this function does not call a large language model. Instead, it provides a primitive that can be used to have granular control over streaming a React component.
## Using `createStreamableUI`
Let's look at how you can use the `createStreamableUI` function with a Server Action.
```tsx filename='app/actions.tsx'
'use server';
import { createStreamableUI } from 'ai/rsc';
export async function getWeather() {
const weatherUI = createStreamableUI();
  weatherUI.update(<div>Loading...</div>);

  setTimeout(() => {
    weatherUI.done(<div>It's a sunny day!</div>);
  }, 1000);
return weatherUI.value;
}
```
First, you create a streamable UI with an empty state and then update it with a loading message. After 1 second, you mark the stream as done, passing in the actual weather information as its final value. The `.value` property contains the actual UI that can be sent to the client.
## Reading a Streamable UI
On the client side, you can call the `getWeather` Server Action and render the returned UI like any other React component.
```tsx filename='app/page.tsx'
'use client';
import { useState } from 'react';
import { readStreamableValue } from 'ai/rsc';
import { getWeather } from '@/actions';
export default function Page() {
const [weather, setWeather] = useState(null);
  return (
    <div>
      <button
        onClick={async () => {
          // Calling the Server Action returns the streamable UI value
          setWeather(await getWeather());
        }}
      >
        What's the weather?
      </button>

      {weather}
    </div>
  );
}
```
When the button is clicked, the `getWeather` function is called, and the returned UI is set to the `weather` state and rendered on the page. Users will see the loading message first and then the actual weather information after 1 second.
Learn more about handling multiple streams in a single request in the [Multiple Streamables](/docs/advanced/multiple-streamables) guide.
Learn more about handling state for more complex use cases with [ AI/UI State ](/docs/ai-sdk-rsc/generative-ui-state).
---
title: Handling Loading State
description: Overview of handling loading state with AI SDK RSC
---
# Handling Loading State
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
Given that responses from language models can often take a while to complete, it's crucial to be able to show loading state to users. This provides visual feedback that the system is working on their request and helps maintain a positive user experience.
There are three approaches you can take to handle loading state with the AI SDK RSC:
- Managing loading state similar to how you would in a traditional Next.js application. This involves setting a loading state variable in the client and updating it when the response is received.
- Streaming loading state from the server to the client. This approach allows you to track loading state on a more granular level and provide more detailed feedback to the user.
- Streaming loading component from the server to the client. This approach allows you to stream a React Server Component to the client while awaiting the model's response.
## Handling Loading State on the Client
### Client
Let's create a simple Next.js page that will call the `generateResponse` function when the form is submitted. The function will take in the user's prompt (`input`) and then generate a response (`response`). To handle the loading state, use the `loading` state variable. When the form is submitted, set `loading` to `true`, and when the response is received, set it back to `false`. While the response is being streamed, the input field will be disabled.
```tsx filename='app/page.tsx'
'use client';
import { useState } from 'react';
import { generateResponse } from './actions';
import { readStreamableValue } from 'ai/rsc';
// Force the page to be dynamic and allow streaming responses up to 30 seconds
export const maxDuration = 30;
export default function Home() {
const [input, setInput] = useState('');
const [generation, setGeneration] = useState('');
const [loading, setLoading] = useState(false);
  return (
    <div>
      <div>{generation}</div>
      <form
        onSubmit={async e => {
          e.preventDefault();
          setLoading(true);
          const response = await generateResponse(input);
          let text = '';
          for await (const delta of readStreamableValue(response)) {
            text = `${text}${delta}`;
            setGeneration(text);
          }
          setInput('');
          setLoading(false);
        }}
      >
        <input
          value={input}
          disabled={loading}
          onChange={e => setInput(e.target.value)}
        />
        <button>Send Message</button>
      </form>
    </div>
  );
}
```
### Server
Now let's implement the `generateResponse` function. Use the `streamText` function to generate a response to the input.
```typescript filename='app/actions.ts'
'use server';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createStreamableValue } from 'ai/rsc';
export async function generateResponse(prompt: string) {
const stream = createStreamableValue();
(async () => {
const { textStream } = streamText({
model: openai('gpt-4o'),
prompt,
});
for await (const text of textStream) {
stream.update(text);
}
stream.done();
})();
return stream.value;
}
```
## Streaming Loading State from the Server
If you are looking to track loading state on a more granular level, you can create a new streamable value to store a custom variable and then read this on the frontend. Let's update the example to create a new streamable value for tracking loading state:
### Server
```typescript filename='app/actions.ts' highlight='9,22,25'
'use server';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createStreamableValue } from 'ai/rsc';
export async function generateResponse(prompt: string) {
const stream = createStreamableValue();
const loadingState = createStreamableValue({ loading: true });
(async () => {
const { textStream } = streamText({
model: openai('gpt-4o'),
prompt,
});
for await (const text of textStream) {
stream.update(text);
}
stream.done();
loadingState.done({ loading: false });
})();
return { response: stream.value, loadingState: loadingState.value };
}
```
### Client
```tsx filename='app/page.tsx' highlight="22,30-34"
'use client';
import { useState } from 'react';
import { generateResponse } from './actions';
import { readStreamableValue } from 'ai/rsc';
// Force the page to be dynamic and allow streaming responses up to 30 seconds
export const maxDuration = 30;
export default function Home() {
const [input, setInput] = useState('');
const [generation, setGeneration] = useState('');
const [loading, setLoading] = useState(false);
return (
{generation}
);
}
```
This allows you to provide more detailed feedback about the generation process to your users.
## Streaming Loading Components with `streamUI`
If you are using the [ `streamUI` ](/docs/reference/ai-sdk-rsc/stream-ui) function, you can stream the loading state to the client in the form of a React component. `streamUI` supports the usage of [ JavaScript generator functions ](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/function*), which allow you to yield some value (in this case a React component) while some other blocking work completes.
## Server
```ts
'use server';
import { openai } from '@ai-sdk/openai';
import { streamUI } from 'ai/rsc';
export async function generateResponse(prompt: string) {
const result = await streamUI({
model: openai('gpt-4o'),
prompt,
text: async function* ({ content }) {
    yield <div>loading...</div>;
    return <div>{content}</div>;
},
});
return result.value;
}
```
Remember to update the file from `.ts` to `.tsx` because you are defining a
React component in the `streamUI` function.
## Client
```tsx
'use client';
import { useState } from 'react';
import { generateResponse } from './actions';
import { readStreamableValue } from 'ai/rsc';
// Force the page to be dynamic and allow streaming responses up to 30 seconds
export const maxDuration = 30;
export default function Home() {
const [input, setInput] = useState('');
const [generation, setGeneration] = useState();
return (
{generation}
);
}
```
---
title: Error Handling
description: Learn how to handle errors with the AI SDK.
---
# Error Handling
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
Two categories of errors can occur when working with the RSC API: errors while streaming user interfaces and errors while streaming other values.
## Handling UI Errors
To handle errors while generating UI, the [`streamableUI`](/docs/reference/ai-sdk-rsc/create-streamable-ui) object exposes an `error()` method.
```tsx filename='app/actions.tsx'
'use server';
import { createStreamableUI } from 'ai/rsc';
export async function getStreamedUI() {
const ui = createStreamableUI();
(async () => {
    ui.update(<div>loading</div>);
    const data = await fetchData();
    ui.done(<div>{data}</div>);
  })().catch(e => {
    ui.error(<div>Error: {e.message}</div>);
});
return ui.value;
}
```
With this method, you can catch any error with the stream, and return relevant UI. On the client, you can also use a [React Error Boundary](https://react.dev/reference/react/Component#catching-rendering-errors-with-an-error-boundary) to wrap the streamed component and catch any additional errors.
```tsx filename='app/page.tsx'
'use client';

import { getStreamedUI } from '@/actions';
import { useState } from 'react';
import { ErrorBoundary } from './ErrorBoundary';
export default function Page() {
const [streamedUI, setStreamedUI] = useState(null);
  return (
    <div>
      <button
        onClick={async () => {
          const newUI = await getStreamedUI();
          setStreamedUI(newUI);
        }}
      >
        Get UI
      </button>
      <ErrorBoundary>{streamedUI}</ErrorBoundary>
    </div>
  );
}
```
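The example above imports an `ErrorBoundary` component from `./ErrorBoundary`. If your project does not already have one, a minimal class-based boundary might look like the following sketch; the fallback message is just a placeholder:

```tsx filename='app/ErrorBoundary.tsx'
'use client';

import { Component, type ReactNode } from 'react';

interface Props {
  children: ReactNode;
}

interface State {
  hasError: boolean;
}

export class ErrorBoundary extends Component<Props, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError() {
    // Render the fallback UI on the next render after a child throws
    return { hasError: true };
  }

  render() {
    if (this.state.hasError) {
      return <div>Something went wrong while rendering the streamed UI.</div>;
    }

    return this.props.children;
  }
}
```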
## Handling Other Errors
To handle other errors while streaming, you can return an error object that the receiver can use to determine why the failure occurred.
```tsx filename='app/actions.tsx'
'use server';
import { createStreamableValue } from 'ai/rsc';
import { fetchData, emptyData } from '../utils/data';
export const getStreamedData = async () => {
const streamableData = createStreamableValue(emptyData);
try {
  (async () => {
const data1 = await fetchData();
streamableData.update(data1);
const data2 = await fetchData();
streamableData.update(data2);
const data3 = await fetchData();
streamableData.done(data3);
})();
return { data: streamableData.value };
} catch (e) {
return { error: e.message };
}
};
```
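On the client, you can then check for the `error` field before reading the streamable value. The following is a minimal sketch that assumes the `getStreamedData` action above; the surrounding markup is illustrative:

```tsx filename='app/page.tsx'
'use client';

import { useState } from 'react';
import { readStreamableValue } from 'ai/rsc';
import { getStreamedData } from './actions';

export default function Page() {
  const [data, setData] = useState<unknown>(null);
  const [error, setError] = useState<string | null>(null);

  return (
    <div>
      <button
        onClick={async () => {
          const result = await getStreamedData();

          // The action returns either `{ error }` or `{ data }`
          if ('error' in result) {
            setError(result.error);
            return;
          }

          for await (const value of readStreamableValue(result.data)) {
            setData(value);
          }
        }}
      >
        Load data
      </button>

      {error ? <div>{error}</div> : <pre>{JSON.stringify(data, null, 2)}</pre>}
    </div>
  );
}
```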
---
title: Handling Authentication
description: Learn how to authenticate with the AI SDK.
---
# Authentication
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
The RSC API makes extensive use of [`Server Actions`](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations) to power streaming values and UI from the server.
Server Actions are exposed as public, unprotected endpoints. As a result, you should treat Server Actions as you would public-facing API endpoints and ensure that the user is authorized to perform the action before returning any data.
```tsx filename="app/actions.tsx"
'use server';
import { cookies } from 'next/headers';
import { createStreamableUI } from 'ai/rsc';
import { validateToken } from '../utils/auth';
export const getWeather = async () => {
const token = cookies().get('token');
if (!token || !validateToken(token)) {
return {
error: 'This action requires authentication',
};
}
const streamableDisplay = createStreamableUI(null);
  // Illustrative placeholders; stream whatever UI your action produces
  streamableDisplay.update(<div>Loading weather...</div>);
  streamableDisplay.done(<div>It's a sunny day!</div>);
return {
display: streamableDisplay.value,
};
};
```
---
title: Migrating from RSC to UI
description: Learn how to migrate from AI SDK RSC to AI SDK UI.
---
# Migrating from RSC to UI
This guide helps you migrate from AI SDK RSC to AI SDK UI.
## Background
The AI SDK has two packages that help you build the frontend for your applications – [AI SDK UI](/docs/ai-sdk-ui) and [AI SDK RSC](/docs/ai-sdk-rsc).
We introduced support for using [React Server Components](https://react.dev/reference/rsc/server-components) (RSC) within the AI SDK to simplify building generative user interfaces for frameworks that support RSC.
However, given we're pushing the boundaries of this technology, AI SDK RSC currently faces significant limitations that make it unsuitable for stable production use.
- It is not possible to abort a stream using server actions. This will be improved in future releases of React and Next.js [(1122)](https://github.com/vercel/ai/issues/1122).
- When using `createStreamableUI` and `streamUI`, components remount on `.done()`, causing them to flicker [(2939)](https://github.com/vercel/ai/issues/2939).
- Many suspense boundaries can lead to crashes [(2843)](https://github.com/vercel/ai/issues/2843).
- Using `createStreamableUI` can lead to quadratic data transfer. You can avoid this by using `createStreamableValue` instead and rendering the component client-side.
- Closed RSC streams cause update issues [(3007)](https://github.com/vercel/ai/issues/3007).
Due to these limitations, AI SDK RSC is marked as experimental, and we do not recommend using it for stable production environments.
As a result, we strongly recommend migrating to AI SDK UI, which has undergone extensive development to provide a more stable and production-grade experience.
In building [v0](https://v0.dev), we have invested considerable time exploring how to create the best chat experience on the web. AI SDK UI ships with many of these best practices and commonly used patterns like [language model middleware](/docs/ai-sdk-core/middleware), [multi-step tool calls](/docs/ai-sdk-core/tools-and-tool-calling#multi-step-calls), [attachments](/docs/ai-sdk-ui/chatbot#attachments-experimental), [telemetry](/docs/ai-sdk-core/telemetry), [provider registry](/docs/ai-sdk-core/provider-management#provider-registry), and many more. These features have been considerately designed into a neat abstraction that you can use to reliably integrate AI into your applications.
## Streaming Chat Completions
### Basic Setup
The `streamUI` function executes as part of a server action as illustrated below.
#### Before: Handle generation and rendering in a single server action
```tsx filename="@/app/actions.tsx"
import { openai } from '@ai-sdk/openai';
import { getMutableAIState, streamUI } from 'ai/rsc';
export async function sendMessage(message: string) {
'use server';
const messages = getMutableAIState('messages');
messages.update([...messages.get(), { role: 'user', content: message }]);
const { value: stream } = await streamUI({
model: openai('gpt-4o'),
system: 'you are a friendly assistant!',
messages: messages.get(),
text: async function* ({ content, done }) {
// process text
},
tools: {
// tool definitions
},
});
return stream;
}
```
#### Before: Call server action and update UI state
The chat interface calls the server action. The response is then saved using the `useUIState` hook.
```tsx filename="@/app/page.tsx"
'use client';
import { useState, ReactNode } from 'react';
import { useActions, useUIState } from 'ai/rsc';
export default function Page() {
const { sendMessage } = useActions();
const [input, setInput] = useState('');
const [messages, setMessages] = useUIState();
  return (
    <div>
      {messages.map(message => message)}

      <form
        onSubmit={async e => {
          e.preventDefault();
          const response = await sendMessage(input);
          setMessages(messages => [...messages, response]);
          setInput('');
        }}
      >
        <input value={input} onChange={e => setInput(e.target.value)} />
      </form>
    </div>
  );
}
```
The `streamUI` function combines generating text and rendering the user interface. To migrate to AI SDK UI, you need to **separate these concerns** – streaming generations with `streamText` and rendering the UI with `useChat`.
#### After: Replace server action with route handler
The `streamText` function executes as part of a route handler and streams the response to the client. The `useChat` hook on the client decodes this stream and renders the response within the chat interface.
```ts filename="@/app/api/chat/route.ts"
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
export async function POST(request) {
const { messages } = await request.json();
const result = streamText({
model: openai('gpt-4o'),
system: 'you are a friendly assistant!',
messages,
tools: {
// tool definitions
},
});
return result.toDataStreamResponse();
}
```
#### After: Update client to use chat hook
```tsx filename="@/app/page.tsx"
'use client';
import { useChat } from 'ai/react';
export default function Page() {
const { messages, input, setInput, handleSubmit } = useChat();
  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {message.role}: {message.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input value={input} onChange={e => setInput(e.target.value)} />
      </form>
    </div>
  );
}
```
### Parallel Tool Calls
In AI SDK RSC, `streamUI` does not support parallel tool calls. You will have to use a combination of `streamText`, `createStreamableUI` and `createStreamableValue`.
With AI SDK UI, `useChat` comes with built-in support for parallel tool calls. You can define multiple tools in `streamText` and have them called in parallel. The `useChat` hook will then handle the parallel tool calls for you automatically.
### Multi-Step Tool Calls
In AI SDK RSC, `streamUI` does not support multi-step tool calls. You will have to use a combination of `streamText`, `createStreamableUI` and `createStreamableValue`.
With AI SDK UI, `useChat` comes with built-in support for multi-step tool calls. You can set `maxSteps` in the `streamText` function to define the number of steps the language model can make in a single call. The `useChat` hook will then handle the multi-step tool calls for you automatically.
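For illustration, a route handler using `maxSteps` and more than one tool might look like the following sketch; the tool names, parameters, and implementations are placeholders:

```ts filename="@/app/api/chat/route.ts"
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export async function POST(request: Request) {
  const { messages } = await request.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    // Allow up to 5 generation/tool-call steps for a single request
    maxSteps: 5,
    tools: {
      // Placeholder tools; the model can call several of these in parallel
      getWeather: {
        description: 'Get the weather for a city',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => ({ city, temperature: 21 }),
      },
      getTime: {
        description: 'Get the current time',
        parameters: z.object({}),
        execute: async () => ({ time: new Date().toISOString() }),
      },
    },
  });

  return result.toDataStreamResponse();
}
```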
### Generative User Interfaces
The `streamUI` function uses `tools` as a way to execute functions based on user input and renders React components based on the function output to go beyond text in the chat interface.
#### Before: Render components within the server action and stream to client
```tsx filename="@/app/actions.tsx"
import { z } from 'zod';
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { getWeather } from '@/utils/queries';
import { Weather } from '@/components/weather';
const { value: stream } = await streamUI({
model: openai('gpt-4o'),
system: 'you are a friendly assistant!',
messages,
text: async function* ({ content, done }) {
// process text
},
tools: {
displayWeather: {
description: 'Display the weather for a location',
parameters: z.object({
latitude: z.number(),
longitude: z.number(),
}),
generate: async function* ({ latitude, longitude }) {
        yield <div>Loading weather...</div>;
        const { value, unit } = await getWeather({ latitude, longitude });
        return <Weather value={value} unit={unit} />;
},
},
},
});
```
As mentioned earlier, `streamUI` generates text and renders the React component in a single server action call.
#### After: Replace with route handler and stream props data to client
The `streamText` function streams the props data as a response to the client, while `useChat` decodes the stream as `toolInvocations` and renders the chat interface.
```ts filename="@/app/api/chat/route.ts"
import { z } from 'zod';
import { openai } from '@ai-sdk/openai';
import { getWeather } from '@/utils/queries';
import { streamText } from 'ai';
export async function POST(request) {
const { messages } = await request.json();
const result = streamText({
model: openai('gpt-4o'),
system: 'you are a friendly assistant!',
messages,
tools: {
displayWeather: {
description: 'Display the weather for a location',
parameters: z.object({
latitude: z.number(),
longitude: z.number(),
}),
execute: async function ({ latitude, longitude }) {
const props = await getWeather({ latitude, longitude });
return props;
},
},
},
});
return result.toDataStreamResponse();
}
```
#### After: Update client to use chat hook and render components using tool invocations
```tsx filename="@/app/page.tsx"
'use client';
import { useChat } from 'ai/react';
import { Weather } from '@/components/weather';
export default function Page() {
const { messages, input, setInput, handleSubmit } = useChat();
return (
{messages.map(message => (
{message.role}
{message.content}
{message.toolInvocations.map(toolInvocation => {
const { toolName, toolCallId, state } = toolInvocation;
if (state === 'result') {
const { result } = toolInvocation;
return (
{toolName === 'displayWeather' ? (
) : null}
);
} else {
return (
{toolName === 'displayWeather' ? (
Loading weather...
) : null}
);
}
})}
))}
);
}
```
### Handling Client Interactions
With AI SDK RSC, components streamed to the client can trigger subsequent generations by calling the relevant server action using the `useActions` hook. This is possible as long as the component is a descendant of the `<AI>` context provider.
#### Before: Use actions hook to send messages
```tsx filename="@/app/components/list-flights.tsx"
'use client';
import { useActions, useUIState } from 'ai/rsc';
export function ListFlights({ flights }) {
const { sendMessage } = useActions();
const [_, setMessages] = useUIState();
  return (
    <div>
      {flights.map(flight => (
        <div
          key={flight.id}
          onClick={async () => {
            const response = await sendMessage(
              `I would like to choose flight ${flight.id}!`,
            );
            setMessages(msgs => [...msgs, response]);
          }}
        >
          {flight.name}
        </div>
      ))}
    </div>
  );
}
```
#### After: Use another chat hook with same ID from the component
After switching to AI SDK UI, these messages are synced by initializing the `useChat` hook in the component with the same `id` as the parent component.
```tsx filename="@/app/components/list-flights.tsx"
'use client';
import { useChat } from 'ai/react';
export function ListFlights({ chatId, flights }) {
const { append } = useChat({
id: chatId,
body: { id: chatId },
maxSteps: 5,
});
  return (
    <div>
      {flights.map(flight => (
        <div
          key={flight.id}
          onClick={async () => {
            await append({
              role: 'user',
              content: `I would like to choose flight ${flight.id}!`,
            });
          }}
        >
          {flight.name}
        </div>
      ))}
    </div>
  );
}
```
### Loading Indicators
In AI SDK RSC, you can use the `initial` parameter of `streamUI` to define the component to display while the generation is in progress.
#### Before: Use `initial` to show loading indicator
```tsx filename="@/app/actions.tsx"
import { openai } from '@ai-sdk/openai';
import { streamUI } from 'ai/rsc';
const { value: stream } = await streamUI({
model: openai('gpt-4o'),
system: 'you are a friendly assistant!',
messages,
  initial: <div>Loading...</div>,
text: async function* ({ content, done }) {
// process text
},
tools: {
// tool definitions
},
});
return stream;
```
With AI SDK UI, you can use the tool invocation state to show a loading indicator while the tool is executing.
#### After: Use tool invocation state to show loading indicator
```tsx filename="@/app/components/message.tsx"
'use client';
export function Message({ role, content, toolInvocations }) {
return (
{role}
{content}
{toolInvocations && (
{toolInvocations.map(toolInvocation => {
const { toolName, toolCallId, state } = toolInvocation;
if (state === 'result') {
const { result } = toolInvocation;
return (
{toolName === 'getWeather' ? (
) : null}
);
} else {
return (
{toolName === 'getWeather' ? (
) : (
Loading...
)}
);
}
})}
)}
);
}
```
### Saving Chats
Before implementing `streamUI` as a server action, you should create an `<AI>` provider and wrap your application at the root layout to sync the AI and UI states. During initialization, you typically use the `onSetAIState` callback function to track updates to the AI state and save it to the database when `done(...)` is called.
#### Before: Save chats using callback function of context provider
```ts filename="@/app/actions.ts"
import { createAI } from 'ai/rsc';
import { saveChat } from '@/utils/queries';
export const AI = createAI({
initialAIState: {},
initialUIState: {},
actions: {
// server actions
},
onSetAIState: async ({ state, done }) => {
'use server';
if (done) {
await saveChat(state);
}
},
});
```
#### After: Save chats using callback function of `streamText`
With AI SDK UI, you will save chats using the `onFinish` callback function of `streamText` in your route handler.
```ts filename="@/app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { saveChat } from '@/utils/queries';
import { streamText, convertToCoreMessages } from 'ai';
export async function POST(request) {
const { id, messages } = await request.json();
const coreMessages = convertToCoreMessages(messages);
const result = streamText({
model: openai('gpt-4o'),
system: 'you are a friendly assistant!',
messages: coreMessages,
onFinish: async ({ responseMessages }) => {
try {
await saveChat({
id,
messages: [...coreMessages, ...responseMessages],
});
} catch (error) {
console.error('Failed to save chat');
}
},
});
return result.toDataStreamResponse();
}
```
### Restoring Chats
When using AI SDK RSC, the `useUIState` hook contains the UI state of the chat. When restoring a previously saved chat, the UI state needs to be loaded with messages.
Similar to how you typically save chats in AI SDK RSC, you should use the `onGetUIState` callback function to retrieve the chat from the database, convert it into UI state, and return it to be accessible through `useUIState`.
#### Before: Load chat from database using callback function of context provider
```ts filename="@/app/actions.ts"
import { createAI } from 'ai/rsc';
import { loadChatFromDB, convertToUIState } from '@/utils/queries';
export const AI = createAI({
actions: {
// server actions
},
onGetUIState: async () => {
'use server';
const chat = await loadChatFromDB();
const uiState = convertToUIState(chat);
return uiState;
},
});
```
AI SDK UI uses the `messages` field of `useChat` to store messages. To load messages when `useChat` is mounted, you should use `initialMessages`.
As messages are typically loaded from the database, we can use a server action inside a Page component to fetch an older chat from the database during static generation and pass the messages as props to the `<Chat />` component.
#### After: Load chat from database during static generation of page
```tsx filename="@/app/chat/[id]/page.tsx"
import { Chat } from '@/app/components/chat';
import { getChatById } from '@/utils/queries';
// link to example implementation: https://github.com/vercel/ai-chatbot/blob/00b125378c998d19ef60b73fe576df0fe5a0e9d4/lib/utils.ts#L87-L127
import { convertToUIMessages } from '@/utils/functions';
export default async function Page({ params }: { params: any }) {
const { id } = params;
const chatFromDb = await getChatById({ id });
const chat: Chat = {
...chatFromDb,
messages: convertToUIMessages(chatFromDb.messages),
};
  return <Chat id={id} initialMessages={chat.messages} />;
}
```
#### After: Pass chat messages as props and load into chat hook
```tsx filename="@/app/components/chat.tsx"
'use client';
import { Message } from 'ai';
import { useChat } from 'ai/react';
export function Chat({
id,
initialMessages,
}: {
  id: string;
  initialMessages: Array<Message>;
}) {
const { messages } = useChat({
id,
initialMessages,
});
  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {message.role}: {message.content}
        </div>
      ))}
    </div>
  );
}
```
## Streaming Object Generation
The `createStreamableValue` function streams any serializable data from the server to the client. As a result, this function allows you to stream object generations from the server to the client when paired with `streamObject`.
#### Before: Use streamable value to stream object generations
```ts filename="@/app/actions.ts"
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createStreamableValue } from 'ai/rsc';
import { notificationsSchema } from '@/utils/schemas';
export async function generateSampleNotifications() {
'use server';
const stream = createStreamableValue();
(async () => {
const { partialObjectStream } = streamObject({
model: openai('gpt-4o'),
system: 'generate sample ios messages for testing',
prompt: 'messages from a family group chat during diwali, max 4',
schema: notificationsSchema,
});
for await (const partialObject of partialObjectStream) {
stream.update(partialObject);
}
    stream.done();
  })();
return { partialNotificationsStream: stream.value };
}
```
#### Before: Read streamable value and update object
```tsx filename="@/app/page.tsx"
'use client';
import { useState } from 'react';
import { readStreamableValue } from 'ai/rsc';
import { generateSampleNotifications } from '@/app/actions';
export default function Page() {
const [notifications, setNotifications] = useState(null);
return (
);
}
```
To migrate to AI SDK UI, you should use the `useObject` hook and implement `streamObject` within your route handler.
#### After: Replace with route handler and stream text response
```ts filename="@/app/api/object/route.ts"
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { notificationSchema } from '@/utils/schemas';
export async function POST(req: Request) {
const context = await req.json();
const result = streamObject({
model: openai('gpt-4-turbo'),
schema: notificationSchema,
prompt:
`Generate 3 notifications for a messages app in this context:` + context,
});
return result.toTextStreamResponse();
}
```
#### After: Use object hook to decode stream and update object
```tsx filename="@/app/page.tsx"
'use client';
import { useObject } from 'ai/react';
import { notificationSchema } from '@/utils/schemas';
export default function Page() {
const { object, submit } = useObject({
api: '/api/object',
schema: notificationSchema,
});
  return (
    <div>
      {/* `submit` sends the context to the route handler; `object` updates as the stream arrives */}
      <button onClick={() => submit('family group chat during diwali')}>
        Generate notifications
      </button>
      <pre>{JSON.stringify(object, null, 2)}</pre>
    </div>
  );
}
```
---
title: AI SDK RSC
description: Learn about AI SDK RSC.
collapsed: true
---
# AI SDK RSC
AI SDK RSC is currently experimental. We recommend using [AI SDK
UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from
RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).
---
title: Prompt Engineering
description: Learn how to engineer prompts for LLMs with the AI SDK
---
# Prompt Engineering
## What is a Large Language Model (LLM)?
A Large Language Model is essentially a prediction engine that takes a sequence of words as input and aims to predict the most likely sequence to follow. It does this by assigning probabilities to potential next sequences and then selecting one. The model continues to generate sequences until it meets a specified stopping criterion.
These models learn by training on massive text corpuses, which means they will be better suited to some use cases than others. For example, a model trained on GitHub data would understand the probabilities of sequences in source code particularly well. However, it's crucial to understand that the generated sequences, while often seeming plausible, can sometimes be random and not grounded in reality. As these models become more accurate, many surprising abilities and applications emerge.
## What is a prompt?
Prompts are the starting points for LLMs. They are the inputs that trigger the model to generate text. The scope of prompt engineering involves not just crafting these prompts but also understanding related concepts such as hidden prompts, tokens, token limits, and the potential for prompt hacking, which includes phenomena like jailbreaks and leaks.
## Why is prompt engineering needed?
Prompt engineering currently plays a pivotal role in shaping the responses of LLMs. It allows us to tweak the model to respond more effectively to a broader range of queries. This includes the use of techniques like semantic search, command grammars, and the ReActive model architecture. The performance, context window, and cost of LLMs vary between models and model providers, which adds further constraints to the mix. For example, the GPT-4 model is more expensive than GPT-3.5-turbo and significantly slower, but it can also be more effective at certain tasks. And so, like many things in software engineering, there are trade-offs between cost and performance.
To assist with comparing and tweaking LLMs, we've built an AI playground that allows you to compare the performance of different models side-by-side online. When you're ready, you can even generate code with the AI SDK to quickly integrate your prompt and selected model into your own applications.
## Example: Build a Slogan Generator
### Start with an instruction
Imagine you want to build a slogan generator for marketing campaigns. Creating catchy slogans isn't always straightforward!
First, you'll need a prompt that makes it clear what you want. Let's start with an instruction. Submit this prompt to generate your first completion.
Not bad! Now, try making your instruction more specific.
Introducing a single descriptive term to our prompt influences the completion. Essentially, crafting your prompt is the means by which you "instruct" or "program" the model.
### Include examples
Clear instructions are key for quality outcomes, but that might not always be enough. Let's try to enhance your instruction further.
These slogans are fine, but could be even better. It appears the model overlooked the 'live' part in our prompt. Let's change it slightly to generate more appropriate suggestions.
Often, it's beneficial to both demonstrate and tell the model your requirements. Incorporating examples in your prompt can aid in conveying patterns or subtleties. Test this prompt that carries a few examples.
Great! Incorporating examples of expected output for a certain input prompted the model to generate the kind of names we aimed for.
### Tweak your settings
Apart from designing prompts, you can influence completions by tweaking model settings. A crucial setting is the **temperature**.
You might have seen that the same prompt, when repeated, yielded the same or nearly the same completions. This happens when your temperature is at 0.
Attempt to re-submit the identical prompt a few times with temperature set to 1.
Notice the difference? With a temperature above 0, the same prompt delivers varied completions each time.
Keep in mind that the model forecasts the text most likely to follow the preceding text. Temperature, a value from 0 to 1, essentially governs the model's confidence level in making these predictions. A lower temperature implies lesser risks, leading to more precise and deterministic completions. A higher temperature yields a broader range of completions.
For your slogan generator, you might want a large pool of name suggestions. A moderate temperature of 0.6 should serve well.
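If you are calling the model from code with the AI SDK rather than the playground, the same setting is available on the core functions. Here is a minimal sketch; the model and prompt are only examples:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Create three slogans for a coffee shop.',
  // 0 yields deterministic completions; higher values yield more varied ones
  temperature: 0.6,
});

console.log(text);
```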
## Recommended Resources
Prompt Engineering is evolving rapidly, with new methods and research papers surfacing every week. Here are some resources that we've found useful for learning about and experimenting with prompt engineering:
- [The Vercel AI Playground](/playground)
- [Brex Prompt Engineering](https://github.com/brexhq/prompt-engineering)
- [Prompt Engineering Guide by Dair AI](https://www.promptingguide.ai/)
---
title: Stopping Streams
description: Learn how to cancel streams with the AI SDK
---
# Stopping Streams
You will often need to cancel an ongoing stream.
For example, users might want to stop a stream when they realize that the response is not what they want.
The different parts of the AI SDK support cancelling streams in different ways.
## AI SDK Core
The AI SDK functions have an `abortSignal` argument that you can use to cancel a stream.
You would use this if you want to cancel a stream from the server side to the LLM API, e.g. by
forwarding the `abortSignal` from the request.
```tsx highlight="10,11"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
export async function POST(req: Request) {
const { prompt } = await req.json();
const result = streamText({
model: openai('gpt-4-turbo'),
prompt,
// forward the abort signal:
abortSignal: req.signal,
});
return result.toTextStreamResponse();
}
```
## AI SDK UI
The hooks, e.g. `useChat` or `useCompletion`, provide a `stop` helper function that can be used to cancel a stream.
This will cancel the stream from the client side to the server.
```tsx file="app/page.tsx" highlight="9,18-20"
'use client';
import { useCompletion } from 'ai/react';
export default function Chat() {
const {
input,
completion,
stop,
isLoading,
handleSubmit,
handleInputChange,
} = useCompletion();
  return (
    <div>
      {isLoading && (
        <button type="button" onClick={() => stop()}>
          Stop
        </button>
      )}

      {completion}

      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
## AI SDK RSC
The AI SDK RSC does not currently support stopping streams.
---
title: Backpressure
description: How to handle backpressure and cancellation when working with the AI SDK
---
# Stream Back-pressure and Cancellation
This page focuses on understanding back-pressure and cancellation when working with streams. You do not need to know this information to use the AI SDK, but for those interested, it offers a deeper dive on why and how the SDK optimally streams responses.
In the following sections, we'll explore back-pressure and cancellation in the context of a simple example program. We'll discuss the issues that can arise from an eager approach and demonstrate how a lazy approach can resolve them.
## Back-pressure and Cancellation with Streams
Let's begin by setting up a simple example program:
```jsx
// A generator that will yield positive integers
async function* integers() {
let i = 1;
while (true) {
console.log(`yielding ${i}`);
yield i++;
await sleep(100);
}
}
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
// Wraps a generator into a ReadableStream
function createStream(iterator) {
return new ReadableStream({
async start(controller) {
for await (const v of iterator) {
controller.enqueue(v);
}
controller.close();
},
});
}
// Collect data from stream
async function run() {
// Set up a stream of integers
const stream = createStream(integers());
// Read values from our stream
const reader = stream.getReader();
for (let i = 0; i < 10_000; i++) {
// we know our stream is infinite, so there's no need to check `done`.
const { value } = await reader.read();
console.log(`read ${value}`);
await sleep(1_000);
}
}
run();
```
In this example, we create an async-generator that yields positive integers, a `ReadableStream` that wraps our integer generator, and a reader which will read values out of our stream. Notice, too, that our integer generator logs out `"yielding ${i}"`, and our reader logs out `"read ${value}"`. Both take an arbitrary amount of time to process data, represented with a 100ms sleep in our generator, and a 1sec sleep in our reader.
## Back-pressure
If you were to run this program, you'd notice something funny. We'll see roughly 10 "yield" logs for every "read" log. This might seem obvious: the generator can push values 10x faster than the reader can pull them out. But it represents a problem: our `stream` has to maintain an ever-expanding queue of items that have been pushed in but not yet pulled out.
The problem stems from the way we wrap our generator into a stream. Notice the use of `for await (…)` inside our `start` handler. This is an **eager** for-loop, and it is constantly running to get the next value from our generator to be enqueued in our stream. This means our stream does not respect back-pressure, the signal from the consumer to the producer that more values aren't needed _yet_. We've essentially spawned a thread that will perpetually push more data into the stream, one that runs as fast as possible to push new data immediately. Worse, there's no way to signal to this thread to stop running when we don't need additional data.
To fix this, `ReadableStream` allows a `pull` handler. `pull` is called every time the consumer attempts to read more data from our stream (if there's no data already queued internally). But it's not enough to just move the `for await (…)` into `pull`; we also need to convert from eager enqueuing to **lazy** enqueuing. By making these two changes, we'll be able to react to the consumer. If they need more data, we can easily produce it, and if they don't, then we don't need to spend any time doing unnecessary work.
```jsx
function createStream(iterator) {
return new ReadableStream({
async pull(controller) {
const { value, done } = await iterator.next();
if (done) {
controller.close();
} else {
controller.enqueue(value);
}
},
});
}
```
Our `createStream` is a little more verbose now, but the new code is important. First, we need to manually call our `iterator.next()` method. This returns a `Promise` for an object with the type signature `{ done: boolean, value: T }`. If `done` is `true`, then we know that our iterator won't yield any more values and we must `close` the stream (this allows the consumer to know that the stream is also finished producing values). Else, we need to `enqueue` our newly produced value.
When we run this program, we see that our "yield" and "read" logs are now paired. We're no longer yielding 10x integers for every read! And, our stream now only needs to maintain 1 item in its internal buffer. We've essentially given control to the consumer, so that it's responsible for producing new values as it needs it. Neato!
## Cancellation
Let's go back to our initial eager example, with 1 small edit. Now instead of reading 10,000 integers, we're only going to read 3:
```jsx
// A generator that will yield positive integers
async function* integers() {
let i = 1;
while (true) {
console.log(`yielding ${i}`);
yield i++;
await sleep(100);
}
}
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
// Wraps a generator into a ReadableStream
function createStream(iterator) {
return new ReadableStream({
async start(controller) {
for await (const v of iterator) {
controller.enqueue(v);
}
controller.close();
},
});
}
// Collect data from stream
async function run() {
  // Set up a stream of integers
const stream = createStream(integers());
// Read values from our stream
const reader = stream.getReader();
// We're only reading 3 items this time:
for (let i = 0; i < 3; i++) {
// we know our stream is infinite, so there's no need to check `done`.
const { value } = await reader.read();
console.log(`read ${value}`);
await sleep(1000);
}
}
run();
```
We're back to yielding 10x the number of values read. But notice now, after we've read 3 values, we're continuing to yield new values. We know that our reader will never read another value, but our stream doesn't! The eager `for await (…)` will continue forever, loudly enqueuing new values into our stream's buffer and increasing our memory usage until it consumes all available program memory.
The fix to this is exactly the same: use `pull` and manual iteration. By producing values _**lazily**_, we tie the lifetime of our integer generator to the lifetime of the reader. Once the reads stop, the yields will stop too:
```jsx
// Wraps a generator into a ReadableStream
function createStream(iterator) {
return new ReadableStream({
async pull(controller) {
const { value, done } = await iterator.next();
if (done) {
controller.close();
} else {
controller.enqueue(value);
}
},
});
}
```
Since the solution is the same as implementing back-pressure, it shows that they're just two facets of the same problem: pushing values into a stream should be done **lazily**; doing it eagerly results in the problems we just saw.
## Tying Stream Laziness to AI Responses
Now let's imagine you're integrating AIBot service into your product. Users will be able to prompt "count from 1 to infinity", the browser will fetch your AI API endpoint, and your servers connect to AIBot to get a response. But "infinity" is, well, infinite. The response will never end!
After a few seconds, the user gets bored and navigates away. Or maybe you're doing local development and a hot-module reload refreshes your page. The browser will have ended its connection to the API endpoint, but will your server end its connection with AIBot?
If you used the eager `for await (...)` approach, then the connection is still running and your server is asking for more and more data from AIBot. Our server spawned a "thread" and there's no signal when we can end the eager pulls. Eventually, the server is going to run out of memory (remember, there's no active fetch connection to read the buffering responses and free them).
{/* When we started writing the streaming code for the AI SDK, we confirm aborting a fetch will end a streamed response from Next.js */}
With the lazy approach, this is taken care of for you. Because the stream will only request new data from AIBot when the consumer requests it, navigating away from the page naturally frees all resources. The fetch connection aborts and the server can clean up the response. The `ReadableStream` tied to that response can now be garbage collected. When that happens, the connection it holds to AIBot can then be freed.
---
title: Caching
description: How to handle caching when working with the AI SDK
---
# Caching Responses
Depending on the type of application you're building, you may want to cache the responses you receive from your AI provider, at least temporarily.
## Using Language Model Middleware (Recommended)
The recommended approach to caching responses is using [language model middleware](/docs/ai-sdk-core/middleware). Language model middleware is a way to enhance the behavior of language models by intercepting and modifying the calls to the language model. Let's see how you can use language model middleware to cache responses.
```ts filename="ai/middleware.ts"
import { Redis } from '@upstash/redis';
import type {
LanguageModelV1,
Experimental_LanguageModelV1Middleware as LanguageModelV1Middleware,
LanguageModelV1StreamPart,
} from 'ai';
import { simulateReadableStream } from 'ai/test';
const redis = new Redis({
url: process.env.KV_URL,
token: process.env.KV_TOKEN,
});
export const cacheMiddleware: LanguageModelV1Middleware = {
wrapGenerate: async ({ doGenerate, params }) => {
const cacheKey = JSON.stringify(params);
const cached = (await redis.get(cacheKey)) as Awaited<
  ReturnType<LanguageModelV1['doGenerate']>
> | null;
if (cached !== null) {
return {
...cached,
response: {
...cached.response,
timestamp: cached?.response?.timestamp
? new Date(cached?.response?.timestamp)
: undefined,
},
};
}
const result = await doGenerate();
redis.set(cacheKey, result);
return result;
},
wrapStream: async ({ doStream, params }) => {
const cacheKey = JSON.stringify(params);
// Check if the result is in the cache
const cached = await redis.get(cacheKey);
// If cached, return a simulated ReadableStream that yields the cached result
if (cached !== null) {
// Format the timestamps in the cached response
const formattedChunks = (cached as LanguageModelV1StreamPart[]).map(p => {
if (p.type === 'response-metadata' && p.timestamp) {
return { ...p, timestamp: new Date(p.timestamp) };
} else return p;
});
return {
stream: simulateReadableStream({
initialDelayInMs: 0,
chunkDelayInMs: 10,
chunks: formattedChunks,
}),
rawCall: { rawPrompt: null, rawSettings: {} },
};
}
// If not cached, proceed with streaming
const { stream, ...rest } = await doStream();
const fullResponse: LanguageModelV1StreamPart[] = [];
const transformStream = new TransformStream<
LanguageModelV1StreamPart,
LanguageModelV1StreamPart
>({
transform(chunk, controller) {
fullResponse.push(chunk);
controller.enqueue(chunk);
},
flush() {
// Store the full response in the cache after streaming is complete
redis.set(cacheKey, fullResponse);
},
});
return {
stream: stream.pipeThrough(transformStream),
...rest,
};
},
};
```
This example uses `@upstash/redis` to store and retrieve the assistant's
responses but you can use any KV storage provider you would like.
`LanguageModelMiddleware` has two methods: `wrapGenerate` and `wrapStream`. `wrapGenerate` is called when using [`generateText`](/docs/reference/ai-sdk-core/generate-text) and [`generateObject`](/docs/reference/ai-sdk-core/generate-object), while `wrapStream` is called when using [`streamText`](/docs/reference/ai-sdk-core/stream-text) and [`streamObject`](/docs/reference/ai-sdk-core/stream-object).
For `wrapGenerate`, you can cache the response directly. For `wrapStream`, in contrast, you cache an array of the stream parts, which can then be passed to the [`simulateReadableStream`](/docs/ai-sdk-core/testing#simulate-data-stream-protocol-responses) function to create a simulated `ReadableStream` that returns the cached response. In this way, the cached response is returned chunk-by-chunk as if it were being generated by the model. You can control the initial delay and delay between chunks by adjusting the `initialDelayInMs` and `chunkDelayInMs` parameters of `simulateReadableStream`.
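To apply the middleware, wrap your model before passing it to an AI SDK Core function. A minimal sketch, assuming the `experimental_wrapLanguageModel` helper from the `ai` package (renamed to `wrapLanguageModel` in later versions) and an import path that matches where you saved the middleware:

```ts filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import {
  experimental_wrapLanguageModel as wrapLanguageModel,
  streamText,
} from 'ai';
import { cacheMiddleware } from '@/ai/middleware';

// Wrap the base model with the caching middleware defined above
const cachedModel = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: cacheMiddleware,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: cachedModel,
    messages,
  });

  return result.toDataStreamResponse();
}
```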
You can see a full example of caching with Redis in a Next.js application in our [Caching Middleware Recipe](/cookbook/next/caching-middleware).
## Using Lifecycle Callbacks
Alternatively, each AI SDK Core function has special lifecycle callbacks you can use. The one of interest is likely `onFinish`, which is called when the generation is complete. This is where you can cache the full response.
Here's an example that uses [Upstash Redis](https://upstash.com/docs/redis/overall/getstarted) and Next.js to cache the OpenAI response for 1 hour:
```tsx filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { formatDataStreamPart, streamText } from 'ai';
import { Redis } from '@upstash/redis';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
const redis = new Redis({
url: process.env.KV_URL,
token: process.env.KV_TOKEN,
});
export async function POST(req: Request) {
const { messages } = await req.json();
// come up with a key based on the request:
const key = JSON.stringify(messages);
// Check if we have a cached response
const cached = await redis.get(key);
if (cached != null) {
return new Response(formatDataStreamPart('text', cached), {
status: 200,
headers: { 'Content-Type': 'text/plain' },
});
}
// Call the language model:
const result = streamText({
model: openai('gpt-4o'),
messages,
async onFinish({ text }) {
// Cache the response text:
await redis.set(key, text);
await redis.expire(key, 60 * 60);
},
});
// Respond with the stream
return result.toDataStreamResponse();
}
```
---
title: Multiple Streamables
description: Learn to handle multiple streamables in your application.
---
# Multiple Streams
## Multiple Streamable UIs
The AI SDK RSC APIs allow you to compose and return any number of streamable UIs, along with other data, in a single request. This can be useful when you want to decouple the UI into smaller components and stream them separately.
```tsx file='app/actions.tsx'
'use server';
import { createStreamableUI } from 'ai/rsc';
export async function getWeather() {
const weatherUI = createStreamableUI();
const forecastUI = createStreamableUI();
  weatherUI.update(<div>Loading weather...</div>);
  forecastUI.update(<div>Loading forecast...</div>);
  // Illustrative async helpers (assumed); resolve each stream as its data arrives.
  getWeatherData().then(data => {
    weatherUI.done(<div>{data}</div>);
  });
  getForecastData().then(data => {
    forecastUI.done(<div>{data}</div>);
  });
// Return both streamable UIs and other data fields.
return {
requestedAt: Date.now(),
weather: weatherUI.value,
forecast: forecastUI.value,
};
}
```
The client side code is similar to the previous example, but the [tool call](/docs/ai-sdk-core/tools-and-tool-calling) will return the new data structure with the weather and forecast UIs. Depending on the speed of getting weather and forecast data, these two components might be updated independently.
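For illustration, if you call the action directly from a client component (rather than through a tool call), you could render both values independently; this is a minimal sketch and the surrounding markup is an assumption:

```tsx file='app/page.tsx'
'use client';

import { useState, type ReactNode } from 'react';
import { getWeather } from './actions';

export default function Page() {
  const [weather, setWeather] = useState<ReactNode>(null);
  const [forecast, setForecast] = useState<ReactNode>(null);

  return (
    <div>
      <button
        onClick={async () => {
          const result = await getWeather();
          // Each streamable UI continues to update independently as the server finishes it
          setWeather(result.weather);
          setForecast(result.forecast);
        }}
      >
        Get weather
      </button>

      {weather}
      {forecast}
    </div>
  );
}
```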
## Nested Streamable UIs
You can stream UI components within other UI components. This allows you to create complex UIs that are built up from smaller, reusable components. In the example below, we pass a `historyChart` streamable as a prop to a `StockCard` component. The StockCard can render the `historyChart` streamable, and it will automatically update as the server responds with new data.
```tsx file='app/actions.tsx'
async function getStockHistoryChart({ symbol }: { symbol: string }) {
'use server';
const ui = createStreamableUI();
// We need to wrap this in an async IIFE to avoid blocking.
(async () => {
const price = await getStockPrice({ symbol });
// Show a spinner as the history chart for now.
    const historyChart = createStreamableUI(<Spinner />); // <Spinner /> is an illustrative placeholder
    ui.done(<StockCard historyChart={historyChart.value} price={price} />);
// Getting the history data and then update that part of the UI.
const historyData = await fetch('https://my-stock-data-api.com');
    historyChart.done(<HistoryChart data={historyData} />); // <HistoryChart /> is an illustrative placeholder
})();
return ui;
}
```
---
title: Rate Limiting
description: Learn how to rate limit your application.
---
# Rate Limiting
Rate limiting helps you protect your APIs from abuse. It involves setting a
maximum threshold on the number of requests a client can make within a
specified timeframe. This simple technique acts as a gatekeeper,
preventing excessive usage that can degrade service performance and incur
unnecessary costs.
## Rate Limiting with Vercel KV and Upstash Ratelimit
In this example, you will protect an API endpoint using [Vercel KV](https://vercel.com/storage/kv)
and [Upstash Ratelimit](https://github.com/upstash/ratelimit).
```tsx filename='app/api/generate/route.ts'
import kv from '@vercel/kv';
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { Ratelimit } from '@upstash/ratelimit';
import { NextRequest } from 'next/server';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
// Create Rate limit
const ratelimit = new Ratelimit({
redis: kv,
limiter: Ratelimit.fixedWindow(5, '30s'),
});
export async function POST(req: NextRequest) {
// call ratelimit with request ip
const ip = req.ip ?? 'ip';
const { success, remaining } = await ratelimit.limit(ip);
  // block the request if unsuccessful
if (!success) {
return new Response('Ratelimited!', { status: 429 });
}
const { messages } = await req.json();
const result = streamText({
model: openai('gpt-3.5-turbo'),
messages,
});
return result.toDataStreamResponse();
}
```
## Simplify API Protection
With Vercel KV and Upstash Ratelimit, it is possible to protect your APIs
from abuse with ease. To learn more about how Ratelimit works and
how it can be configured to your needs, see [Ratelimit Documentation](https://upstash.com/docs/oss/sdks/ts/ratelimit/overview).
---
title: Rendering UI with Language Models
description: Rendering UI with Language Models
---
# Rendering User Interfaces with Language Models
Language models generate text, so at first it may seem like you would only need to render text in your application.
```tsx highlight="16" filename="app/actions.tsx"
const text = generateText({
model: openai('gpt-3.5-turbo'),
system: 'You are a friendly assistant',
prompt: 'What is the weather in SF?',
tools: {
getWeather: {
description: 'Get the weather for a location',
parameters: z.object({
city: z.string().describe('The city to get the weather for'),
unit: z
.enum(['C', 'F'])
.describe('The unit to display the temperature in'),
}),
execute: async ({ city, unit }) => {
const weather = getWeather({ city, unit });
return `It is currently ${weather.value}°${unit} and ${weather.description} in ${city}!`;
},
},
},
});
```
Above, the language model is passed a [tool](/docs/ai-sdk-core/tools-and-tool-calling) called `getWeather` that returns the weather information as text. However, instead of returning text, if you return a JSON object that represents the weather information, you can use it to render a React component instead.
```tsx highlight="18-23" filename="app/action.ts"
const text = generateText({
model: openai('gpt-3.5-turbo'),
system: 'You are a friendly assistant',
prompt: 'What is the weather in SF?',
tools: {
getWeather: {
description: 'Get the weather for a location',
parameters: z.object({
city: z.string().describe('The city to get the weather for'),
unit: z
.enum(['C', 'F'])
.describe('The unit to display the temperature in'),
}),
execute: async ({ city, unit }) => {
const weather = getWeather({ city, unit });
      const { temperature, description, forecast } = weather; // `unit` already comes from the tool parameters
return {
temperature,
unit,
description,
forecast,
};
},
},
},
});
```
Now you can use the object returned by the `getWeather` function to conditionally render a React component, for example `<Weather />`, that displays the weather information by passing the object as props.
```tsx filename="app/page.tsx"
// Illustrative: pass the tool result to an assumed <Weather /> component as props
return <Weather {...weather} />;
```
Rendering interfaces as part of language model generations elevates the user experience of your application, allowing people to interact with language models beyond text.
They also make it easier for you to interpret [sequential tool calls](/docs/ai-sdk-rsc/multistep-interfaces) that take place in multiple steps and help identify and debug where the model reasoned incorrectly.
## Rendering Multiple User Interfaces
To recap, an application has to go through the following steps to render user interfaces as part of model generations:
1. The user prompts the language model.
2. The language model generates a response that includes a tool call.
3. The tool call returns a JSON object that represents the user interface.
4. The response is sent to the client.
5. The client receives the response and checks if the latest message was a tool call.
6. If it was a tool call, the client renders the user interface based on the JSON object returned by the tool call.
Most applications have multiple tools that are called by the language model, and each tool can return a different user interface.
For example, a tool that searches for courses can return a list of courses, while a tool that searches for people can return a list of people. As this list grows, the complexity of your application will grow as well and it can become increasingly difficult to manage these user interfaces.
```tsx filename='app/page.tsx'
{message.role === 'tool' ? (
  message.name === 'api-search-course' ? (
    <Courses courses={message.content} />
  ) : message.name === 'api-search-profile' ? (
    <People people={message.content} />
  ) : message.name === 'api-meetings' ? (
    <Meetings meetings={message.content} />
  ) : message.name === 'api-search-building' ? (
    <Buildings buildings={message.content} />
  ) : message.name === 'api-events' ? (
    <Events events={message.content} />
  ) : message.name === 'api-meals' ? (
    <Meals meals={message.content} />
  ) : null
) : (
  <div>{message.content}</div>
)}
```
## Rendering User Interfaces on the Server
The **AI SDK RSC (`ai/rsc`)** takes advantage of React Server Components (RSCs) to solve the problem of managing all your React components on the client side, allowing you to render React components on the server and stream them to the client.
Rather than conditionally rendering user interfaces on the client based on the data returned by the language model, you can directly stream them from the server during a model generation.
```tsx highlight="1,3,22-26,33" filename="app/action.tsx"
import { createStreamableUI } from 'ai/rsc';

const uiStream = createStreamableUI();

const text = generateText({
  model: openai('gpt-3.5-turbo'),
  system: 'You are a friendly assistant',
  prompt: 'What is the weather in SF?',
  tools: {
    getWeather: {
      description: 'Get the weather for a location',
      parameters: z.object({
        city: z.string().describe('The city to get the weather for'),
        unit: z
          .enum(['C', 'F'])
          .describe('The unit to display the temperature in'),
      }),
      execute: async ({ city, unit }) => {
        const weather = getWeather({ city, unit });
        const { temperature, description, forecast } = weather;
        // Render the weather card on the server and close the UI stream.
        uiStream.done(
          <WeatherCard
            weather={{ temperature, unit, description, forecast }}
          />,
        );
      },
    },
  },
});

return {
  display: uiStream.value,
};
```
The [`createStreamableUI`](/docs/reference/ai-sdk-rsc/create-streamable-ui) function belongs to the `ai/rsc` module and creates a stream that can send React components to the client.
On the server, you render the `<WeatherCard />` component with the props passed to it, and then stream it to the client. On the client side, you only need to render the UI that is streamed from the server.
```tsx filename="app/page.tsx" highlight="4"
return (
  <div>
    {messages.map(message => (
      <div>{message.display}</div>
    ))}
  </div>
);
```
Now the steps involved are simplified:
1. The user prompts the language model.
2. The language model generates a response that includes a tool call.
3. The tool call renders a React component along with relevant props that represent the user interface.
4. The response is streamed to the client and rendered directly.
> **Note:** You can also render text on the server and stream it to the client using React Server Components. This way, all operations from language model generation to UI rendering can be done on the server, while the client only needs to render the UI that is streamed from the server.
Check out this [example](/examples/next-app/interface/stream-component-updates) for a full illustration of how to stream component updates with React Server Components in Next.js App Router.
---
title: Language Models as Routers
description: Generative User Interfaces and Language Models as Routers
---
# Generative User Interfaces
Since language models can render user interfaces as part of their generations, the resulting interfaces are referred to as generative user interfaces.
In this section we will learn more about generative user interfaces and their impact on the way AI applications are built.
## Deterministic Routes and Probabilistic Routing
Generative user interfaces are not deterministic, because they depend on the model's generation output. Since these generations are probabilistic, the same user query can result in a different user interface each time.
Users expect their experience using your application to be predictable, so non-deterministic user interfaces can sound like a bad idea at first. However, language models can be set up to limit their generations to a particular set of outputs using their ability to call functions.
When a language model is provided with a set of function definitions and instructed to execute one of them based on the user query, it will do one of the following:
- Execute the function that is most relevant to the user query.
- Execute no function if the user query is outside the scope of the functions available to it.
```tsx filename='app/actions.ts'
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';

const sendMessage = (prompt: string) =>
  generateText({
    model: openai('gpt-3.5-turbo'),
    system: 'you are a friendly weather assistant!',
    prompt,
    tools: {
      getWeather: {
        description: 'Get the weather in a location',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }: { location: string }) => ({
          location,
          temperature: 72 + Math.floor(Math.random() * 21) - 10,
        }),
      },
    },
  });

sendMessage('What is the weather in San Francisco?'); // getWeather is called
sendMessage('What is the weather in New York?'); // getWeather is called
sendMessage('What events are happening in London?'); // No function is called
```
This way, it is possible to ensure that the generations result in deterministic outputs, while the model's choice of which function to call remains probabilistic.
This emergent ability of a language model to decide whether a function should be executed based on the user query is often described as the model emulating "reasoning".
As a result, combining a model's ability to reason about which function to execute with its ability to render user interfaces gives you the ability to use language models as routers in your application.
## Language Models as Routers
Historically, developers have written routing logic to connect the different parts of an application so that a user can navigate them and complete a specific task.
In web applications today, most of the routing logic takes place in the form of routes:
- `/login` would navigate you to a page with a login form.
- `/user/john` would navigate you to a page with profile details about John.
- `/api/events?limit=5` would display the five most recent events from an events database.
While routes help you build web applications that connect different parts of an application into a seamless user experience, they can also become a burden to manage as the complexity of your application grows.
Next.js has helped reduce the complexity of developing with routes by introducing:
- File-based routing system
- Dynamic routing
- API routes
- Middleware
- App Router, and so on
With language models becoming better at reasoning, we believe that there is a future where developers only write core, application-specific components while models take care of routing them based on the user's state in an application.
With generative user interfaces, the language model decides which user interface to render based on the user's state in the application, giving users the flexibility to interact with your application in a conversational manner instead of navigating through a series of predefined routes.
### Routing by parameters
For routes like:
- `/profile/[username]`
- `/search?q=[query]`
- `/media/[id]`
that have segments dependent on dynamic data, the language model can generate the correct parameters and render the user interface.
For example, when you're in a search application, you can ask the language model to search for artworks from different artists. The language model will call the search function with the artist's name as a parameter and render the search results.
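As a rough sketch of what that might look like with the AI SDK, the tool below mirrors a `/search?q=[query]` style route; the `searchArtworks` tool and its stubbed backend are illustrative and not part of the SDK:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';

// Hypothetical search backend - replace with a call to your own data source.
async function searchArtworksByArtist(artist: string) {
  return [{ title: 'Water Lilies', artist }];
}

const result = await generateText({
  model: openai('gpt-4o'),
  system: 'You help users search an art catalog.',
  prompt: 'Show me artworks by Claude Monet.',
  tools: {
    // Mirrors a route like /search?q=[query]: the model fills in the parameter.
    searchArtworks: {
      description: 'Search the catalog for artworks by a given artist',
      parameters: z.object({
        artist: z.string().describe('The name of the artist to search for'),
      }),
      execute: async ({ artist }) => searchArtworksByArtist(artist),
    },
  },
});
```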
### Routing by sequence
For actions that require a sequence of steps to be completed by navigating through different routes, the language model can generate the correct sequence of routes to complete in order to fulfill the user's request.
For example, when you're in a calendar application, you can ask the language model to schedule a happy hour evening with your friends. The language model will then understand your request and will perform the right sequence of [tool calls](/docs/ai-sdk-core/tools-and-tool-calling) to:
1. Look up your calendar
2. Look up your friends' calendars
3. Determine the best time for everyone
4. Search for nearby happy hour spots
5. Create an event and send out invites to your friends
Just by defining functions to look up contacts, pull events from a calendar, and search for nearby locations, the model is able to sequentially navigate the routes for you.
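Under the same illustrative assumptions, a sketch of this setup might give the model several composable tools and a `maxSteps` value high enough to chain them in a single request; the tool names and placeholder results below are hypothetical:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';

const { text } = await generateText({
  model: openai('gpt-4o'),
  maxSteps: 10, // allow the model to call several tools in sequence
  system: 'You are a scheduling assistant.',
  prompt: 'Schedule a happy hour evening with my friends next week.',
  tools: {
    lookupCalendar: {
      description: "Get the user's calendar events for a date range",
      parameters: z.object({ from: z.string(), to: z.string() }),
      execute: async ({ from, to }) => [{ title: 'Team sync', start: from, end: to }], // placeholder
    },
    lookupContacts: {
      description: "Get the user's friends and their availability",
      parameters: z.object({}),
      execute: async () => ['Ada', 'Grace'], // placeholder
    },
    searchNearbyVenues: {
      description: 'Search for nearby happy hour spots',
      parameters: z.object({ near: z.string() }),
      execute: async ({ near }) => [{ name: 'The Local', near }], // placeholder
    },
    createEvent: {
      description: 'Create a calendar event and send out invites',
      parameters: z.object({
        title: z.string(),
        time: z.string(),
        attendees: z.array(z.string()),
      }),
      execute: async ({ title, time, attendees }) => ({ created: true, title, time, attendees }), // placeholder
    },
  },
});
```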
To learn more, check out these [examples](/examples/next-app/interface) using the `streamUI` function to stream generative user interfaces to the client based on the response from the language model.
---
title: Multistep Interfaces
description: Concepts behind building multistep interfaces
---
# Multistep Interfaces
Multistep interfaces refer to user interfaces that require multiple independent steps to be executed in order to complete a specific task.
In order to understand multistep interfaces, it is important to understand two concepts:
- Tool composition
- Application context
**Tool composition** is the process of combining multiple [tools](/docs/ai-sdk-core/tools-and-tool-calling) to create a new tool. This is a powerful concept that allows you to break down complex tasks into smaller, more manageable steps.
**Application context** refers to the state of the application at any given point in time. This includes the user's input, the output of the language model, and any other relevant information.
When designing multistep interfaces, you need to consider how the tools in your application can be composed together to form a coherent user experience as well as how the application context changes as the user progresses through the interface.
## Application Context
The application context can be thought of as the conversation history between the user and the language model. The richer the context, the more information the model has to generate relevant responses.
In the context of multistep interfaces, the application context becomes even more important. This is because **the user's input in one step may affect the output of the model in the next step**.
For example, consider a meal logging application that helps users track their daily food intake. The language model is provided with the following tools:
- `log_meal` takes in parameters like the name of the food, the quantity, and the time of consumption to log a meal.
- `delete_meal` takes in the name of the meal to be deleted.
When the user logs a meal, the model generates a response confirming the meal has been logged.
```txt highlight="2"
User: Log a chicken shawarma for lunch.
Tool: log_meal("chicken shawarma", "250g", "12:00 PM")
Model: Chicken shawarma has been logged for lunch.
```
Now when the user decides to delete the meal, the model should be able to reference the previous step to identify the meal to be deleted.
```txt highlight="7"
User: Log a chicken shawarma for lunch.
Tool: log_meal("chicken shawarma", "250g", "12:00 PM")
Model: Chicken shawarma has been logged for lunch.
...
...
User: I skipped lunch today, can you update my log?
Tool: delete_meal("chicken shawarma")
Model: Chicken shawarma has been deleted from your log.
```
In this example, managing the application context is important for the model to generate the correct response. The model needs to have information about the previous actions in order to generate the parameters for the `delete_meal` tool.
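One way to maintain that context with the AI SDK is to keep a running message history and pass it to every call alongside the tools. The sketch below assumes an in-memory history and placeholder tool implementations:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText, type CoreMessage } from 'ai';
import { z } from 'zod';

// Running conversation history - this is the application context the model sees.
const history: CoreMessage[] = [];

async function sendMessage(input: string) {
  history.push({ role: 'user', content: input });

  const { text } = await generateText({
    model: openai('gpt-4o'),
    system: 'You help users track their daily food intake.',
    messages: history,
    maxSteps: 2, // let the model respond after the tool result
    tools: {
      log_meal: {
        description: 'Log a meal with its name, quantity, and time of consumption',
        parameters: z.object({
          name: z.string(),
          quantity: z.string(),
          time: z.string(),
        }),
        execute: async ({ name, quantity, time }) =>
          `Logged ${quantity} of ${name} at ${time}.`, // placeholder implementation
      },
      delete_meal: {
        description: 'Delete a previously logged meal by name',
        parameters: z.object({ name: z.string() }),
        execute: async ({ name }) => `Deleted ${name} from the log.`, // placeholder implementation
      },
    },
  });

  // Persist the assistant's reply so the next turn can reference earlier actions.
  // In a real app you would also persist the tool calls and their results.
  history.push({ role: 'assistant', content: text });
  return text;
}
```

With the earlier `log_meal` turn preserved in the history, the model can resolve "I skipped lunch today" to the previously logged chicken shawarma and call `delete_meal` with the right name.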
## Tool Composition
Tool composition is the process of combining multiple tools to create a new tool. This involves defining the inputs and outputs of each tool, as well as how they interact with each other.
The design of how these tools can be composed together to form a multistep interface is crucial to both the user experience of your application and the model's ability to generate the correct output.
For example, consider a flight booking assistant that can help users book flights. The assistant can be designed to have the following tools:
- `searchFlights`: Searches for flights based on the user's query.
- `lookupFlight`: Looks up details of a specific flight based on the flight number.
- `bookFlight`: Books a flight based on the user's selection.
The `searchFlights` tool is called when the user wants to look up flights for a specific route. This would typically mean the tool should be able to take in parameters like the origin and destination of the flight.
The `lookupFlight` tool is called when the user wants to get more details about a specific flight. This would typically mean the tool should be able to take in parameters like the flight number and return information about seat availability.
The `bookFlight` tool is called when the user decides to book a flight. In order to identify the flight to book, the tool should be able to take in parameters like the flight number, trip date, and passenger details.
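Sketched as AI SDK tool definitions, that could look roughly like the following; the parameter shapes and placeholder return values are assumptions for illustration:

```ts
import { z } from 'zod';

const tools = {
  searchFlights: {
    description: 'Search for flights between an origin and a destination',
    parameters: z.object({
      origin: z.string(),
      destination: z.string(),
    }),
    execute: async ({ origin, destination }) => [
      { flightNumber: 'BA123', origin, destination }, // placeholder - call a flights API here
    ],
  },
  lookupFlight: {
    description: 'Look up details of a specific flight by flight number',
    parameters: z.object({ flightNumber: z.string() }),
    execute: async ({ flightNumber }) => ({
      flightNumber,
      seatsAvailable: 4, // placeholder - call a flights API here
    }),
  },
  bookFlight: {
    description: 'Book a flight for the given passengers',
    parameters: z.object({
      flightNumber: z.string(),
      date: z.string(),
      passengers: z.array(z.string()),
    }),
    execute: async ({ flightNumber, date, passengers }) => ({
      confirmed: true, // placeholder - call a booking API here
      flightNumber,
      date,
      passengers,
    }),
  },
};
```

These definitions would then be passed to the model (for example via the `tools` option of `generateText`) so it can choose between them during the conversation.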
So the conversation between the user and the model could look like this:
```txt highlight="8"
User: I want to book a flight from New York to London.
Tool: searchFlights("New York", "London")
Model: Here are the available flights from New York to London.
User: I want to book flight number BA123 on 12th December for myself and my wife.
Tool: lookupFlight("BA123") -> "4 seats available"
Model: Sure, there are seats available! Can you provide the names of the passengers?
User: John Doe and Jane Doe.
Tool: bookFlight("BA123", "12th December", ["John Doe", "Jane Doe"])
Model: Your flight has been booked!
```
Note that in this conversation, the model could not call the `bookFlight` tool right away because the passenger details were not yet available in the application context, so it asked the user to provide them before proceeding with the booking.
Looking up passenger information could have been another tool that the model calls before the `bookFlight` tool, assuming the user is logged into the application. That way, the model would not have to ask the user for the passenger details and could proceed with the booking.
```txt highlight="5,6"
User: I want to book a flight from New York to London.
Tool: searchFlights("New York", "London")
Model: Here are the available flights from New York to London.
User: I want to book flight number BA123 on 12th December for myself and my wife.
Tool: lookupContacts() -> ["John Doe", "Jane Doe"]
Tool: bookFlight("BA123", "12th December", ["John Doe", "Jane Doe"])
Model: Your flight has been booked!
```
The `lookupContacts` tool is called before the `bookFlight` tool to ensure that the passenger details are available in the application context when booking the flight. This way, the model reduces the number of steps required from the user by calling tools that populate its context, and then uses that information to complete the booking process.
Now, let's introduce another tool called `lookupBooking` that can be used to show booking details by taking in the name of the passenger as parameter. This tool can be composed with the existing tools to provide a more complete user experience.
```txt highlight="2-4"
User: What's the status of my wife's upcoming flight?
Tool: lookupContacts() -> ["John Doe", "Jane Doe"]
Tool: lookupBooking("Jane Doe") -> "BA123 confirmed"
Tool: lookupFlight("BA123") -> "Flight BA123 is scheduled to depart on 12th December."
Model: Your wife's flight BA123 is confirmed and scheduled to depart on 12th December.
```
In this example, the `lookupBooking` tool is used to provide the user with the status of their wife's upcoming flight. By composing this tool with the existing tools, the model is able to generate a response that includes the booking status and the departure date of the flight without requiring the user to provide additional information.
As a result, the more tools you design that can be composed together, the more complex and powerful your application can become.
---
title: Sequential Generations
description: Learn how to implement sequential generations ("chains") with the AI SDK
---
# Sequential Generations
When working with the AI SDK, you may want to create sequences of generations (often referred to as "chains" or "pipes"), where the output of one becomes the input for the next. This can be useful for creating more complex AI-powered workflows or for breaking down larger tasks into smaller, more manageable steps.
## Example
In a sequential chain, the output of one generation is directly used as input for the next generation. This allows you to create a series of dependent generations, where each step builds upon the previous one.
Here's an example of how you can implement sequential actions:
```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

async function sequentialActions() {
  // Generate blog post ideas
  const { text: ideas } = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Generate 10 ideas for a blog post about making spaghetti.',
  });
  console.log('Generated Ideas:\n', ideas);

  // Pick the best idea
  const { text: bestIdea } = await generateText({
    model: openai('gpt-4o'),
    prompt: `Here are some blog post ideas about making spaghetti:
${ideas}

Pick the best idea from the list above and explain why it's the best.`,
  });
  console.log('\nBest Idea:\n', bestIdea);

  // Generate an outline
  const { text: outline } = await generateText({
    model: openai('gpt-4o'),
    prompt: `We've chosen the following blog post idea about making spaghetti:
${bestIdea}

Create a detailed outline for a blog post based on this idea.`,
  });
  console.log('\nBlog Post Outline:\n', outline);
}

sequentialActions().catch(console.error);
```
In this example, we first generate ideas for a blog post, then pick the best idea, and finally create an outline based on that idea. Each step uses the text output of the previous step as input for the next generation.
---
title: Vercel Deployment Guide
description: Learn how to deploy an AI application to production on Vercel
---
# Vercel Deployment Guide
In this guide, you will deploy an AI application to [Vercel](https://vercel.com) using [Next.js](https://nextjs.org) (App Router).
Vercel is a platform for developers that provides the tools, workflows, and infrastructure you need to build and deploy your web apps faster, without the need for additional configuration.
Vercel allows for automatic deployments on every branch push and merges onto the production branch of your GitHub, GitLab, and Bitbucket projects. It is a great option for deploying your AI application.
## Before You Begin
To follow along with this guide, you will need:
- a Vercel account
- an account with a Git provider (this tutorial will use [GitHub](https://github.com))
- an OpenAI API key
This guide will teach you how to deploy the application you built in the Next.js (App Router) quickstart tutorial to Vercel. If you haven’t completed the quickstart guide, you can start with [this repo](https://github.com/vercel-labs/ai-sdk-deployment-guide).
## Commit Changes
Vercel offers a powerful git-centered workflow that automatically deploys your application to production every time you push to your repository’s main branch.
Before committing your local changes, make sure that you have a `.gitignore`. Within your `.gitignore`, ensure that you are excluding your environment variables (`.env`) and your node modules (`node_modules`).
If you have any local changes, you can commit them by running the following commands:
```bash
git add .
git commit -m "init"
```
## Create Git Repo
You can create a GitHub repository from within your terminal with the GitHub CLI ([more info here](https://cli.github.com/)), or on [github.com](https://github.com/). For this tutorial, you will create the repository on github.com.
To create your GitHub repository:
1. Navigate to [github.com](http://github.com/)
2. In the top right corner, click the "plus" icon and select "New repository"
3. Pick a name for your repository (this can be anything)
4. Click "Create repository"
Once you have created your repository, GitHub will redirect you to your new repository.
1. Scroll down the page and copy the commands under the title "...or push an existing repository from the command line"
2. Go back to the terminal, paste and then run the commands
Note: if you run into the error "error: remote origin already exists.", this is because your local repository is still linked to the repository you cloned. To "unlink", you can run the following command:
```bash
rm -rf .git
git init
git add .
git commit -m "init"
```
Rerun the code snippet from the previous step.
## Import Project in Vercel
On the [New Project](https://vercel.com/new) page, under the **Import Git Repository** section, select the Git provider that you would like to import your project from. Follow the prompts to sign in to your GitHub account.
Once you have signed in, you should see your newly created repository from the previous step in the "Import Git Repository" section. Click the "Import" button next to that project.
### Add Environment Variables
Your application uses environment variables to store your OpenAI API key in a `.env.local` file during local development. To add this API key to your production deployment, expand the "Environment Variables" section and paste in your `.env.local` file. Vercel will automatically parse your variables and enter them in the appropriate `key:value` format.
### Deploy
Press the **Deploy** button. Vercel will create the Project and deploy it based on the chosen configurations.
### Enjoy the confetti!
To view your deployment, select the Project in the dashboard and then select the **Domain**. This page is now visible to anyone who has the URL.
## Considerations
When deploying an AI application, there are infrastructure-related considerations to be aware of.
### Function Duration
In most cases, you will call the large language model (LLM) on the server. By default, Vercel serverless functions have a maximum duration of 10 seconds on the Hobby Tier. Depending on your prompt, it can take an LLM more than this limit to complete a response. If the response is not resolved within this limit, the server will throw an error.
You can specify the maximum duration of your Vercel function using [route segment config](https://nextjs.org/docs/app/api-reference/file-conventions/route-segment-config). To update your maximum duration, add the following route segment config to the top of your route handler or the page which is calling your server action.
```ts
export const maxDuration = 30;
```
You can increase the max duration to 60 seconds on the Hobby Tier. For other tiers, [see the documentation](https://vercel.com/docs/functions/runtimes#max-duration) for limits.
## Security Considerations
Given the high cost of calling an LLM, it's important to have measures in place that can protect your application from abuse.
### Rate Limit
Rate limiting is a method used to regulate network traffic by defining a maximum number of requests that a client can send to a server within a given time frame.
Follow [this guide](https://vercel.com/guides/securing-ai-app-rate-limiting) to add rate limiting to your application.
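As a rough sketch of what that guide sets up with [Upstash Ratelimit](https://upstash.com/docs/oss/sdks/ts/ratelimit/overview) and Vercel KV (the limits, storage backend, and key are up to you), you can reject requests in your route handler before calling the model:

```ts
import { Ratelimit } from '@upstash/ratelimit';
import { kv } from '@vercel/kv';

// Allow 10 requests per IP within a sliding 30 second window.
const ratelimit = new Ratelimit({
  redis: kv,
  limiter: Ratelimit.slidingWindow(10, '30 s'),
});

export async function POST(req: Request) {
  const ip = req.headers.get('x-forwarded-for') ?? 'anonymous';
  const { success } = await ratelimit.limit(ip);

  if (!success) {
    return new Response('Too many requests', { status: 429 });
  }

  // Within the limit - call the language model and return its response here (placeholder).
  return new Response('OK');
}
```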
### Firewall
A firewall helps protect your applications and websites from DDoS attacks and unauthorized access.
[Vercel Firewall](https://vercel.com/docs/security/vercel-firewall) is a set of tools and infrastructure created specifically with security in mind. It automatically mitigates DDoS attacks, and Enterprise teams can get further customization for their site, including dedicated support and custom rules for IP blocking.
## Troubleshooting
- Streaming not working ([App Router](/docs/troubleshooting/common-issues/streaming-not-working-on-vercel) / [Pages Router](/docs/troubleshooting/common-issues/streaming-not-working-on-vercel-pages-router))
- Experiencing [Timeouts](/docs/troubleshooting/common-issues/timeout-on-vercel)
---
title: Advanced
description: Learn how to use advanced functionality within the AI SDK and RSC API.
collapsed: true
---
# Advanced
This section covers advanced topics and concepts for the AI SDK and RSC API. Working with LLMs often requires a different mental model compared to traditional software development.
After reading through these concepts, you should have a better understanding of the paradigms behind the AI SDK and RSC API, and how to use them to build AI applications.
---
title: generateText
description: API Reference for generateText.
---
# `generateText()`
Generates text and calls tools for a given prompt using a language model.
It is ideal for non-interactive use cases such as automation tasks where you need to write text (e.g. drafting email or summarizing web pages) and for agents that use tools.
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
const { text } = await generateText({
model: openai('gpt-4-turbo'),
prompt: 'Invent a new holiday and describe its traditions.',
});
console.log(text);
```
To see `generateText` in action, check out [these examples](#examples).
## Import
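`generateText` is imported from the `ai` package:

```ts
import { generateText } from 'ai';
```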
## API Signature
### Parameters
{
name: 'messages',
type: 'Array<CoreMessage> | Array<UIMessage>',
description:
'A list of messages that represent a conversation. Automatically converts UI messages from the useChat hook.',
properties: [
{
type: 'CoreSystemMessage',
parameters: [
{
name: 'role',
type: "'system'",
description: 'The role for the system message.',
},
{
name: 'content',
type: 'string',
description: 'The content of the message.',
},
],
},
{
type: 'CoreUserMessage',
parameters: [
{
name: 'role',
type: "'user'",
description: 'The role for the user message.',
},
{
name: 'content',
type: 'string | Array<TextPart | ImagePart | FilePart>',
description: 'The content of the message.',
properties: [
{
type: 'TextPart',
parameters: [
{
name: 'type',
type: "'text'",
description: 'The type of the message part.',
},
{
name: 'text',
type: 'string',
description: 'The text content of the message part.',
},
],
},
{
type: 'ImagePart',
parameters: [
{
name: 'type',
type: "'image'",
description: 'The type of the message part.',
},
{
name: 'image',
type: 'string | Uint8Array | Buffer | ArrayBuffer | URL',
description:
'The image content of the message part. Strings are either base64-encoded content, base64 data URLs, or http(s) URLs.',
},
{
name: 'mimeType',
type: 'string',
isOptional: true,
description: 'The mime type of the image. Optional.',
},
],
},
{
type: 'FilePart',
parameters: [
{
name: 'type',
type: "'file'",
description: 'The type of the message part.',
},
{
name: 'data',
type: 'string | Uint8Array | Buffer | ArrayBuffer | URL',
description:
'The file content of the message part. Strings are either base64-encoded content, base64 data URLs, or http(s) URLs.',
},
{
name: 'mimeType',
type: 'string',
description: 'The mime type of the file.',
},
],
},
],
},
],
},
{
type: 'CoreAssistantMessage',
parameters: [
{
name: 'role',
type: "'assistant'",
description: 'The role for the assistant message.',
},
{
name: 'content',
type: 'string | Array<TextPart | ToolCallPart>',
description: 'The content of the message.',
properties: [
{
type: 'TextPart',
parameters: [
{
name: 'type',
type: "'text'",
description: 'The type of the message part.',
},
{
name: 'text',
type: 'string',
description: 'The text content of the message part.',
},
],
},
{
type: 'ToolCallPart',
parameters: [
{
name: 'type',
type: "'tool-call'",
description: 'The type of the message part.',
},
{
name: 'toolCallId',
type: 'string',
description: 'The id of the tool call.',
},
{
name: 'toolName',
type: 'string',
description:
'The name of the tool, which typically would be the name of the function.',
},
{
name: 'args',
type: 'object based on zod schema',
description:
'Parameters generated by the model to be used by the tool.',
},
],
},
],
},
],
},
{
type: 'CoreToolMessage',
parameters: [
{
name: 'role',
type: "'tool'",
description: 'The role for the tool message.',
},
{
name: 'content',
type: 'Array<ToolResultPart>',
description: 'The content of the message.',
properties: [
{
type: 'ToolResultPart',
parameters: [
{
name: 'type',
type: "'tool-result'",
description: 'The type of the message part.',
},
{
name: 'toolCallId',
type: 'string',
description:
'The id of the tool call the result corresponds to.',
},
{
name: 'toolName',
type: 'string',
description:
'The name of the tool the result corresponds to.',
},
{
name: 'result',
type: 'unknown',
description:
'The result returned by the tool after execution.',
},
{
name: 'isError',
type: 'boolean',
isOptional: true,
description:
'Whether the result is an error or an error message.',
},
],
},
],
},
],
},
],
},
{
name: 'tools',
type: 'Record<string, CoreTool>',
description:
'Tools that are accessible to and can be called by the model. The model needs to support calling tools.',
properties: [
{
type: 'CoreTool',
parameters: [
{
name: 'description',
isOptional: true,
type: 'string',
description:
'Information about the purpose of the tool including details on how and when it can be used by the model.',
},
{
name: 'parameters',
type: 'Zod Schema | JSON Schema',
description:
'The schema of the input that the tool expects. The language model will use this to generate the input. It is also used to validate the output of the language model. Use descriptions to make the input understandable for the language model. You can either pass in a Zod schema or a JSON schema (using the `jsonSchema` function).',
},
{
name: 'execute',
isOptional: true,
type: 'async (parameters: T, options: ToolExecutionOptions) => RESULT',
description:
'An async function that is called with the arguments from the tool call and produces a result. If not provided, the tool will not be executed automatically.',
properties: [
{
type: 'ToolExecutionOptions',
parameters: [
{
name: 'toolCallId',
type: 'string',
description:
'The ID of the tool call. You can use it e.g. when sending tool-call related information with stream data.',
},
{
name: 'messages',
type: 'CoreMessage[]',
description:
'Messages that were sent to the language model to initiate the response that contained the tool call. The messages do not include the system prompt nor the assistant response that contained the tool call.',
},
{
name: 'abortSignal',
type: 'AbortSignal',
description:
'An optional abort signal that indicates that the overall operation should be aborted.',
},
],
},
],
},
],
},
],
},
{
name: 'toolChoice',
isOptional: true,
type: '"auto" | "none" | "required" | { "type": "tool", "toolName": string }',
description:
'The tool choice setting. It specifies how tools are selected for execution. The default is "auto". "none" disables tool execution. "required" requires tools to be executed. { "type": "tool", "toolName": string } specifies a specific tool to execute.',
},
{
name: 'maxTokens',
type: 'number',
isOptional: true,
description: 'Maximum number of tokens to generate.',
},
{
name: 'temperature',
type: 'number',
isOptional: true,
description:
'Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.',
},
{
name: 'topP',
type: 'number',
isOptional: true,
description:
'Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.',
},
{
name: 'topK',
type: 'number',
isOptional: true,
description:
'Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.',
},
{
name: 'presencePenalty',
type: 'number',
isOptional: true,
description:
'Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.',
},
{
name: 'frequencyPenalty',
type: 'number',
isOptional: true,
description:
'Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.',
},
{
name: 'stopSequences',
type: 'string[]',
isOptional: true,
description:
'Sequences that will stop the generation of the text. If the model generates any of these sequences, it will stop generating further text.',
},
{
name: 'seed',
type: 'number',
isOptional: true,
description:
'The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.',
},
{
name: 'maxRetries',
type: 'number',
isOptional: true,
description:
'Maximum number of retries. Set to 0 to disable retries. Default: 2.',
},
{
name: 'abortSignal',
type: 'AbortSignal',
isOptional: true,
description:
'An optional abort signal that can be used to cancel the call.',
},
{
name: 'headers',
type: 'Record<string, string>',
isOptional: true,
description:
'Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.',
},
{
name: 'maxSteps',
type: 'number',
isOptional: true,
description:
'Maximum number of sequential LLM calls (steps), e.g. when you use tool calls. A maximum number is required to prevent infinite loops in the case of misconfigured tools. By default, it is set to 1.',
},
{
name: 'experimental_continueSteps',
type: 'boolean',
isOptional: true,
description: 'Enable or disable continue steps. Disabled by default.',
},
{
name: 'experimental_telemetry',
type: 'TelemetrySettings',
isOptional: true,
description: 'Telemetry configuration. Experimental feature.',
properties: [
{
type: 'TelemetrySettings',
parameters: [
{
name: 'isEnabled',
type: 'boolean',
isOptional: true,
description:
'Enable or disable telemetry. Disabled by default while experimental.',
},
{
name: 'recordInputs',
type: 'boolean',
isOptional: true,
description:
'Enable or disable input recording. Enabled by default.',
},
{
name: 'recordOutputs',
type: 'boolean',
isOptional: true,
description:
'Enable or disable output recording. Enabled by default.',
},
{
name: 'functionId',
type: 'string',
isOptional: true,
description:
'Identifier for this function. Used to group telemetry data by function.',
},
{
name: 'metadata',
isOptional: true,
type: 'Record<string, string | number | boolean | Array<null | undefined | string> | Array<null | undefined | number> | Array<null | undefined | boolean>>',
description:
'Additional information to include in the telemetry data.',
},
],
},
],
},
{
name: 'experimental_providerMetadata',
type: 'Record<string, Record<string, JSONValue>> | undefined',
isOptional: true,
description:
'Optional metadata from the provider. The outer key is the provider name. The inner values are the metadata. Details depend on the provider.',
},
{
name: 'experimental_activeTools',
type: 'Array<keyof TOOLS> | undefined',
isOptional: true,
description:
'The tools that are currently active. All tools are active by default.',
},
{
name: 'experimental_repairToolCall',
type: '(options: ToolCallRepairOptions) => Promise<LanguageModelV1FunctionToolCall | null>',
isOptional: true,
description:
'A function that attempts to repair a tool call that failed to parse. Return either a repaired tool call or null if the tool call cannot be repaired.',
properties: [
{
type: 'ToolCallRepairOptions',
parameters: [
{
name: 'system',
type: 'string | undefined',
description: 'The system prompt.',
},
{
name: 'messages',
type: 'CoreMessage[]',
description: 'The messages in the current generation step.',
},
{
name: 'toolCall',
type: 'LanguageModelV1FunctionToolCall',
description: 'The tool call that failed to parse.',
},
{
name: 'tools',
type: 'TOOLS',
description: 'The tools that are available.',
},
{
name: 'parameterSchema',
type: '(options: { toolName: string }) => JSONSchema7',
description:
'A function that returns the JSON Schema for a tool.',
},
{
name: 'error',
type: 'NoSuchToolError | InvalidToolArgumentsError',
description:
'The error that occurred while parsing the tool call.',
},
],
},
],
},
{
name: 'experimental_output',
type: 'Output',
isOptional: true,
description: 'Experimental setting for generating structured outputs.',
properties: [
{
type: 'Output',
parameters: [
{
name: 'Output.text()',
type: 'Output',
description: 'Forward text output.',
},
{
name: 'Output.object()',
type: 'Output',
description: 'Generate a JSON object of type OBJECT.',
properties: [
{
type: 'Options',
parameters: [
{
name: 'schema',
type: 'Schema