---
title: AI SDK by Vercel
description: Welcome to the AI SDK documentation!
---

# AI SDK

The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more.

## Why use the AI SDK?

Integrating large language models (LLMs) into applications is complicated and heavily dependent on the specific model provider you use. The AI SDK abstracts those differences away:

- **[AI SDK Core](/docs/ai-sdk-core):** A unified API for generating text, structured objects, and tool calls with LLMs.
- **[AI SDK UI](/docs/ai-sdk-ui):** A set of framework-agnostic hooks for quickly building chat and generative user interfaces.

## Model Providers

The AI SDK supports [multiple model providers](/providers).

## Templates

We've built some [templates](https://vercel.com/templates?type=ai) that include AI SDK integrations for different use cases, providers, and frameworks, grouped into categories such as starter kits, feature exploration, frameworks, generative UI, and security. You can use these templates to get started with your AI-powered application.

## Join our Community

If you have questions about anything related to the AI SDK, you're always welcome to ask our community on [GitHub Discussions](https://github.com/vercel/ai/discussions).

## `llms.txt`

You can access the entire AI SDK documentation in Markdown format at [sdk.vercel.ai/llms.txt](/llms.txt). This can be used to ask any LLM (assuming it has a big enough context window) questions about the AI SDK based on the most up-to-date documentation.

### Example Usage

For instance, to prompt an LLM with questions about the AI SDK:

1. Copy the documentation contents from [sdk.vercel.ai/llms.txt](/llms.txt)
2. Use the following prompt format:

```prompt
Documentation:
{paste documentation here}
---
Based on the above documentation, answer the following:
{your question}
```

---
title: Overview
description: An overview of foundational concepts critical to understanding the AI SDK
---

# Overview

This page is a beginner-friendly introduction to high-level artificial intelligence (AI) concepts. To dive right into implementing the AI SDK, feel free to skip ahead to our [quickstarts](/docs/getting-started) or learn about our [supported models and providers](/docs/foundations/providers-and-models).

The AI SDK standardizes integrating artificial intelligence (AI) models across [supported providers](/docs/foundations/providers-and-models). This enables developers to focus on building great AI applications rather than wasting time on technical details. For example, you can generate text with various models through the same function call; a sketch of this appears in the Providers and Models section below.

To effectively leverage the AI SDK, it helps to familiarize yourself with the following concepts:

## Generative Artificial Intelligence

**Generative artificial intelligence** refers to models that predict and generate various types of outputs (such as text, images, or audio) based on what's statistically likely, pulling from patterns they've learned from their training data. For example:

- Given a photo, a generative model can generate a caption.
- Given an audio file, a generative model can generate a transcription.
- Given a text description, a generative model can generate an image.

## Large Language Models

A **large language model (LLM)** is a subset of generative models focused primarily on **text**. An LLM takes a sequence of words as input and aims to predict the most likely sequence to follow. It assigns probabilities to potential next sequences and then selects one.
The model continues to generate sequences until it meets a specified stopping criterion.

LLMs learn by training on massive collections of written text, which means they will be better suited to some use cases than others. For example, a model trained on GitHub data would understand the probabilities of sequences in source code particularly well.

However, it's crucial to understand LLMs' limitations. When asked about lesser-known or absent information, like the birthday of a personal relative, LLMs might "hallucinate" or make up information. It's essential to consider how well represented the information you need is in the model.

## Embedding Models

An **embedding model** is used to convert complex data (like words or images) into a dense vector (a list of numbers) representation, known as an embedding. Unlike generative models, embedding models do not generate new text or data. Instead, they provide representations of semantic and syntactic relationships between entities that can be used as input for other models or for other natural language processing tasks.

In the next section, you will learn about the difference between model providers and models, and which ones are available in the AI SDK.

---
title: Providers and Models
description: Learn about the providers and models available in the AI SDK.
---

# Providers and Models

Companies such as OpenAI and Anthropic (providers) offer access to a range of large language models (LLMs) with differing strengths and capabilities through their own APIs. Each provider typically has its own unique method for interfacing with its models, complicating the process of switching providers and increasing the risk of vendor lock-in.

To solve these challenges, AI SDK Core offers a standardized approach to interacting with LLMs through a [language model specification](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1) that abstracts differences between providers. This unified interface allows you to switch providers with ease while using the same API everywhere.
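For example, here is a minimal sketch of such a switch. The model IDs are illustrative, and it assumes the `@ai-sdk/openai` and `@ai-sdk/anthropic` packages are installed:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

// The call shape is identical for every provider; only the model changes.
const { text } = await generateText({
  model: openai('gpt-4o'),
  // model: anthropic('claude-3-5-sonnet-20241022'), // one-line provider swap
  prompt: 'Write a haiku about vendor lock-in.',
});
```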
## AI SDK Providers

The AI SDK comes with several providers that you can use to interact with different language models:

- [OpenAI Provider](/providers/ai-sdk-providers/openai) (`@ai-sdk/openai`)
- [Azure OpenAI Provider](/providers/ai-sdk-providers/azure) (`@ai-sdk/azure`)
- [Anthropic Provider](/providers/ai-sdk-providers/anthropic) (`@ai-sdk/anthropic`)
- [Amazon Bedrock Provider](/providers/ai-sdk-providers/amazon-bedrock) (`@ai-sdk/amazon-bedrock`)
- [Google Generative AI Provider](/providers/ai-sdk-providers/google-generative-ai) (`@ai-sdk/google`)
- [Google Vertex Provider](/providers/ai-sdk-providers/google-vertex) (`@ai-sdk/google-vertex`)
- [Mistral Provider](/providers/ai-sdk-providers/mistral) (`@ai-sdk/mistral`)
- [xAI Grok Provider](/providers/ai-sdk-providers/xai) (`@ai-sdk/xai`)
- [Together.ai Provider](/providers/ai-sdk-providers/togetherai) (`@ai-sdk/togetherai`)
- [Cohere Provider](/providers/ai-sdk-providers/cohere) (`@ai-sdk/cohere`)
- [Groq Provider](/providers/ai-sdk-providers/groq) (`@ai-sdk/groq`)

You can also use the OpenAI provider with OpenAI-compatible APIs:

- [Perplexity](/providers/ai-sdk-providers/perplexity)
- [Fireworks](/providers/ai-sdk-providers/fireworks)
- [LM Studio](/providers/openai-compatible-providers/lmstudio)
- [Baseten](/providers/openai-compatible-providers/baseten)

Our [language model specification](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1) is published as an open-source package, which you can use to create [custom providers](/providers/community-providers/custom-providers).

The open-source community has created the following providers:

- [Ollama Provider](/providers/community-providers/ollama) (`ollama-ai-provider`)
- [ChromeAI Provider](/providers/community-providers/chrome-ai) (`chrome-ai`)
- [AnthropicVertex Provider](/providers/community-providers/anthropic-vertex-ai) (`anthropic-vertex-ai`)
- [FriendliAI Provider](/providers/community-providers/friendliai) (`@friendliai/ai-provider`)
- [Portkey Provider](/providers/community-providers/portkey) (`@portkey-ai/vercel-provider`)
- [Cloudflare Workers AI Provider](/providers/community-providers/cloudflare-workers-ai) (`workers-ai-provider`)
- [Crosshatch Provider](/providers/community-providers/crosshatch) (`@crosshatch/ai-provider`)
- [Mixedbread Provider](/providers/community-providers/mixedbread) (`mixedbread-ai-provider`)
- [Voyage AI Provider](/providers/community-providers/voyage-ai) (`voyage-ai-provider`)
- [LLamaCpp Provider](/providers/community-providers/llama-cpp) (`llamacpp-ai-provider`)
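As a sketch of the OpenAI-compatible route listed above: you can point `createOpenAI` at a compatible endpoint. The base URL and model name below are placeholders for your own deployment (LM Studio's local server commonly listens on port 1234):

```ts
import { createOpenAI } from '@ai-sdk/openai';

const lmstudio = createOpenAI({
  baseURL: 'http://localhost:1234/v1', // placeholder: your OpenAI-compatible endpoint
  apiKey: 'not-needed-for-local-servers',
});

// Use it like any other provider model.
const model = lmstudio('your-local-model-name');
```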
## Model Capabilities

The AI providers support different language models with various capabilities. Here are popular models and where to find them (capability details such as image input, object generation, tool usage, and tool streaming are listed on each provider's documentation page):

| Provider                                                                 | Models                                                                                     |
| ------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------ |
| [OpenAI](/providers/ai-sdk-providers/openai)                             | `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `gpt-4`, `o1-preview`, `o1-mini`                   |
| [Anthropic](/providers/ai-sdk-providers/anthropic)                       | `claude-3-5-sonnet-20241022`, `claude-3-5-sonnet-20240620`, `claude-3-5-haiku-20241022`    |
| [Mistral](/providers/ai-sdk-providers/mistral)                           | `pixtral-large-latest`, `mistral-large-latest`, `mistral-small-latest`, `pixtral-12b-2409` |
| [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai) | `gemini-2.0-flash-exp`, `gemini-1.5-flash`, `gemini-1.5-pro`                               |
| [Google Vertex](/providers/ai-sdk-providers/google-vertex)               | `gemini-1.5-flash`, `gemini-1.5-pro`                                                       |
| [xAI Grok](/providers/ai-sdk-providers/xai)                              | `grok-beta`, `grok-vision-beta`                                                            |
| [Groq](/providers/ai-sdk-providers/groq)                                 | `llama-3.3-70b-versatile`, `llama-3.1-8b-instant`, `mixtral-8x7b-32768`, `gemma2-9b-it`    |

This table is not exhaustive. Additional models can be found in the provider documentation pages and on the provider websites.

---
title: Prompts
description: Learn about the Prompt structure used in the AI SDK.
---

# Prompts

Prompts are instructions that you give a [large language model (LLM)](/docs/foundations/overview#large-language-models) to tell it what to do. It's like when you ask someone for directions: the clearer your question, the better the directions you'll get.

Many LLM providers offer complex interfaces for specifying prompts, involving different roles and message types. While these interfaces are powerful, they can be hard to use and understand. To simplify prompting, the AI SDK supports text, message, and system prompts.

## Text Prompts

Text prompts are strings. They are ideal for simple generation use cases, e.g. repeatedly generating content for variants of the same prompt text. You can set text prompts using the `prompt` property made available by AI SDK functions like [`streamText`](/docs/reference/ai-sdk-core/stream-text) or [`generateObject`](/docs/reference/ai-sdk-core/generate-object). You can structure the text in any way and inject variables, e.g. using a template literal.
```ts highlight="3"
const result = await generateText({
  model: yourModel,
  prompt: 'Invent a new holiday and describe its traditions.',
});
```

You can also use template literals to provide dynamic data to your prompt.

```ts highlight="3-5"
const result = await generateText({
  model: yourModel,
  prompt:
    `I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
    `Please suggest the best tourist activities for me to do.`,
});
```

## System Prompts

System prompts are the initial set of instructions given to models that help guide and constrain the models' behaviors and responses. You can set system prompts using the `system` property. System prompts work with both the `prompt` and the `messages` properties.

```ts highlight="3-6"
const result = await generateText({
  model: yourModel,
  system:
    `You help planning travel itineraries. ` +
    `Respond to the users' request with a list ` +
    `of the best stops to make in their destination.`,
  prompt:
    `I am planning a trip to ${destination} for ${lengthOfStay} days. ` +
    `Please suggest the best tourist activities for me to do.`,
});
```

When you use a message prompt, you can also use system messages instead of a system prompt.

## Message Prompts

A message prompt is an array of user, assistant, and tool messages. Message prompts are great for chat interfaces and more complex, multi-modal prompts. You can use the `messages` property to set message prompts.

Each message has a `role` and a `content` property. The content can either be text (for user and assistant messages), or an array of relevant parts (data) for that message type.

```ts highlight="3-7"
const result = await streamUI({
  model: yourModel,
  messages: [
    { role: 'user', content: 'Hi!' },
    { role: 'assistant', content: 'Hello, how can I help?' },
    { role: 'user', content: 'Where can I buy the best Currywurst in Berlin?' },
  ],
});
```

Instead of sending text in the `content` property, you can send an array of parts that includes a mix of text and other content parts.

Not all language models support all message and content types. For example, some models might not be capable of handling multi-modal inputs or tool messages. [Learn more about the capabilities of select models](./providers-and-models#model-capabilities).

### User Messages

#### Text Parts

Text content is the most common type of content. It is a string that is passed to the model. If you only need to send text content in a message, the `content` property can be a string, but you can also use it to send an array of content parts.

```ts highlight="7"
const result = await generateText({
  model: yourModel,
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Where can I buy the best Currywurst in Berlin?',
        },
      ],
    },
  ],
});
```

#### Image Parts

User messages can include image parts. An image can be one of the following:

- base64-encoded image:
  - `string` with base-64 encoded content
  - data URL `string`, e.g. `data:image/png;base64,...`
- binary image:
  - `ArrayBuffer`
  - `Uint8Array`
  - `Buffer`
- URL:
  - http(s) URL `string`, e.g. `https://example.com/image.png`
  - `URL` object, e.g. `new URL('https://example.com/image.png')`

##### Example: Binary image (Buffer)

```ts highlight="8-11"
const result = await generateText({
  model,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe the image in detail.' },
        {
          type: 'image',
          image: fs.readFileSync('./data/comic-cat.png'),
        },
      ],
    },
  ],
});
```

##### Example: Base-64 encoded image (string)

```ts highlight="8-11"
const result = await generateText({
  model: yourModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe the image in detail.' },
        {
          type: 'image',
          image: fs.readFileSync('./data/comic-cat.png').toString('base64'),
        },
      ],
    },
  ],
});
```

##### Example: Image URL (string)

```ts highlight="8-12"
const result = await generateText({
  model: yourModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe the image in detail.' },
        {
          type: 'image',
          image:
            'https://github.com/vercel/ai/blob/main/examples/ai-core/data/comic-cat.png?raw=true',
        },
      ],
    },
  ],
});
```

#### File Parts

Only a few providers and models currently support file parts: [Google Generative AI](/providers/ai-sdk-providers/google-generative-ai), [Google Vertex AI](/providers/ai-sdk-providers/google-vertex), [OpenAI](/providers/ai-sdk-providers/openai) (for `wav` and `mp3` audio with `gpt-4o-audio-preview`), and [Anthropic](/providers/ai-sdk-providers/anthropic) (for `pdf`).

User messages can include file parts. A file can be one of the following:

- base64-encoded file:
  - `string` with base-64 encoded content
  - data URL `string`, e.g. `data:image/png;base64,...`
- binary data:
  - `ArrayBuffer`
  - `Uint8Array`
  - `Buffer`
- URL:
  - http(s) URL `string`, e.g. `https://example.com/some.pdf`
  - `URL` object, e.g. `new URL('https://example.com/some.pdf')`

You need to specify the MIME type of the file you are sending.

##### Example: PDF file from Buffer

```ts highlight="12-14"
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

const result = await generateText({
  model: google('gemini-1.5-flash'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is the file about?' },
        {
          type: 'file',
          mimeType: 'application/pdf',
          data: fs.readFileSync('./data/example.pdf'),
        },
      ],
    },
  ],
});
```

##### Example: mp3 audio file from Buffer

```ts highlight="12-14"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o-audio-preview'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is the audio saying?' },
        {
          type: 'file',
          mimeType: 'audio/mpeg',
          data: fs.readFileSync('./data/galileo.mp3'),
        },
      ],
    },
  ],
});
```

### Assistant Messages

Assistant messages are messages that have a role of `assistant`. They are typically previous responses from the assistant and can contain text and tool call parts.

#### Example: Assistant message with text

```ts highlight="5"
const result = await generateText({
  model: yourModel,
  messages: [
    { role: 'user', content: 'Hi!' },
    { role: 'assistant', content: 'Hello, how can I help?' },
  ],
});
```

#### Example: Assistant message with tool call

```ts highlight="5-14"
const result = await generateText({
  model: yourModel,
  messages: [
    { role: 'user', content: 'How many calories are in this block of cheese?' },
    {
      role: 'assistant',
      content: [
        {
          type: 'tool-call',
          toolCallId: '12345',
          toolName: 'get-nutrition-data',
          args: { cheese: 'Roquefort' },
        },
      ],
    },
  ],
});
```

### Tool Messages

[Tools](/docs/foundations/tools) (also known as function calling) are programs that you can provide an LLM to extend its built-in functionality. This can be anything from calling an external API to calling functions within your UI. Learn more about tools in [the next section](/docs/foundations/tools).
For models that support [tool](/docs/foundations/tools) calls, assistant messages can contain tool call parts, and tool messages can contain tool result parts. A single assistant message can call multiple tools, and a single tool message can contain multiple tool results. ```ts highlight="14-42" const result = await generateText({ model: yourModel, messages: [ { role: 'user', content: [ { type: 'text', text: 'How many calories are in this block of cheese?', }, { type: 'image', image: fs.readFileSync('./data/roquefort.jpg') }, ], }, { role: 'assistant', content: [ { type: 'tool-call', toolCallId: '12345', toolName: 'get-nutrition-data', args: { cheese: 'Roquefort' }, }, // there could be more tool calls here (parallel calling) ], }, { role: 'tool', content: [ { type: 'tool-result', toolCallId: '12345', // needs to match the tool call id toolName: 'get-nutrition-data', result: { name: 'Cheese, roquefort', calories: 369, fat: 31, protein: 22, }, }, // there could be more tool results here (parallel calling) ], }, ], }); ``` #### Multi-modal Tool Results Multi-part tool results are experimental and only supported by Anthropic. Tool results can be multi-part and multi-modal, e.g. a text and an image. You can use the `experimental_content` property on tool parts to specify multi-part tool results. ```ts highlight="20-32" const result = await generateText({ model: yourModel, messages: [ // ... { role: 'tool', content: [ { type: 'tool-result', toolCallId: '12345', // needs to match the tool call id toolName: 'get-nutrition-data', // for models that do not support multi-part tool results, // you can include a regular result part: result: { name: 'Cheese, roquefort', calories: 369, fat: 31, protein: 22, }, // for models that support multi-part tool results, // you can include a multi-part content part: content: [ { type: 'text', text: 'Here is an image of the nutrition data for the cheese:', }, { type: 'image', data: fs.readFileSync('./data/roquefort-nutrition-data.png'), mimeType: 'image/png', }, ], }, ], }, ], }); ``` ### System Messages System messages are messages that are sent to the model before the user messages to guide the assistant's behavior. You can alternatively use the `system` property. ```ts highlight="4" const result = await generateText({ model: yourModel, messages: [ { role: 'system', content: 'You help planning travel itineraries.' }, { role: 'user', content: 'I am planning a trip to Berlin for 3 days. Please suggest the best tourist activities for me to do.', }, ], }); ``` --- title: Tools description: Learn about tools with the AI SDK. --- # Tools While [large language models (LLMs)](/docs/foundations/overview#large-language-models) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response. For example, when you ask an LLM for the "weather in London", and there is a weather tool available, it could call a tool with London as the argument. The tool would then fetch the weather data and return it to the LLM. The LLM can then use this information in its response. ## What is a tool? A tool is an object that can be called by the model to perform a specific task. 
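For instance, here is a minimal sketch of a tool passed to `generateText`. The tool name, schema, and canned lookup are illustrative placeholders, and the `description`, `parameters`, and `execute` properties are explained below:

```ts
import { generateText, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: yourModel, // any provider model
  tools: {
    weather: tool({
      // influences when the model picks this tool
      description: 'Get the weather in a location',
      // schema for the arguments; also used to validate the model's tool call
      parameters: z.object({ location: z.string() }),
      // runs automatically with the arguments from the tool call
      execute: async ({ location }) => ({ location, temperature: 72 }),
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});
```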
You can use tools with [`generateText`](/docs/reference/ai-sdk-core/generate-text) and [`streamText`](/docs/reference/ai-sdk-core/stream-text) by passing one or more tools to the `tools` parameter. A tool consists of three properties:

- **`description`**: An optional description of the tool that can influence when the tool is picked.
- **`parameters`**: A [Zod schema](/docs/foundations/tools#schema-specification-and-validation-with-zod) or a [JSON schema](/docs/reference/ai-sdk-core/json-schema) that defines the parameters. The schema is consumed by the LLM and also used to validate the LLM tool calls.
- **`execute`**: An optional async function that is called with the arguments from the tool call.

`streamUI` uses UI generator tools with a `generate` function that can return React components.

If the LLM decides to use a tool, it will generate a tool call. Tools with an `execute` function are run automatically when these calls are generated. The results of the tool calls are returned using tool result objects. You can automatically pass tool results back to the LLM using [multi-step calls](/docs/ai-sdk-core/tools-and-tool-calling#multi-step-calls) with `streamText` and `generateText`.

## Schemas

Schemas are used to define the parameters for tools and to validate the [tool calls](/docs/ai-sdk-core/tools-and-tool-calling). The AI SDK supports both raw JSON schemas (using the `jsonSchema` function) and [Zod](https://zod.dev/) schemas. [Zod](https://zod.dev/) is the most popular JavaScript schema validation library. You can install it with `npm install zod` (or the equivalent command for your package manager).

You can then specify a Zod schema, for example:

```ts
import z from 'zod';

const recipeSchema = z.object({
  recipe: z.object({
    name: z.string(),
    ingredients: z.array(
      z.object({
        name: z.string(),
        amount: z.string(),
      }),
    ),
    steps: z.array(z.string()),
  }),
});
```

You can also use schemas for structured output generation with [`generateObject`](/docs/reference/ai-sdk-core/generate-object) and [`streamObject`](/docs/reference/ai-sdk-core/stream-object), as sketched below.
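A minimal sketch of structured output generation, reusing the `recipeSchema` above (`yourModel` stands in for any provider model):

```ts
import { generateObject } from 'ai';

const { object } = await generateObject({
  model: yourModel,
  schema: recipeSchema,
  prompt: 'Generate a lasagna recipe.',
});

// `object` is typed and validated against recipeSchema.
console.log(object.recipe.name);
```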
## Toolkits

When you work with tools, you typically need a mix of application-specific tools and general-purpose tools. Several providers offer pre-built tools as **toolkits** that you can use out of the box:

- **[agentic](https://github.com/transitive-bullshit/agentic)** - A collection of 20+ tools. Most tools connect to external APIs such as [Exa](https://exa.ai/) or [E2B](https://e2b.dev/).
- **[browserbase](https://github.com/browserbase/js-sdk?tab=readme-ov-file#vercel-ai-sdk-integration)** - Browser tool that runs a headless browser.
- **[Stripe agent tools](https://docs.stripe.com/agents)** - Tools for interacting with Stripe.
- **[Toolhouse](https://docs.toolhouse.ai/toolhouse/using-vercel-ai)** - AI function-calling in 3 lines of code for over 25 different actions.

Do you have open-source tools or tool libraries that are compatible with the AI SDK? Please [file a pull request](https://github.com/vercel/ai/pulls) to add them to this list.

## Learn more

The AI SDK Core [Tool Calling](/docs/ai-sdk-core/tools-and-tool-calling) and [Agents](/docs/ai-sdk-core/agents) documentation has more information about tools and tool calling.

---
title: Streaming
description: Why use streaming for AI applications?
---

# Streaming

Streaming conversational text UIs (like ChatGPT) have gained massive popularity over the past few months. This section explores the benefits and drawbacks of streaming and blocking interfaces.

[Large language models (LLMs)](/docs/foundations/overview#large-language-models) are extremely powerful. However, when generating long outputs, they can be very slow compared to the latency you're likely used to. If you try to build a traditional blocking UI, your users might easily find themselves staring at loading spinners for 5, 10, or even 40 seconds waiting for the entire LLM response to be generated. This can lead to a poor user experience, especially in conversational applications like chatbots. Streaming UIs can help mitigate this issue by **displaying parts of the response as they become available**.
## Real-world Examples

Here are two examples that illustrate how streaming UIs can improve user experiences in a real-world setting: the first uses a blocking UI, while the second uses a streaming UI.

### Blocking UI

### Streaming UI

In this comparison, the streaming UI starts displaying the response much faster than the blocking UI. The blocking UI has to wait for the entire response to be generated before it can display anything, while the streaming UI can display parts of the response as they become available.

While streaming interfaces can greatly enhance user experiences, especially with larger language models, they aren't always necessary or beneficial. If you can achieve your desired functionality using a smaller, faster model without resorting to streaming, this route can often lead to simpler and more manageable development processes.

However, regardless of the speed of your model, the AI SDK is designed to make implementing streaming UIs as simple as possible. In the example below, we stream text generation from OpenAI's `gpt-4-turbo` in under 10 lines of code using the SDK's [`streamText`](/docs/reference/ai-sdk-core/stream-text) function:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const { textStream } = streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Write a poem about embedding models.',
});

for await (const textPart of textStream) {
  console.log(textPart);
}
```

For an introduction to streaming UIs and the AI SDK, check out our [Getting Started guides](/docs/getting-started).

---
title: Foundations
description: A section that covers foundational knowledge around LLMs and concepts crucial to the AI SDK
---

# Foundations

---
title: Navigating the Library
description: Learn how to navigate the AI SDK.
---

# Navigating the Library

The AI SDK is a powerful toolkit for building AI applications. This page will help you pick the right tools for your requirements.

Let's start with a quick overview of the AI SDK, which comprises three parts:

- **[AI SDK Core](/docs/ai-sdk-core/overview):** A unified, provider-agnostic API for generating text, structured objects, and tool calls with LLMs.
- **[AI SDK UI](/docs/ai-sdk-ui/overview):** A set of framework-agnostic hooks for building chat and generative user interfaces.
- **[AI SDK RSC](/docs/ai-sdk-rsc/overview):** Stream generative user interfaces with React Server Components (RSC). Development is currently experimental and we recommend using [AI SDK UI](/docs/ai-sdk-ui/overview).

## Choosing the Right Tool for Your Environment

When deciding which part of the AI SDK to use, your first consideration should be the environment and existing stack you are working with. Different components of the SDK are tailored to specific frameworks and environments.

| Library                                   | Purpose                                                                                                                                                                                                  | Environment Compatibility                                              |
| ----------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------- |
| [AI SDK Core](/docs/ai-sdk-core/overview) | Call any LLM with a unified API (e.g. [generateText](/docs/reference/ai-sdk-core/generate-text) and [generateObject](/docs/reference/ai-sdk-core/generate-object))                                        | Any JS environment (e.g. Node.js, Deno, Browser)                        |
| [AI SDK UI](/docs/ai-sdk-ui/overview)     | Build streaming chat and generative UIs (e.g. [useChat](/docs/reference/ai-sdk-ui/use-chat))                                                                                                              | React & Next.js, Vue & Nuxt, Svelte & SvelteKit, Solid.js & SolidStart  |
| [AI SDK RSC](/docs/ai-sdk-rsc/overview)   | Stream generative UIs from Server to Client (e.g. [streamUI](/docs/reference/ai-sdk-rsc/stream-ui)). Development is currently experimental and we recommend using [AI SDK UI](/docs/ai-sdk-ui/overview). | Any framework that supports React Server Components (e.g. Next.js)      |

## Environment Compatibility

These tools have been designed to work seamlessly with each other, and it's likely that you will be using them together. In short: AI SDK Core works in any JavaScript environment, including plain Node.js and Deno, where it is the only part you need; AI SDK UI targets React & Next.js (both the Pages Router and the App Router), Vue & Nuxt, Svelte & SvelteKit, and Solid.js & SolidStart; and AI SDK RSC requires a framework that supports React Server Components, such as the Next.js App Router.

## When to use AI SDK UI

AI SDK UI provides a set of framework-agnostic hooks for quickly building **production-ready AI-native applications**. It offers:

- Full support for streaming chat and client-side generative UI
- Utilities for handling common AI interaction patterns (i.e. chat, completion, assistant)
- Production-tested reliability and performance
- Compatibility across popular frameworks

## AI SDK UI Framework Compatibility

AI SDK UI supports the following frameworks: [React](https://react.dev/), [Svelte](https://svelte.dev/), [Vue.js](https://vuejs.org/), and [SolidJS](https://www.solidjs.com/). The hooks ([useChat](/docs/reference/ai-sdk-ui/use-chat), including its tool calling and attachments features, [useCompletion](/docs/reference/ai-sdk-ui/use-completion), [useObject](/docs/reference/ai-sdk-ui/use-object), and [useAssistant](/docs/reference/ai-sdk-ui/use-assistant)) are fully supported in React; support in the other frameworks varies by function, and each hook's reference page lists the frameworks it covers. [Contributions](https://github.com/vercel/ai/blob/main/CONTRIBUTING.md) are welcome to implement missing features for non-React frameworks.

## When to use AI SDK RSC

AI SDK RSC is currently experimental. We recommend using [AI SDK UI](/docs/ai-sdk-ui/overview) for production. For guidance on migrating from RSC to UI, see our [migration guide](/docs/ai-sdk-rsc/migrating-to-ui).

[React Server Components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) (RSCs) provide a new approach to building React applications that allows components to render on the server, fetch data directly, and stream the results to the client, reducing bundle size and improving performance. They also introduce a new way to call server-side functions from anywhere in your application called [Server Actions](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations).
AI SDK RSC provides a number of utilities that allow you to stream values and UI directly from the server to the client. However, **it's important to be aware of current limitations**:

- **Cancellation**: currently, it is not possible to abort a stream using Server Actions. This will be improved in future releases of React and Next.js.
- **Increased Data Transfer**: using [`createStreamableUI`](/docs/reference/ai-sdk-rsc/create-streamable-ui) can lead to quadratic data transfer (quadratic in the length of generated text). You can avoid this by using [`createStreamableValue`](/docs/reference/ai-sdk-rsc/create-streamable-value) instead and rendering the component client-side.
- **Re-mounting Issue During Streaming**: when using `createStreamableUI`, components re-mount on `.done()`, causing [flickering](https://github.com/vercel/ai/issues/2232).

Given these limitations, **we recommend using [AI SDK UI](/docs/ai-sdk-ui/overview) for production applications**.

---
title: Next.js App Router
description: Welcome to the AI SDK quickstart guide for Next.js App Router!
---

# Next.js App Router Quickstart

In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.

Check out [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming) if you haven't heard of them.

## Prerequisites

To follow this quickstart, you'll need:

- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key. If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.

## Create Your Application

Start by creating a new Next.js application. The command below will create a new directory named `my-ai-app` and set up a basic Next.js application inside it.
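A sketch of the command, assuming pnpm as listed in the prerequisites (any package manager's `create next-app` invocation works):

```bash
pnpm create next-app@latest my-ai-app
```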
Be sure to select yes when prompted to use the App Router. If you are looking for the Next.js Pages Router quickstart guide, you can find it [here](/docs/getting-started/nextjs-pages-router).
Navigate to the newly created directory (`cd my-ai-app`).

### Install dependencies

Install `ai` and `@ai-sdk/openai`, the AI SDK package and the AI SDK's [OpenAI provider](/providers/ai-sdk-providers/openai) respectively.

The AI SDK is designed to be a unified interface to interact with any large language model. This means that you can change models and providers with just one line of code! Learn more about [available providers](/providers) and [building custom providers](/providers/community-providers/custom-providers) in the [providers](/providers) section.
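A sketch of the install command, again assuming pnpm:

```bash
pnpm add ai @ai-sdk/openai
```

The tool-calling section later in this guide also imports `zod`; add it with `pnpm add zod` if it is not already installed.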
Make sure you are using `ai` version 3.1 or higher.

### Configure OpenAI API key

Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.

Edit the `.env.local` file:

```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your actual OpenAI API key. The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` environment variable.

## Create a Route Handler

Create a route handler, `app/api/chat/route.ts`, and add the following code:

```tsx filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
```

Let's take a look at what is happening in this code:

1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function, which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.

This Route Handler creates a POST request endpoint at `/api/chat`.

## Wire up the UI

Now that you have a Route Handler that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).

Update your root page (`app/page.tsx`) with the following code to show a list of chat messages and provide a user message input:

```tsx filename="app/page.tsx"
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```

Make sure you add the `"use client"` directive to the top of your file. This allows you to add interactivity with JavaScript.

This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission. The `useChat` hook provides multiple utility functions and state variables:

- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleInputChange` and `handleSubmit` - functions to handle user interactions (typing into the input field and submitting the form, respectively).

## Running Your Application

With that, you have built everything you need for your chatbot! To start your application, use the command `pnpm run dev`.

Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Next.js.

## Enhance Your Chatbot with Tools

While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.

Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.

For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.

Let's enhance your chatbot by adding a simple weather tool.

### Update Your Route Handler

Modify your `app/api/chat/route.ts` file to include the new weather tool:

```tsx filename="app/api/chat/route.ts" highlight="2,13-27"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```

In this updated code:

1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
   - Has a description that helps the model understand when to use it.
   - Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
   - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server, so you can fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about.
When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be run automatically, and you can access the results via the `toolInvocations` property on the message object.

Try asking something like "What's the weather in New York?" and see how the model uses the new tool.

Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.

### Update the UI

To display the tool invocations in your UI, update your `app/page.tsx` file:

```tsx filename="app/page.tsx" highlight="12-16"
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.toolInvocations ? (
            <pre>{JSON.stringify(m.toolInvocations, null, 2)}</pre>
          ) : (
            <p>{m.content}</p>
          )}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```

With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before. Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.

## Enabling Multi-Step Tool Calls

You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.

To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.

### Update Your Client-Side Code

Modify your `app/page.tsx` file to include the `maxSteps` option:

```tsx filename="app/page.tsx" highlight="7"
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5,
  });

  // ... rest of your component code
}
```

Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.

By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.

### Update Your Route Handler

Update your `app/api/chat/route.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:

```tsx filename="app/api/chat/route.ts" highlight="27-40"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
      convertFahrenheitToCelsius: tool({
        description: 'Convert a temperature in fahrenheit to celsius',
        parameters: z.object({
          temperature: z
            .number()
            .describe('The temperature in fahrenheit to convert'),
        }),
        execute: async ({ temperature }) => {
          const celsius = Math.round((temperature - 32) * (5 / 9));
          return {
            celsius,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```

Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:

1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.

This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.

## Where to Next?

You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:

- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).

---
title: Next.js Pages Router
description: Welcome to the AI SDK quickstart guide for Next.js Pages Router!
---

# Next.js Pages Router Quickstart

The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.

In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.

If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.

## Prerequisites

To follow this quickstart, you'll need:

- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key. If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.

## Setup Your Application

Start by creating a new Next.js application (e.g. with `pnpm create next-app@latest my-ai-app`). This command will create a new directory named `my-ai-app` and set up a basic Next.js application inside it. Be sure to select no when prompted to use the App Router. If you are looking for the Next.js App Router quickstart guide, you can find it [here](/docs/getting-started/nextjs-app-router).

Navigate to the newly created directory (`cd my-ai-app`).

### Install dependencies

Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider.

The AI SDK is designed to be a unified interface to interact with any large language model. This means that you can change models and providers with just one line of code! Learn more about [available providers](/providers) and [building custom providers](/providers/community-providers/custom-providers) in the [providers](/providers) section.
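A sketch of the install command, assuming pnpm as in the prerequisites:

```bash
pnpm add ai @ai-sdk/openai
```

The tool-calling section later in this guide also imports `zod`; add it with `pnpm add zod` if it is not already installed.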
Make sure you are using `ai` version 3.1 or higher.

### Configure OpenAI API Key

Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.

Edit the `.env.local` file:

```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your actual OpenAI API key. The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` environment variable.

## Create a Route Handler

As long as you are on Next.js 13+, you can use Route Handlers (from the App Router) alongside the Pages Router. Using a Route Handler is recommended because it lets you use the Web API interface/signature and better supports streaming.

Create a Route Handler (`app/api/chat/route.ts`) and add the following code:

```tsx filename="app/api/chat/route.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
```

Let's take a look at what is happening in this code:

1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function, which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.

This Route Handler creates a POST request endpoint at `/api/chat`.

## Wire up the UI

Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).

Update your root page (`pages/index.tsx`) with the following code to show a list of chat messages and provide a user message input:

```tsx filename="pages/index.tsx"
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```

This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission. The `useChat` hook provides multiple utility functions and state variables:

- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleInputChange` and `handleSubmit` - functions to handle user interactions (typing into the input field and submitting the form, respectively).
- `isLoading` - a boolean that indicates whether the API request is in progress.

## Running Your Application

With that, you have built everything you need for your chatbot! To start your application, use the command `pnpm run dev`.

Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Next.js.

## Enhance Your Chatbot with Tools

While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in.

Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.

For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.

Let's enhance your chatbot by adding a simple weather tool.

### Update Your Route Handler

Modify your `app/api/chat/route.ts` file to include the new weather tool:

```tsx filename="app/api/chat/route.ts" highlight="2,13-27"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```

In this updated code:

1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
   - Has a description that helps the model understand when to use it.
   - Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
   - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server, so you can fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about.
When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be run automatically, and you can access the results via the `toolInvocations` property on the message object.

Try asking something like "What's the weather in New York?" and see how the model uses the new tool.

Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.

### Update the UI

To display the tool invocations in your UI, update your `pages/index.tsx` file:

```tsx filename="pages/index.tsx" highlight="11-15"
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();
  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.toolInvocations ? (
            <pre>{JSON.stringify(m.toolInvocations, null, 2)}</pre>
          ) : (
            <p>{m.content}</p>
          )}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```

With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before. Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.

## Enabling Multi-Step Tool Calls

You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.

To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.

### Update Your Client-Side Code

Modify your `pages/index.tsx` file to include the `maxSteps` option:

```tsx filename="pages/index.tsx" highlight="6"
import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5,
  });

  // ... rest of your component code
}
```

Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.

By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.

### Update Your Route Handler

Update your `app/api/chat/route.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:

```tsx filename="app/api/chat/route.ts" highlight="27-40"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
      convertFahrenheitToCelsius: tool({
        description: 'Convert a temperature in fahrenheit to celsius',
        parameters: z.object({
          temperature: z
            .number()
            .describe('The temperature in fahrenheit to convert'),
        }),
        execute: async ({ temperature }) => {
          const celsius = Math.round((temperature - 32) * (5 / 9));
          return {
            celsius,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```

Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:

1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.

This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real time. Tools bridge the gap between the model's knowledge cutoff and current information.

## Where to Next?

You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:

- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).

---
title: Svelte
description: Welcome to the AI SDK quickstart guide for Svelte!
---

# Svelte Quickstart

The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.

In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.

If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.

## Prerequisites

To follow this quickstart, you'll need:

- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.

If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.

## Setup Your Application

Start by creating a new SvelteKit application. This command will create a new directory named `my-ai-app` and set up a basic SvelteKit application inside it. Navigate to the newly created directory.

### Install Dependencies

Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider.

The AI SDK is designed to be a unified interface to interact with any large language model. This means that you can change models and providers with just one line of code! Learn more about [available providers](/providers) and [building custom providers](/providers/community-providers/custom-providers) in the [providers](/providers) section. Typical commands for both steps are sketched below.
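If you're following along with pnpm, the commands for these two steps typically look like the following sketch (the exact SvelteKit scaffolder and its prompts may vary by version):

```bash
# Create a new SvelteKit app in a my-ai-app directory (assumed scaffold command)
pnpm create svelte@latest my-ai-app

# Move into the project directory
cd my-ai-app

# Install the AI SDK and its OpenAI provider
pnpm add ai @ai-sdk/openai
```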
Make sure you are using `ai` version 3.1 or higher.

### Configure OpenAI API Key

Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.

Edit the `.env.local` file:

```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your actual OpenAI API key.

The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` environment variable.

## Create an API route

Create a SvelteKit Endpoint, `src/routes/api/chat/+server.ts`, and add the following code:

```tsx filename="src/routes/api/chat/+server.ts"
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';
import type { RequestHandler } from './$types';

import { env } from '$env/dynamic/private';

const openai = createOpenAI({
  apiKey: env.OPENAI_API_KEY ?? '',
});

export const POST = (async ({ request }) => {
  const { messages } = await request.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}) satisfies RequestHandler;
```

You may see an error with the `./$types` import. This will be resolved as soon as you run the dev server.

Let's take a look at what is happening in this code:

1. Create an OpenAI provider instance with the `createOpenAI` function from the `@ai-sdk/openai` package.
2. Define a `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and will provide the chatbot with the necessary context to make the next generation.
3. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (defined in step 1) and `messages` (defined in step 2). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
4. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
5. Return the result to the client to stream the response.

## Wire up the UI

Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).

Update your root page (`src/routes/+page.svelte`) with the following code to show a list of chat messages and provide a user message input:

```svelte filename="src/routes/+page.svelte"
<script lang="ts">
  // On AI SDK 3.x this import may be 'ai/svelte' instead of '@ai-sdk/svelte'.
  import { useChat } from '@ai-sdk/svelte';

  const { input, handleSubmit, messages } = useChat();
</script>

<main>
  <ul>
    {#each $messages as message}
      <li>{message.role}: {message.content}</li>
    {/each}
  </ul>
  <form on:submit={handleSubmit}>
    <input bind:value={$input} />
    <button type="submit">Send</button>
  </form>
</main>
```

This page utilizes the `useChat` hook, which will, by default, use the `POST` route handler you created earlier. The hook provides functions and state for handling user input and form submission.

The `useChat` hook provides multiple utility functions and state variables:

- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleSubmit` - function to handle form submission.

## Running Your Application

With that, you have built everything you need for your chatbot! To start your application, run `pnpm run dev`.

Head to your browser and open http://localhost:5173. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real time! The AI SDK makes it fast and easy to build AI chat interfaces with Svelte.

## Enhance Your Chatbot with Tools

While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in. Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.

For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.

Let's enhance your chatbot by adding a simple weather tool.

### Update Your API Route

Modify your `src/routes/api/chat/+server.ts` file to include the new weather tool:

```tsx filename="src/routes/api/chat/+server.ts" highlight="2,4,18-32"
import { createOpenAI } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import type { RequestHandler } from './$types';
import { z } from 'zod';

import { env } from '$env/dynamic/private';

const openai = createOpenAI({
  apiKey: env.OPENAI_API_KEY ?? '',
});

export const POST = (async ({ request }) => {
  const { messages } = await request.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}) satisfies RequestHandler;
```

In this updated code:

1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
   - Has a description that helps the model understand when to use it.
   - Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
   - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about.
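In a real application, the `execute` function is where you would call an actual weather service. As a rough sketch (the `api.example-weather.com` endpoint and its response shape are hypothetical placeholders, not a real API):

```tsx
import { tool } from 'ai';
import { z } from 'zod';

// Sketch of a weather tool backed by a real HTTP API.
// The endpoint and response shape below are hypothetical placeholders.
const weather = tool({
  description: 'Get the current weather in a location (fahrenheit)',
  parameters: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  execute: async ({ location }) => {
    const res = await fetch(
      `https://api.example-weather.com/v1/current?q=${encodeURIComponent(location)}`,
    );
    if (!res.ok) {
      // Returning an error payload lets the model explain the failure to the user.
      return { location, error: 'Weather service unavailable' };
    }
    const data = (await res.json()) as { tempF: number }; // assumed shape
    return { location, temperature: data.tempF };
  },
});
```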
When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and you can access the results via the `toolInvocations` property on the message object.

Try asking something like "What's the weather in New York?" and see how the model uses the new tool.

Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.

### Update the UI

To display the tool invocations in your UI, update your `src/routes/+page.svelte` file:

```svelte filename="src/routes/+page.svelte"
<script lang="ts">
  // On AI SDK 3.x this import may be 'ai/svelte' instead of '@ai-sdk/svelte'.
  import { useChat } from '@ai-sdk/svelte';

  const { input, handleSubmit, messages } = useChat();
</script>

<main>
  <ul>
    {#each $messages as message}
      <li>
        {message.role}:
        {#if message.toolInvocations}
          <pre>{JSON.stringify(message.toolInvocations, null, 2)}</pre>
        {:else}
          {message.content}
        {/if}
      </li>
    {/each}
  </ul>
  <form on:submit={handleSubmit}>
    <input bind:value={$input} />
    <button type="submit">Send</button>
  </form>
</main>
```

With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before. Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.

## Enabling Multi-Step Tool Calls

You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.

To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.

### Update Your UI

Modify your `src/routes/+page.svelte` file to include the `maxSteps` option:

```svelte filename="src/routes/+page.svelte" highlight="4"
<script lang="ts">
  import { useChat } from '@ai-sdk/svelte';

  const { input, handleSubmit, messages } = useChat({ maxSteps: 5 });
</script>

<!-- ... rest of your component code -->
```

Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.

By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.

### Update Your API Route

Update your `src/routes/api/chat/+server.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:

```tsx filename="src/routes/api/chat/+server.ts" highlight="32-45"
import { createOpenAI } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import type { RequestHandler } from './$types';
import { z } from 'zod';

import { env } from '$env/dynamic/private';

const openai = createOpenAI({
  apiKey: env.OPENAI_API_KEY ?? '',
});

export const POST = (async ({ request }) => {
  const { messages } = await request.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
      convertFahrenheitToCelsius: tool({
        description: 'Convert a temperature in fahrenheit to celsius',
        parameters: z.object({
          temperature: z
            .number()
            .describe('The temperature in fahrenheit to convert'),
        }),
        execute: async ({ temperature }) => {
          const celsius = Math.round((temperature - 32) * (5 / 9));
          return {
            celsius,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}) satisfies RequestHandler;
```

Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:

1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.

This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real time. Tools bridge the gap between the model's knowledge cutoff and current information.

## Where to Next?

You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:

- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).

---
title: Nuxt
description: Welcome to the AI SDK quickstart guide for Nuxt!
---

# Nuxt Quickstart

The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.

In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.

If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.

## Prerequisites

To follow this quickstart, you'll need:

- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.

If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.

## Setup Your Application

Start by creating a new Nuxt application. This command will create a new directory named `my-ai-app` and set up a basic Nuxt application inside it. Navigate to the newly created directory.

### Install dependencies

Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider.

The AI SDK is designed to be a unified interface to interact with any large language model. This means that you can change models and providers with just one line of code! Learn more about [available providers](/providers) and [building custom providers](/providers/community-providers/custom-providers) in the [providers](/providers) section. Typical commands for both steps are sketched below.
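If you're following along with pnpm, the commands for these two steps typically look like the following sketch (the exact Nuxt scaffolder may vary by version):

```bash
# Create a new Nuxt app in a my-ai-app directory (assumed scaffold command)
pnpm dlx nuxi@latest init my-ai-app

# Move into the project directory
cd my-ai-app

# Install the AI SDK and its OpenAI provider
pnpm add ai @ai-sdk/openai
```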
Make sure you are using `ai` version 3.1 or higher.

### Configure OpenAI API key

Create a `.env` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.

Edit the `.env` file:

```env filename=".env"
OPENAI_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your actual OpenAI API key and configure the environment variable in `nuxt.config.ts`:

```ts filename="nuxt.config.ts"
export default defineNuxtConfig({
  // rest of your nuxt config
  runtimeConfig: {
    openaiApiKey: process.env.OPENAI_API_KEY,
  },
});
```

The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` environment variable.

## Create an API route

Create an API route, `server/api/chat.ts`, and add the following code:

```typescript filename="server/api/chat.ts"
import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

export default defineLazyEventHandler(async () => {
  const apiKey = useRuntimeConfig().openaiApiKey;
  if (!apiKey) throw new Error('Missing OpenAI API key');

  const openai = createOpenAI({
    apiKey: apiKey,
  });

  return defineEventHandler(async (event: any) => {
    const { messages } = await readBody(event);

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
    });

    return result.toDataStreamResponse();
  });
});
```

Let's take a look at what is happening in this code:

1. Create an OpenAI provider instance with the `createOpenAI` function from the `@ai-sdk/openai` package.
2. Define an Event Handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and will provide the chatbot with the necessary context to make the next generation.
3. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (defined in step 1) and `messages` (defined in step 2). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
4. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
5. Return the result to the client to stream the response.

## Wire up the UI

Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui/overview) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).

Update your root page (`pages/index.vue`) with the following code to show a list of chat messages and provide a user message input:

```typescript filename="pages/index.vue"
<script setup lang="ts">
// useChat for Vue ships in the @ai-sdk/vue package.
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat();
</script>

<template>
  <div>
    <div v-for="m in messages" :key="m.id">
      {{ m.role === 'user' ? 'User: ' : 'AI: ' }}
      {{ m.content }}
    </div>
    <form @submit="handleSubmit">
      <input v-model="input" placeholder="Say something..." />
    </form>
  </div>
</template>
```

If your project has `app.vue` instead of `pages/index.vue`, delete the `app.vue` file and create a new `pages/index.vue` file with the code above.

This page utilizes the `useChat` hook, which will, by default, use the API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission.

The `useChat` hook provides multiple utility functions and state variables:

- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleSubmit` - function to handle form submission.
## Running Your Application

With that, you have built everything you need for your chatbot! To start your application, run `pnpm run dev`.

Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real time! The AI SDK makes it fast and easy to build AI chat interfaces with Nuxt.

## Enhance Your Chatbot with Tools

While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in. Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.

For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.

Let's enhance your chatbot by adding a simple weather tool.

### Update Your API Route

Modify your `server/api/chat.ts` file to include the new weather tool:

```typescript filename="server/api/chat.ts" highlight="1,18-34"
import { streamText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

export default defineLazyEventHandler(async () => {
  const apiKey = useRuntimeConfig().openaiApiKey;
  if (!apiKey) throw new Error('Missing OpenAI API key');

  const openai = createOpenAI({
    apiKey: apiKey,
  });

  return defineEventHandler(async (event: any) => {
    const { messages } = await readBody(event);

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (fahrenheit)',
          parameters: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => {
            const temperature = Math.round(Math.random() * (90 - 32) + 32);
            return {
              location,
              temperature,
            };
          },
        }),
      },
    });

    return result.toDataStreamResponse();
  });
});
```

In this updated code:

1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
   - Has a description that helps the model understand when to use it.
   - Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
   - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about.

When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and you can access the results via the `toolInvocations` property on the message object.

Try asking something like "What's the weather in New York?" and see how the model uses the new tool.

Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call.
You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.

### Update the UI

To display the tool invocations in your UI, update your `pages/index.vue` file:

```typescript filename="pages/index.vue" highlight="11-15"
<script setup lang="ts">
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat();
</script>

<template>
  <div>
    <div v-for="m in messages" :key="m.id">
      {{ m.role === 'user' ? 'User: ' : 'AI: ' }}
      <pre v-if="m.toolInvocations">{{
        JSON.stringify(m.toolInvocations, null, 2)
      }}</pre>
      <span v-else>{{ m.content }}</span>
    </div>
    <form @submit="handleSubmit">
      <input v-model="input" placeholder="Say something..." />
    </form>
  </div>
</template>
```

With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before. Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.

## Enabling Multi-Step Tool Calls

You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.

To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.

### Update Your Client-Side Code

Modify your `pages/index.vue` file to include the `maxSteps` option:

```typescript filename="pages/index.vue" highlight="4"
<script setup lang="ts">
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat({ maxSteps: 5 });
</script>

<!-- ... rest of your component code -->
```

Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.

By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.

### Update Your API Route

Update your `server/api/chat.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:

```typescript filename="server/api/chat.ts" highlight="34-47"
import { streamText, tool } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

export default defineLazyEventHandler(async () => {
  const apiKey = useRuntimeConfig().openaiApiKey;
  if (!apiKey) throw new Error('Missing OpenAI API key');

  const openai = createOpenAI({
    apiKey: apiKey,
  });

  return defineEventHandler(async (event: any) => {
    const { messages } = await readBody(event);

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (fahrenheit)',
          parameters: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => {
            const temperature = Math.round(Math.random() * (90 - 32) + 32);
            return {
              location,
              temperature,
            };
          },
        }),
        convertFahrenheitToCelsius: tool({
          description: 'Convert a temperature in fahrenheit to celsius',
          parameters: z.object({
            temperature: z
              .number()
              .describe('The temperature in fahrenheit to convert'),
          }),
          execute: async ({ temperature }) => {
            const celsius = Math.round((temperature - 32) * (5 / 9));
            return {
              celsius,
            };
          },
        }),
      },
    });

    return result.toDataStreamResponse();
  });
});
```

Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:

1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.

This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.

This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real time. Tools bridge the gap between the model's knowledge cutoff and current information.

## Where to Next?

You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:

- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).

---
title: Node.js
description: Welcome to the AI SDK quickstart guide for Node.js!
---

# Node.js Quickstart

In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.

If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first.

## Prerequisites

To follow this quickstart, you'll need:

- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.

If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.

## Setup Your Application

Start by creating a new directory using the `mkdir` command. Change into your new directory and then run the `pnpm init` command. This will create a `package.json` in your new directory.

```bash
mkdir my-ai-app
cd my-ai-app
pnpm init
```

### Install Dependencies

Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider, along with other necessary dependencies.

The AI SDK is designed to be a unified interface to interact with any large language model. This means that you can change models and providers with just one line of code! Learn more about [available providers](/providers) and [building custom providers](/providers/community-providers/custom-providers) in the [providers](/providers) section.

```bash
pnpm add ai @ai-sdk/openai zod dotenv
pnpm add -D @types/node tsx typescript
```

Make sure you are using `ai` version 3.1 or higher.

The `ai` and `@ai-sdk/openai` packages contain the AI SDK and the [AI SDK OpenAI provider](/providers/ai-sdk-providers/openai), respectively. You will use `zod` to define type-safe schemas that you will pass to the large language model (LLM). You will use `dotenv` to access environment variables (your OpenAI key) within your application. There are also three development dependencies, installed with the `-D` flag, that are necessary to run your TypeScript code.

### Configure OpenAI API key

Create a `.env` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.
Edit the `.env` file:

```env filename=".env"
OPENAI_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your actual OpenAI API key.

The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` environment variable.

## Create Your Application

Create an `index.ts` file in the root of your project and add the following code:

```ts filename="index.ts"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText } from 'ai';
import dotenv from 'dotenv';
import * as readline from 'node:readline/promises';

dotenv.config();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

Let's take a look at what is happening in this code:

1. Set up a readline interface for taking input from the terminal, enabling interactive sessions directly from the command line.
2. Initialize an array called `messages` to store the history of your conversation. This history allows the model to maintain context in ongoing dialogues.
3. In the `main` function:
   - Prompt for and capture user input, storing it in `userInput`.
   - Add user input to the `messages` array as a user message.
   - Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider and `messages`.
   - Iterate over the text stream returned by the `streamText` function (`result.textStream`) and print the contents of the stream to the terminal.
   - Add the assistant's response to the `messages` array.

## Running Your Application

With that, you have built everything you need for your chatbot! To start your application, run `pnpm tsx index.ts`.

You should see a prompt in your terminal. Test it out by entering a message and see the AI chatbot respond in real time! The AI SDK makes it fast and easy to build AI chat interfaces with Node.js.

## Enhance Your Chatbot with Tools

While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in. Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.

For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.

Let's enhance your chatbot by adding a simple weather tool.
### Update Your Application

Modify your `index.ts` file to include the new weather tool:

```ts filename="index.ts" highlight="2,4,25-36"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

dotenv.config();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (in Celsius)',
          parameters: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => ({
            location,
            temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
          }),
        }),
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

In this updated code:

1. You import the `tool` function from the `ai` package.
2. You define a `tools` object with a `weather` tool. This tool:
   - Has a description that helps the model understand when to use it.
   - Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool.
   - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function, so you could fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about.

When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and the results will be used by the model to generate its response.

Try asking something like "What's the weather in New York?" and see how the model uses the new tool.

Notice the blank "assistant" response? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolCalls` and `toolResults` keys of the result object.
```typescript highlight="47-48"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

dotenv.config();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (in Celsius)',
          parameters: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => ({
            location,
            temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
          }),
        }),
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    console.log(await result.toolCalls);
    console.log(await result.toolResults);

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

Now, when you ask about the weather, you'll see the tool call and its result displayed in your terminal.

## Enabling Multi-Step Tool Calls

You may have noticed that while the tool results are visible in the terminal, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.

To solve this, you can enable multi-step tool calls using `maxSteps`. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.

### Update Your Application

Modify your `index.ts` file to include the `maxSteps` option:

```ts filename="index.ts" highlight="37-40"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

dotenv.config();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (in Celsius)',
          parameters: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => ({
            location,
            temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
          }),
        }),
      },
      maxSteps: 5,
      onStepFinish: step => {
        console.log(JSON.stringify(step, null, 2));
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

In this updated code:

1. You set `maxSteps` to 5, allowing the model to use up to 5 "steps" for any given generation.
2. You add an `onStepFinish` callback to log each step of the interaction, helping you understand the model's tool usage. This means you can also delete the `toolCalls` and `toolResults` `console.log` statements from the previous example.

Now, when you ask about the weather in a location, you should see the model using the weather tool results to answer your question.

By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Celsius to Fahrenheit.

### Adding a second tool

Update your `index.ts` file to add a new tool to convert the temperature from Celsius to Fahrenheit:

```ts filename="index.ts" highlight="36-45"
import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText, tool } from 'ai';
import dotenv from 'dotenv';
import { z } from 'zod';
import * as readline from 'node:readline/promises';

dotenv.config();

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [];

async function main() {
  while (true) {
    const userInput = await terminal.question('You: ');

    messages.push({ role: 'user', content: userInput });

    const result = streamText({
      model: openai('gpt-4o'),
      messages,
      tools: {
        weather: tool({
          description: 'Get the weather in a location (in Celsius)',
          parameters: z.object({
            location: z
              .string()
              .describe('The location to get the weather for'),
          }),
          execute: async ({ location }) => ({
            location,
            temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
          }),
        }),
        convertCelsiusToFahrenheit: tool({
          description: 'Convert a temperature from Celsius to Fahrenheit',
          parameters: z.object({
            celsius: z
              .number()
              .describe('The temperature in Celsius to convert'),
          }),
          execute: async ({ celsius }) => {
            const fahrenheit = (celsius * 9) / 5 + 32;
            return { fahrenheit: Math.round(fahrenheit * 100) / 100 };
          },
        }),
      },
      maxSteps: 5,
      onStepFinish: step => {
        console.log(JSON.stringify(step, null, 2));
      },
    });

    let fullResponse = '';
    process.stdout.write('\nAssistant: ');
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write('\n\n');

    messages.push({ role: 'assistant', content: fullResponse });
  }
}

main().catch(console.error);
```

Now, when you ask "What's the weather in New York in Celsius?", you should see a more complete interaction:

1. The model will call the weather tool for New York.
2. You'll see the tool result logged.
3. It will then call the temperature conversion tool to convert the temperature from Celsius to Fahrenheit.
4. The model will then use that information to provide a natural language response about the weather in New York.

This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.

This example shows how tools can expand the model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real time. Tools bridge the gap between the model's knowledge cutoff and current information.

## Where to Next?

You've built an AI chatbot using the AI SDK!
From here, you have several paths to explore:

- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).

---
title: Expo
description: Welcome to the AI SDK quickstart guide for Expo!
---

# Expo Quickstart

In this quickstart tutorial, you'll build a simple AI chatbot with a streaming user interface using [Expo](https://expo.dev/). Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects.

Check out [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming) if you haven't heard of them.

## Prerequisites

To follow this quickstart, you'll need:

- Node.js 18+ and pnpm installed on your local development machine.
- An OpenAI API key.

If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website.

## Create Your Application

Start by creating a new Expo application. This command will create a new directory named `my-ai-app` and set up a basic Expo application inside it. Navigate to the newly created directory.

This guide requires Expo 52 or higher.

### Install dependencies

Install `ai`, `@ai-sdk/react`, and `@ai-sdk/openai`: the AI package, the AI React package, and the AI SDK's [OpenAI provider](/providers/ai-sdk-providers/openai), respectively.

The AI SDK is designed to be a unified interface to interact with any large language model. This means that you can change models and providers with just one line of code! Learn more about [available providers](/providers) and [building custom providers](/providers/community-providers/custom-providers) in the [providers](/providers) section. Typical commands for both steps are sketched below.
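If you're following along with pnpm, the commands for these two steps typically look like the following sketch (assumed invocations; check the Expo docs for your version):

```bash
# Create a new Expo app in a my-ai-app directory (assumed scaffold command)
pnpm create expo-app@latest my-ai-app

# Move into the project directory
cd my-ai-app

# Install the AI SDK packages and the OpenAI provider
# (zod is used later in this guide for tool schemas)
pnpm add ai @ai-sdk/react @ai-sdk/openai zod
```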
Make sure you are using `ai` version 3.1 or higher.

### Configure OpenAI API key

Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service.

Edit the `.env.local` file:

```env filename=".env.local"
OPENAI_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your actual OpenAI API key.

The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` environment variable.

## Create an API Route

Create a route handler, `app/api/chat+api.ts`, and add the following code:

```tsx filename="app/api/chat+api.ts"
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
```

Let's take a look at what is happening in this code:

1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation.
2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour.
3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [`toDataStreamResponse`](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object.
4. Finally, return the result to the client to stream the response.

This API route creates a POST request endpoint at `/api/chat`.

If you are experiencing issues with choppy/delayed streams on iOS, you can add the `Content-Type: application/octet-stream` header to the response. For more information, check out [this GitHub issue](https://github.com/vercel/ai/issues/3946).

## Wire up the UI

Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
Update your root page (`app/(tabs)/index.tsx`) with the following code to show a list of chat messages and provide a user message input:

```tsx filename="app/(tabs)/index.tsx"
import { generateAPIUrl } from '@/utils';
import { useChat } from '@ai-sdk/react';
import { fetch as expoFetch } from 'expo/fetch';
import { View, TextInput, ScrollView, Text, SafeAreaView } from 'react-native';

export default function App() {
  const { messages, error, handleInputChange, input, handleSubmit } = useChat({
    fetch: expoFetch as unknown as typeof globalThis.fetch,
    api: generateAPIUrl('/api/chat'),
    onError: error => console.error(error, 'ERROR'),
  });

  if (error) return <Text>{error.message}</Text>;

  return (
    <SafeAreaView style={{ height: '100%' }}>
      <View style={{ height: '95%', flexDirection: 'column', paddingHorizontal: 8 }}>
        <ScrollView style={{ flex: 1 }}>
          {messages.map(m => (
            <View key={m.id} style={{ marginVertical: 8 }}>
              <Text style={{ fontWeight: '700' }}>{m.role}</Text>
              <Text>{m.content}</Text>
            </View>
          ))}
        </ScrollView>

        <View style={{ marginTop: 8 }}>
          <TextInput
            style={{ backgroundColor: 'white', padding: 8 }}
            placeholder="Say something..."
            value={input}
            onChange={e =>
              handleInputChange({
                ...e,
                target: {
                  ...e.target,
                  value: e.nativeEvent.text,
                },
              } as unknown as React.ChangeEvent<HTMLInputElement>)
            }
            onSubmitEditing={e => {
              handleSubmit(e);
              e.preventDefault();
            }}
            autoFocus={true}
          />
        </View>
      </View>
    </SafeAreaView>
  );
}
```

This page utilizes the `useChat` hook, which will, by default, use the `POST` API route you created earlier (`/api/chat`). The hook provides functions and state for handling user input and form submission.

The `useChat` hook provides multiple utility functions and state variables:

- `messages` - the current chat messages (an array of objects with `id`, `role`, and `content` properties).
- `input` - the current value of the user's input field.
- `handleInputChange` and `handleSubmit` - functions to handle user interactions (typing into the input field and submitting the form, respectively).

You use the `expo/fetch` function instead of the native `fetch` to enable streaming of chat responses. This requires Expo 52 or higher.

### Create the API URL Generator

Because you're using `expo/fetch` for streaming responses instead of the native `fetch` function, you'll need an API URL generator to ensure you are using the correct base URL and format depending on the client environment (e.g. web or mobile). Create a new file called `utils.ts` in the root of your project and add the following code:

```ts filename="utils.ts"
import Constants from 'expo-constants';

export const generateAPIUrl = (relativePath: string) => {
  const origin = Constants.experienceUrl.replace('exp://', 'http://');

  const path = relativePath.startsWith('/') ? relativePath : `/${relativePath}`;

  if (process.env.NODE_ENV === 'development') {
    return origin.concat(path);
  }

  if (!process.env.EXPO_PUBLIC_API_BASE_URL) {
    throw new Error(
      'EXPO_PUBLIC_API_BASE_URL environment variable is not defined',
    );
  }

  return process.env.EXPO_PUBLIC_API_BASE_URL.concat(path);
};
```

This utility function handles URL generation for both development and production environments, ensuring your API calls work correctly across different devices and configurations.

Before deploying to production, you must set the `EXPO_PUBLIC_API_BASE_URL` environment variable in your production environment. This variable should point to the base URL of your API server.

## Running Your Application

With that, you have built everything you need for your chatbot! To start your application, run `pnpm expo start`.

Head to your browser and open http://localhost:8081. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real time! The AI SDK makes it fast and easy to build AI chat interfaces with Expo.

## Enhance Your Chatbot with Tools

While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather).
This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in. Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.

For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.

Let's enhance your chatbot by adding a simple weather tool.

### Update Your API route

Modify your `app/api/chat+api.ts` file to include the new weather tool:

```tsx filename="app/api/chat+api.ts" highlight="2,13-27"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```

In this updated code:

1. You import the `tool` function from the `ai` package and `z` from `zod` for schema validation.
2. You define a `tools` object with a `weather` tool. This tool:
   - Has a description that helps the model understand when to use it.
   - Defines parameters using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this parameter from the context of the conversation. If it can't, it will ask the user for the missing information.
   - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about.

When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and you can access the results via the `toolInvocations` property on the message object.

You may need to restart your development server for the changes to take effect.

Try asking something like "What's the weather in New York?" and see how the model uses the new tool.

Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolInvocations` key of the message object.
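For reference, each entry in a message's `toolInvocations` array is roughly shaped like the following sketch (illustrative values; field names follow the AI SDK UI types):

```tsx
// Illustrative only: an example of a completed tool invocation entry.
const exampleInvocation = {
  state: 'result', // progresses from 'partial-call' to 'call' to 'result'
  toolCallId: 'call_abc123', // hypothetical ID assigned by the provider
  toolName: 'weather',
  args: { location: 'New York' },
  result: { location: 'New York', temperature: 72 },
};
```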
### Update the UI

To display the tool invocations in your UI, update your `app/(tabs)/index.tsx` file:

```tsx filename="app/(tabs)/index.tsx" highlight="31-35"
import { generateAPIUrl } from '@/utils';
import { useChat } from '@ai-sdk/react';
import { fetch as expoFetch } from 'expo/fetch';
import { View, TextInput, ScrollView, Text, SafeAreaView } from 'react-native';

export default function App() {
  const { messages, error, handleInputChange, input, handleSubmit } = useChat({
    fetch: expoFetch as unknown as typeof globalThis.fetch,
    api: generateAPIUrl('/api/chat'),
    onError: error => console.error(error, 'ERROR'),
  });

  if (error) return <Text>{error.message}</Text>;

  return (
    <SafeAreaView style={{ height: '100%' }}>
      <View style={{ height: '95%', flexDirection: 'column', paddingHorizontal: 8 }}>
        <ScrollView style={{ flex: 1 }}>
          {messages.map(m => (
            <View key={m.id} style={{ marginVertical: 8 }}>
              <Text style={{ fontWeight: '700' }}>{m.role}</Text>
              {m.toolInvocations ? (
                <Text>{JSON.stringify(m.toolInvocations, null, 2)}</Text>
              ) : (
                <Text>{m.content}</Text>
              )}
            </View>
          ))}
        </ScrollView>

        <View style={{ marginTop: 8 }}>
          <TextInput
            style={{ backgroundColor: 'white', padding: 8 }}
            placeholder="Say something..."
            value={input}
            onChange={e =>
              handleInputChange({
                ...e,
                target: {
                  ...e.target,
                  value: e.nativeEvent.text,
                },
              } as unknown as React.ChangeEvent<HTMLInputElement>)
            }
            onSubmitEditing={e => {
              handleSubmit(e);
              e.preventDefault();
            }}
            autoFocus={true}
          />
        </View>
      </View>
    </SafeAreaView>
  );
}
```

You may need to restart your development server for the changes to take effect.

With this change, you check each message for any tool calls (`toolInvocations`). These tool calls will be displayed as stringified JSON. Otherwise, you show the message content as before. Now, when you ask about the weather, you'll see the tool invocation and its result displayed in your chat interface.

## Enabling Multi-Step Tool Calls

You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.

To solve this, you can enable multi-step tool calls using the `maxSteps` option in your `useChat` hook. This feature will automatically send tool results back to the model to trigger an additional generation. In this case, you want the model to answer your question using the results from the weather tool.

### Update Your Client-Side Code

Modify your `app/(tabs)/index.tsx` file to include the `maxSteps` option:

```tsx filename="app/(tabs)/index.tsx" highlight="9"
import { useChat } from '@ai-sdk/react';
// ... rest of your imports

export default function App() {
  const { messages, error, handleInputChange, input, handleSubmit } = useChat({
    fetch: expoFetch as unknown as typeof globalThis.fetch,
    api: generateAPIUrl('/api/chat'),
    onError: error => console.error(error, 'ERROR'),
    maxSteps: 5,
  });

  // ... rest of your component code
}
```

You may need to restart your development server for the changes to take effect.

Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.

By setting `maxSteps` to 5, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Fahrenheit to Celsius.
### Update Your API Route

Update your `app/api/chat+api.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius:

```tsx filename="app/api/chat+api.ts" highlight="27-40"
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      weather: tool({
        description: 'Get the weather in a location (fahrenheit)',
        parameters: z.object({
          location: z.string().describe('The location to get the weather for'),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
      convertFahrenheitToCelsius: tool({
        description: 'Convert a temperature in fahrenheit to celsius',
        parameters: z.object({
          temperature: z
            .number()
            .describe('The temperature in fahrenheit to convert'),
        }),
        execute: async ({ temperature }) => {
          const celsius = Math.round((temperature - 32) * (5 / 9));
          return {
            celsius,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
```

You may need to restart your development server for the changes to take effect.

Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:

1. The model will call the weather tool for New York.
2. You'll see the tool result displayed.
3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius.
4. The model will then use that information to provide a natural language response about the weather in New York.

This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.

This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real time. Tools bridge the gap between the model's knowledge cutoff and current information.

## Where to Next?

You've built an AI chatbot using the AI SDK! From here, you have several paths to explore:

- To learn more about the AI SDK, read through the [documentation](/docs).
- If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides.
- To jumpstart your first AI project, explore available [templates](https://vercel.com/templates?type=ai).

---
title: Getting Started
description: Welcome to the AI SDK documentation!
---

# Getting Started

The following guides are intended to provide you with an introduction to some of the core features provided by the AI SDK.

## Backend Framework Examples

You can also use [AI SDK Core](/docs/ai-sdk-core/overview) and [AI SDK UI](/docs/ai-sdk-ui/overview) with the following backend frameworks:

---
title: RAG Chatbot
description: Learn how to build a RAG Chatbot with the AI SDK and Next.js
---

# RAG Chatbot Guide

In this guide, you will learn how to build a retrieval-augmented generation (RAG) chatbot application.