
Storing Messages

The ability to store message history is essential for chatbot use cases. The Vercel AI SDK simplifies this through the onFinish callback of the streamText function.

onFinish is called after the model's response and all tool executions have completed. It provides the final text, tool calls, tool results, finish reason, and usage information, making it an ideal place to, for example, store the chat history in a database.

Implementing Persistent Chat History

To implement persistent chat storage, use the onFinish callback of the streamText function. Because it fires only once the model's response and all tool executions have completed, it is the natural place to store each completed interaction, as the following examples show.

API Route Example

import { openai } from '@ai-sdk/openai';
import { streamText, convertToCoreMessages } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages: convertToCoreMessages(messages),
    async onFinish({ text, toolCalls, toolResults, usage, finishReason }) {
      // implement your own storage logic:
      await saveChat({ text, toolCalls, toolResults });
    },
  });

  return result.toAIStreamResponse();
}
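
The saveChat call above is a placeholder for your own storage logic; the SDK does not provide it. As a minimal sketch, assuming a hypothetical lib/save-chat.ts that appends one JSON line per interaction to a local file (in a real application you would write to a database instead):

// lib/save-chat.ts — hypothetical helper; the file path and record shape are illustrative
import { appendFile } from 'node:fs/promises';

interface SaveChatInput {
  text: string;
  toolCalls: unknown;
  toolResults: unknown;
}

export async function saveChat({ text, toolCalls, toolResults }: SaveChatInput) {
  // Append one JSON line per completed interaction.
  const record = {
    text,
    toolCalls,
    toolResults,
    createdAt: new Date().toISOString(),
  };
  await appendFile('chat-history.jsonl', JSON.stringify(record) + '\n');
}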

Server Action Example

'use server';

import { openai } from '@ai-sdk/openai';
import { CoreMessage, streamText } from 'ai';
import { createStreamableValue } from 'ai/rsc';

export async function continueConversation(messages: CoreMessage[]) {
  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
    async onFinish({ text, toolCalls, toolResults, finishReason, usage }) {
      // implement your own storage logic:
      await saveChat({ text, toolCalls, toolResults });
    },
  });

  // A Server Action cannot return a Response object, so stream the text
  // back to the client as a streamable value instead.
  const stream = createStreamableValue(result.textStream);
  return stream.value;
}
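
On the client, the streamable value returned by this action can be read with readStreamableValue from ai/rsc. The component below is a minimal sketch; the component name, import path for the action, and the hard-coded user message are illustrative, not part of the SDK.

'use client';

import { useState } from 'react';
import { readStreamableValue } from 'ai/rsc';
import { continueConversation } from './actions';

export default function Chat() {
  const [generation, setGeneration] = useState('');

  return (
    <div>
      <button
        onClick={async () => {
          // Call the Server Action and append each streamed text delta.
          const stream = await continueConversation([
            { role: 'user', content: 'Why is the sky blue?' },
          ]);
          for await (const delta of readStreamableValue(stream)) {
            setGeneration(current => current + (delta ?? ''));
          }
        }}
      >
        Ask
      </button>
      <div>{generation}</div>
    </div>
  );
}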