Text generation can sometimes take a long time to complete, especially when the response is large. In such cases, it is useful to stream the chat completion to the client in real time. This allows the client to display parts of the message as they are generated by the model, rather than making users wait for the full response to finish.
```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-3.5-turbo'),
  maxTokens: 1024,
  system: 'You are a helpful chatbot.',
  messages: [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: 'Hello! How can I help you today?' },
    { role: 'user', content: 'I need help with my computer.' },
  ],
});

// textStream is an async iterable of text chunks.
for await (const textPart of result.textStream) {
  console.log(textPart);
}
```
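The `for await` loop above consumes an async iterable of text chunks. A minimal sketch, with a plain async generator standing in for the model's stream (the chunks and the `fakeTextStream` name are hypothetical, used here only to illustrate the consumption pattern without an API call):

```typescript
// Hypothetical stand-in for result.textStream: yields text chunks one at a time,
// the way a model emits tokens incrementally.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const chunk of ['Hello', ', ', 'world', '!']) {
    // Simulate a short delay between chunks arriving over the network.
    await new Promise((resolve) => setTimeout(resolve, 10));
    yield chunk;
  }
}

async function main() {
  let fullText = '';
  for await (const textPart of fakeTextStream()) {
    fullText += textPart; // append each chunk as it arrives
    console.log(textPart); // a UI would render this incrementally
  }
  console.log(fullText); // the complete message once the stream ends
}

main();
```

The key point is that each chunk is available as soon as it is yielded, so a client can render partial output immediately instead of waiting for the stream to close.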