
Stream Text Generation

Text generation can sometimes take a long time to complete, especially when you're generating a couple of paragraphs. In such cases, it is useful to stream the text generation process to the client in real time. This allows the client to display the generated text as it is being generated, rather than having users wait for it to complete before displaying the result.
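To see the difference concretely, here is a minimal, self-contained sketch of the streaming pattern. The fakeModel async generator is a hypothetical stand-in for a real model call; the point is that the consumer appends each delta as it arrives instead of waiting for the complete answer.

```typescript
// Simplified model of streaming: the consumer receives small text deltas
// as they become available instead of one complete string at the end.
// `fakeModel` is a hypothetical stand-in for a real text-generation call.
async function* fakeModel(): AsyncGenerator<string> {
  const deltas = ['The sky ', 'is blue ', 'because of ', 'Rayleigh scattering.'];
  for (const delta of deltas) {
    // Simulate latency between chunks arriving from the model.
    await new Promise(resolve => setTimeout(resolve, 10));
    yield delta;
  }
}

async function consume(): Promise<string> {
  let generation = '';
  for await (const delta of fakeModel()) {
    // A UI could re-render here after every delta.
    generation += delta;
  }
  return generation;
}

consume().then(text => console.log(text));
// → The sky is blue because of Rayleigh scattering.
```

The for await loop is the same consumption pattern the client component below uses with readStreamableValue.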



Let's create a simple React component that calls the generate server action when a button is clicked. The generate action calls the streamText function, which generates text based on the input prompt. To consume the stream of text on the client, we will use the readStreamableValue function from the ai/rsc module.

'use client';

import { useState } from 'react';
import { generate } from './actions';
import { readStreamableValue } from 'ai/rsc';

// Force the page to be dynamic and allow streaming responses up to 30 seconds
export const dynamic = 'force-dynamic';
export const maxDuration = 30;

export default function Home() {
  const [generation, setGeneration] = useState<string>('');

  return (
    <div>
      <button
        onClick={async () => {
          const { output } = await generate('Why is the sky blue?');

          for await (const delta of readStreamableValue(output)) {
            setGeneration(currentGeneration => `${currentGeneration}${delta}`);
          }
        }}
      >
        Ask
      </button>

      <div>{generation}</div>
    </div>
  );
}
On the server side, we need to implement the generate action, which calls the streamText function to generate text based on the input prompt. To stream the text generation to the client, we will use createStreamableValue, which can wrap any changeable value and stream it to the client.

Using DevTools, we can see the text generation being streamed to the client in real-time.

'use server';

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createStreamableValue } from 'ai/rsc';

export async function generate(input: string) {
  const stream = createStreamableValue('');

  (async () => {
    const { textStream } = await streamText({
      model: openai('gpt-3.5-turbo'),
      prompt: input,
    });

    // Forward each text delta to the client as it arrives
    for await (const delta of textStream) {
      stream.update(delta);
    }

    stream.done();
  })();

  return { output: stream.value };
}
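The key design choice here is that generate returns stream.value immediately, while a detached async function keeps pushing deltas in the background. The following self-contained sketch models that control flow with a hypothetical SimpleStreamable class; it is not the real ai/rsc implementation, only an illustration of the update/done pattern and why the immediately-invoked async function is needed.

```typescript
// Minimal push-based stream: update() enqueues a delta, done() closes it.
// This is a hypothetical simplification, not the ai/rsc implementation.
class SimpleStreamable {
  private buffer: string[] = [];
  private waiting: ((r: IteratorResult<string>) => void) | null = null;
  private closed = false;

  update(delta: string) {
    if (this.waiting) {
      const resolve = this.waiting;
      this.waiting = null;
      resolve({ value: delta, done: false });
    } else {
      this.buffer.push(delta);
    }
  }

  done() {
    this.closed = true;
    if (this.waiting) {
      const resolve = this.waiting;
      this.waiting = null;
      resolve({ value: undefined, done: true });
    }
  }

  async *[Symbol.asyncIterator](): AsyncGenerator<string> {
    while (true) {
      if (this.buffer.length > 0) {
        yield this.buffer.shift()!;
      } else if (this.closed) {
        return;
      } else {
        const result = await new Promise<IteratorResult<string>>(
          resolve => (this.waiting = resolve),
        );
        if (result.done) return;
        yield result.value as string;
      }
    }
  }
}

// The "action" returns a readable handle right away; a detached async
// task fills the stream afterwards, mirroring the server code above.
function generateSketch(input: string) {
  const stream = new SimpleStreamable();
  (async () => {
    for (const delta of ['Answer to: ', input]) {
      await new Promise(resolve => setTimeout(resolve, 5));
      stream.update(delta);
    }
    stream.done();
  })();
  return { output: stream };
}

async function consumeSketch(): Promise<string> {
  let text = '';
  for await (const delta of generateSketch('why?').output) {
    text += delta;
  }
  return text;
}

consumeSketch().then(text => console.log(text));
// → Answer to: why?
```

Because the action returns before generation finishes, the client can start iterating over the stream immediately, which is exactly what makes the real-time display possible.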