Stream Object Generation
Object generation can sometimes take a long time to complete, especially when you're generating a large schema. In such cases, it is useful to stream the object generation process to the client in real-time. This allows the client to display the object as it is being generated, rather than having users wait for the complete result before anything is displayed.
Schema
It is helpful to set up the schema in a separate file that is imported on both the client and server.
import { z } from 'zod';
// define a schema for the notifications
export const notificationSchema = z.object({
  notifications: z.array(
    z.object({
      name: z.string().describe('Name of a fictional person.'),
      message: z.string().describe('Message. Do not use emojis or links.'),
    }),
  ),
});
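Because the schema lives in one shared file, you can also derive a TypeScript type from it with Zod's z.infer and reuse it on both sides. This is optional; the type name below is an assumption for illustration, not part of the files above.

// Optional: a shared TypeScript type inferred from the schema.
// The name `Notifications` is illustrative.
export type Notifications = z.infer<typeof notificationSchema>;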
Client
The client uses useObject to stream the object generation process. The results are partial and are displayed as they are received. Note the code for handling undefined values in the JSX: while streaming, the partial object may be missing fields.
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { object, submit } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button onClick={() => submit('Messages during finals week.')}>
        Generate notifications
      </button>

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
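submit sends its argument as the request body, so the context does not have to be a hard-coded string. The sketch below is a minimal variation of the component above that takes the context from a controlled input; the input handling is illustrative and not part of the original example.

'use client';

import { useState } from 'react';
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/use-object/schema';

// Variation: the context comes from a text input instead of a fixed string.
export default function Page() {
  const [input, setInput] = useState('');
  const { object, submit } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <input value={input} onChange={e => setInput(e.target.value)} />
      <button onClick={() => submit(input)}>Generate notifications</button>

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}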
Server
On the server, we use streamObject to stream the object generation process.
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const context = await req.json();

  const result = streamObject({
    model: openai('gpt-4-turbo'),
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  });

  return result.toTextStreamResponse();
}
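If you want to observe the partial objects on the server, for example for logging or debugging, the streamObject result also exposes a partialObjectStream that you can iterate over. A minimal standalone sketch, assuming the same model and schema as above (the hard-coded context is illustrative):

import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { notificationSchema } from './schema';

// Standalone sketch: iterate over the partial objects as they are produced.
// Not part of the route handler above.
async function main() {
  const result = streamObject({
    model: openai('gpt-4-turbo'),
    schema: notificationSchema,
    prompt:
      'Generate 3 notifications for a messages app in this context: ' +
      'Messages during finals week.',
  });

  for await (const partialObject of result.partialObjectStream) {
    // Each iteration yields the object as generated so far; fields may be missing.
    console.log(partialObject);
  }
}

main().catch(console.error);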
Loading State and Stopping the Stream
You can use the isLoading state to display a loading indicator while the object is being generated. You can also use the stop function to stop the object generation process.
'use client';
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/use-object/schema';

export default function Page() {
  const { object, submit, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {isLoading && (
        <div>
          <div>Loading...</div>
          <button type="button" onClick={() => stop()}>
            Stop
          </button>
        </div>
      )}

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
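useObject also returns an error value that you can use to surface failures from the stream. The exact shape may vary between AI SDK versions, so treat the sketch below as an assumption to verify against your version's documentation.

'use client';

import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from './api/use-object/schema';

// Sketch: surfacing stream errors. Assumes your version of useObject
// returns an `error` value.
export default function Page() {
  const { object, submit, error, isLoading, stop } = useObject({
    api: '/api/use-object',
    schema: notificationSchema,
  });

  return (
    <div>
      <button
        onClick={() => submit('Messages during finals week.')}
        disabled={isLoading}
      >
        Generate notifications
      </button>

      {error && <div>An error occurred while streaming the object.</div>}

      {isLoading && (
        <button type="button" onClick={() => stop()}>
          Stop
        </button>
      )}

      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}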