Completion
The `useCompletion` hook allows you to create a user interface to handle text completions in your application. It enables the streaming of text completions from your AI provider, manages the state of the prompt input, and updates the UI automatically as new tokens arrive.
In this guide, you will learn how to use the `useCompletion` hook in your application to generate text completions and stream them in real time to your users.
Example
```tsx
'use client';

import { useCompletion } from 'ai/react';

export default function Page() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion({
    api: '/api/completion',
  });

  return (
    <form onSubmit={handleSubmit}>
      <input name="prompt" value={input} onChange={handleInputChange} id="input" />
      <button type="submit">Submit</button>
      <div>{completion}</div>
    </form>
  );
}
```
The API route (for example, `app/api/completion/route.ts` in the Next.js App Router) streams the completion:

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();

  const result = streamText({
    model: openai('gpt-3.5-turbo'),
    prompt,
  });

  return result.toDataStreamResponse();
}
```
In the `Page` component, the `useCompletion` hook sends a request to your AI provider endpoint whenever the user submits the form. The completion is then streamed back in real time and displayed in the UI.

This enables a seamless text completion experience: the user sees the AI response as soon as it becomes available, without having to wait for the entire response to be received.
Customized UI
`useCompletion` also provides ways to manage the prompt via code, show loading and error states, and update the completion without being triggered by user interaction.
Loading and error states
To show a loading spinner while the completion is being streamed, you can use the `isLoading` state returned by the `useCompletion` hook:
```tsx
const { isLoading, ... } = useCompletion()

return (
  <>
    {isLoading ? <Spinner /> : null}
  </>
)
```
Similarly, the `error` state reflects the error object thrown during the fetch request. It can be used to display an error message or to show a toast notification:
```tsx
const { error, ... } = useCompletion()

useEffect(() => {
  if (error) {
    toast.error(error.message)
  }
}, [error])

// Or display the error message in the UI:
return (
  <>
    {error ? <div>{error.message}</div> : null}
  </>
)
```
Controlled input
In the initial example, we used the `handleSubmit` and `handleInputChange` callbacks to manage input changes and form submissions. These are handy for common use cases, but you can also use uncontrolled APIs for more advanced scenarios such as form validation or customized components.

The following example demonstrates how to use more granular APIs like `setInput` with your custom input and submit button components:
```tsx
const { input, setInput } = useCompletion();

return (
  <>
    <MyCustomInput value={input} onChange={value => setInput(value)} />
  </>
);
```
Cancelation
It's also a common use case to abort the response message while it's still streaming back from the AI provider. You can do this by calling the `stop` function returned by the `useCompletion` hook:
```tsx
const { stop, isLoading, ... } = useCompletion()

return (
  <>
    <button onClick={stop} disabled={!isLoading}>Stop</button>
  </>
)
```
When the user clicks the "Stop" button, the fetch request will be aborted. This avoids consuming unnecessary resources and improves the UX of your application.
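The doc states that `stop` aborts the underlying fetch request; a standalone sketch of that mechanism using a plain `AbortController` follows. The URL and payload here are placeholders, and this is not the hook's actual implementation — it only illustrates how an in-flight request can be cancelled on demand:

```typescript
// Rough sketch: keep an AbortController for the in-flight request and
// abort it when the user clicks "Stop".
const controller = new AbortController();

fetch('http://localhost:3000/api/completion', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: 'Hello' }),
  signal: controller.signal, // tie the request to the controller
}).catch(err => {
  // An aborted fetch rejects with an AbortError:
  if (err.name === 'AbortError') console.log('Request aborted');
});

// Equivalent of clicking the "Stop" button:
controller.abort();
```

Because the fetch is tied to the controller's signal, aborting it tears down the HTTP connection immediately rather than letting the stream run to completion.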
Throttling UI Updates
By default, the `useCompletion` hook triggers a render every time a new chunk is received. You can throttle the UI updates with the `experimental_throttle` option:
```tsx
const { completion, ... } = useCompletion({
  // Throttle the completion and data updates to 50ms:
  experimental_throttle: 50
})
```
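To see what throttling buys you, here is a rough standalone sketch of the idea (not the SDK's implementation): rapid chunk updates are coalesced so the consumer sees at most one update per interval, always ending with the latest value.

```typescript
// Coalesce rapid updates so `onUpdate` fires at most once per
// `interval` ms, always delivering the most recent value.
function createThrottle<T>(interval: number, onUpdate: (value: T) => void) {
  let latest: T | undefined;
  let timer: ReturnType<typeof setTimeout> | null = null;

  return (value: T) => {
    latest = value;
    if (timer === null) {
      timer = setTimeout(() => {
        timer = null;
        onUpdate(latest as T);
      }, interval);
    }
  };
}

// Usage: three chunks arrive in quick succession, but only one
// update (with the latest value) reaches the consumer.
const updates: string[] = [];
const push = createThrottle<string>(50, v => updates.push(v));
push('Hel');
push('Hello');
push('Hello, wor');
```

With many small chunks per second, this keeps React re-renders bounded without losing the final state of the completion.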
Event Callbacks
`useCompletion` also provides optional event callbacks that you can use to handle different stages of the completion lifecycle. These callbacks can be used to trigger additional actions, such as logging, analytics, or custom UI updates.
```tsx
const { ... } = useCompletion({
  onResponse: (response: Response) => {
    console.log('Received response from server:', response)
  },
  onFinish: (prompt: string, completion: string) => {
    console.log('Finished streaming completion:', completion)
  },
  onError: (error: Error) => {
    console.error('An error occurred:', error)
  },
})
```
It's worth noting that you can abort the processing by throwing an error in the `onResponse` callback. This will trigger the `onError` callback and stop the completion from being appended to the UI, which can be useful for handling unexpected responses from the AI provider.
Configure Request Options
By default, the `useCompletion` hook sends an HTTP POST request to the `/api/completion` endpoint with the prompt as part of the request body. You can customize the request by passing additional options to the `useCompletion` hook:
```tsx
const { completion, input, handleInputChange, handleSubmit } = useCompletion({
  api: '/api/custom-completion',
  headers: {
    Authorization: 'your_token',
  },
  body: {
    user_id: '123',
  },
  credentials: 'same-origin',
});
```
In this example, the `useCompletion` hook sends a POST request to the `/api/custom-completion` endpoint with the specified headers, additional body fields, and credentials for that fetch request. On the server side, you can handle the request with this additional information.
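For instance, a route handler could read the extra `user_id` field and the `Authorization` header like this. This is a sketch that echoes the prompt instead of calling a model; in a real handler you would pass `prompt` to `streamText` as in the earlier route example:

```typescript
export async function POST(req: Request) {
  // The JSON body contains the prompt plus any extra `body` fields
  // configured on the client (here, `user_id` from the example above).
  const { prompt, user_id } = await req.json();

  // Custom headers configured on the client are available as well:
  const token = req.headers.get('Authorization');

  console.log(`Request from user ${user_id} with token ${token}`);

  // ...pass `prompt` to the model here. For this sketch, echo it back:
  return new Response(`Echo: ${prompt}`);
}
```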