streamUI
A helper function to create a streamable UI from LLM providers. This function is similar to AI SDK Core APIs and supports the same model interfaces.
Import
import { streamUI } from "ai/rsc"
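For orientation, here is a minimal sketch of calling `streamUI` from a server action. The `@ai-sdk/openai` provider import, the model id, and the loading placeholder are illustrative assumptions, not requirements of this API.

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai'; // assumed provider for illustration

export async function haikuUI() {
  // Stream the model output as React UI; `text` maps the growing
  // completion to a node.
  const result = await streamUI({
    model: openai('gpt-4-turbo'),
    initial: <p>Loading…</p>,
    prompt: 'Write a haiku about the ocean.',
    text: ({ content }) => <p>{content}</p>,
  });

  // `value` is the streamable UI node to render.
  return result.value;
}
```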
Parameters
model:
The language model to use. Example: openai("gpt-4-turbo")
initial?:
The initial UI to render.
system:
The system prompt to use that specifies the behavior of the model.
prompt:
The input prompt to generate the text from.
messages:
A list of messages that represent a conversation.
CoreSystemMessage
role:
The role for the system message.
content:
The content of the message.
CoreUserMessage
role:
The role for the user message.
content:
The content of the message.
TextPart
type:
The type of the message part.
text:
The text content of the message part.
ImagePart
type:
The type of the message part.
image:
The image content of the message part. Strings can be base64-encoded content, base64 data URLs, or http(s) URLs.
CoreAssistantMessage
role:
The role for the assistant message.
content:
The content of the message.
TextPart
type:
The type of the message part.
text:
The text content of the message part.
ToolCallPart
type:
The type of the message part.
toolCallId:
The id of the tool call.
toolName:
The name of the tool, which typically would be the name of the function.
args:
Parameters generated by the model to be used by the tool.
CoreToolMessage
role:
The role for the tool message.
content:
The content of the message.
ToolResultPart
type:
The type of the message part.
toolCallId:
The id of the tool call the result corresponds to.
toolName:
The name of the tool the result corresponds to.
result:
The result returned by the tool after execution.
isError?:
Whether the result is an error or contains an error message.
maxTokens?:
Maximum number of tokens to generate.
temperature?:
Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.
topP?:
Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.
topK?:
Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.
presencePenalty?:
Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.
frequencyPenalty?:
Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.
stopSequences?:
Sequences that will stop the generation of the text. If the model generates any of these sequences, it will stop generating further text.
seed?:
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
maxRetries?:
Maximum number of retries. Set to 0 to disable retries. Default: 2.
abortSignal?:
An optional abort signal that can be used to cancel the call.
headers?:
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
tools:
Tools that are accessible to and can be called by the model. See the example after this parameter list.
Tool
description?:
Information about the purpose of the tool including details on how and when it can be used by the model.
parameters:
The typed schema that describes the parameters of the tool. It can also be used for validation and error handling.
generate?:
A function or a generator function that is called with the arguments from the tool call and yields React nodes as the UI.
toolChoice?:
The tool choice setting. It specifies how tools are selected for execution. The default is "auto". "none" disables tool execution. "required" requires tools to be executed. { "type": "tool", "toolName": string } specifies a specific tool to execute.
text?:
Callback to handle the generated tokens from the model.
Text
content:
The full content of the completion.
delta:
The text delta generated since the last invocation of the callback.
done:
Whether the model has finished generating the response.
onFinish?:
Callback that is called when the LLM response and all requested tool executions (for tools that have a `generate` function) are finished.
OnFinishResult
usage:
The token usage of the generated text.
TokenUsage
promptTokens:
The total number of tokens in the prompt.
completionTokens:
The total number of tokens in the completion.
totalTokens:
The total number of tokens used (the sum of prompt tokens and completion tokens).
value:
The final UI node that was generated.
warnings:
Warnings from the model provider (e.g. unsupported settings).
rawResponse:
Optional raw response data.
RawResponse
headers:
Response headers.
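The sketch below ties the parameters above together: a `messages` conversation, a tool whose `parameters` are described with a zod schema, a generator `generate` function that yields intermediate UI, a `text` callback, and an `onFinish` callback. The provider import, the model id, the tool name, and the `lookUpWeather` helper are hypothetical, chosen only for illustration.

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai'; // assumed provider for illustration
import { z } from 'zod';

// Hypothetical stand-in for a real data source.
async function lookUpWeather(city: string): Promise<string> {
  return 'sunny';
}

export async function weatherUI() {
  const result = await streamUI({
    model: openai('gpt-4-turbo'),
    system: 'You are a helpful weather assistant.',
    messages: [{ role: 'user', content: 'What is the weather in Berlin?' }],
    // Render plain text responses as they stream in.
    text: ({ content, done }) => <p>{done ? content : `${content}…`}</p>,
    tools: {
      getWeather: {
        description: 'Get the current weather for a city.',
        parameters: z.object({ city: z.string() }),
        // A generator function may yield intermediate UI before
        // returning the final node for this tool call.
        generate: async function* ({ city }) {
          yield <p>Checking the weather in {city}…</p>;
          const weather = await lookUpWeather(city);
          return <p>It is {weather} in {city}.</p>;
        },
      },
    },
    onFinish: ({ usage }) => {
      console.log('total tokens:', usage.totalTokens);
    },
  });

  return result.value;
}
```

Each `yield` replaces the currently rendered tool UI, so the intermediate node stays visible until `generate` returns the final one.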
Returns
value:
The user interface based on the stream output. See the example at the end of this reference.
text:
The full text that has been generated. Resolved when the response is finished.
toolCalls:
The tool calls that have been executed. Resolved when the response is finished.
toolResults:
The tool results that have been generated. Resolved when all tool executions are finished.
finishReason:
The reason why the generation finished. Resolved when the response is finished.
usage:
The token usage of the generated text. Resolved when the response is finished.
TokenUsage
promptTokens:
The total number of tokens in the prompt.
completionTokens:
The total number of tokens in the completion.
totalTokens:
The total number of tokens used (the sum of prompt tokens and completion tokens).
rawResponse:
Optional raw response data.
RawResponse
headers:
Response headers.
warnings:
Warnings from the model provider (e.g. unsupported settings).
textStream:
A text stream that returns only the generated text deltas. You can use it as either an AsyncIterable or a ReadableStream. When an error occurs, the stream will throw the error.
fullStream:
A stream with all events, including text deltas, tool calls, tool results, and errors. You can use it as either an AsyncIterable or a ReadableStream. When an error occurs, the stream will throw the error.
TextStreamPart
type:
The type to identify the object as text delta.
textDelta:
The text delta.
TextStreamPart
type:
The type to identify the object as tool call.
toolCallId:
The id of the tool call.
toolName:
The name of the tool, which typically would be the name of the function.
args:
Parameters generated by the model to be used by the tool.
TextStreamPart
type:
The type to identify the object as tool result.
toolCallId:
The id of the tool call.
toolName:
The name of the tool, which typically would be the name of the function.
args:
Parameters generated by the model to be used by the tool.
result:
The result returned by the tool after execution has completed.
TextStreamPart
type:
The type to identify the object as error.
error:
Describes the error that may have occurred during execution.
TextStreamPart
type:
The type to identify the object as finish.
finishReason:
The reason the model finished generating the text.
usage:
The token usage of the generated text.
TokenUsage
promptTokens:
The total number of tokens in the prompt.
completionTokens:
The total number of tokens in the completion.
totalTokens:
The total number of tokens used (the sum of prompt tokens and completion tokens).
toAIStream:
Converts the result to an `AIStream` object that is compatible with `StreamingTextResponse`. It can be used with the `useChat` and `useCompletion` hooks.
pipeAIStreamToResponse:
Writes stream data output to a Node.js response-like object. It sets a `Content-Type` header to `text/plain; charset=utf-8` and writes each stream data part as a separate chunk.
pipeTextStreamToResponse:
Writes text delta output to a Node.js response-like object. It sets a `Content-Type` header to `text/plain; charset=utf-8` and writes each text delta as a separate chunk.
toAIStreamResponse:
Converts the result to a streamed response object with a stream data part stream. It can be used with the `useChat` and `useCompletion` hooks.
toTextStreamResponse:
Creates a simple text stream response. Each text delta is encoded as UTF-8 and sent as a separate chunk. Non-text-delta events are ignored.
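As a closing sketch under the same assumptions as the examples above, `value` can be rendered immediately while `textStream` is consumed for the raw deltas, per the fields documented in this section.

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai'; // assumed provider for illustration

export async function summaryUI() {
  const result = await streamUI({
    model: openai('gpt-4-turbo'),
    prompt: 'Summarize the benefits of streaming UIs in two sentences.',
    text: ({ content }) => <p>{content}</p>,
  });

  // Consume text deltas in the background; `textStream` is usable as an
  // AsyncIterable per the documentation above.
  (async () => {
    for await (const delta of result.textStream) {
      process.stdout.write(delta);
    }
  })();

  // Return the streamed UI node for rendering.
  return result.value;
}
```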