
streamUI

A helper function to create a streamable UI from LLM providers. This function is similar to the AI SDK Core APIs (such as `streamText`) and supports the same model interfaces.

Import

import { streamUI } from "ai/rsc"
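
A minimal usage sketch follows; the model id, prompt, and rendered markup are illustrative, and `openai` is assumed to come from the `@ai-sdk/openai` provider package.

```tsx
import { streamUI } from "ai/rsc";
import { openai } from "@ai-sdk/openai";

const result = await streamUI({
  model: openai("gpt-4-turbo"),
  prompt: "Tell me a short story.",
  // Render the accumulated text as it streams in.
  text: ({ content }) => <div>{content}</div>,
});

// result.value is a ReactNode that can be returned from a server action.
```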

Parameters

model:

LanguageModel
The language model to use. Example: openai("gpt-4-turbo")

initial?:

ReactNode
The initial UI to render.

system:

string
The system prompt to use that specifies the behavior of the model.

prompt:

string
The input prompt to generate the text from.

messages:

Array<CoreSystemMessage | CoreUserMessage | CoreAssistantMessage | CoreToolMessage>
A list of messages that represent a conversation. See the sketch after the message types below for a conversation that uses each variant.
CoreSystemMessage

role:

'system'
The role for the system message.

content:

string
The content of the message.
CoreUserMessage

role:

'user'
The role for the user message.

content:

string | Array<TextPart | ImagePart>
The content of the message.
TextPart

type:

'text'
The type of the message part.

text:

string
The text content of the message part.
ImagePart

type:

'image'
The type of the message part.

image:

string | Uint8Array | Buffer | ArrayBuffer | URL
The image content of the message part. Strings are either base64 encoded content, base64 data URLs, or http(s) URLs.
CoreAssistantMessage

role:

'assistant'
The role for the assistant message.

content:

string | Array<TextPart | ToolCallPart>
The content of the message.
TextPart

type:

'text'
The type of the message part.

text:

string
The text content of the message part.
ToolCallPart

type:

'tool-call'
The type of the message part.

toolCallId:

string
The id of the tool call.

toolName:

string
The name of the tool, which typically would be the name of the function.

args:

object based on zod schema
Parameters generated by the model to be used by the tool.
CoreToolMessage

role:

'tool'
The role for the tool message.

content:

Array<ToolResultPart>
The content of the message.
ToolResultPart

type:

'tool-result'
The type of the message part.

toolCallId:

string
The id of the tool call the result corresponds to.

toolName:

string
The name of the tool the result corresponds to.

result:

unknown
The result returned by the tool after execution.

isError?:

boolean
Whether the result is an error or contains an error message.
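
A sketch of a conversation that uses each message variant; the tool name, ids, and values are illustrative, and the array is typed with `CoreMessage` from the `ai` package.

```ts
import type { CoreMessage } from "ai";

const messages: CoreMessage[] = [
  { role: "system", content: "You are a helpful weather assistant." },
  {
    role: "user",
    content: [
      { type: "text", text: "What is the weather in this photo?" },
      { type: "image", image: new URL("https://example.com/photo.png") },
    ],
  },
  {
    role: "assistant",
    content: [
      {
        type: "tool-call",
        toolCallId: "call-1",
        toolName: "getWeather",
        args: { city: "San Francisco" },
      },
    ],
  },
  {
    role: "tool",
    content: [
      {
        type: "tool-result",
        toolCallId: "call-1",
        toolName: "getWeather",
        result: { temperature: 18, unit: "celsius" },
      },
    ],
  },
];
```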

maxTokens?:

number
Maximum number of tokens to generate.

temperature?:

number
Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.

topP?:

number
Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.

topK?:

number
Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.

presencePenalty?:

number
Presence penalty setting. It affects the likelihood of the model repeating information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.

frequencyPenalty?:

number
Frequency penalty setting. It affects the likelihood of the model repeatedly using the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.

stopSequences?:

string[]
Sequences that will stop the generation of the text. If the model generates any of these sequences, it will stop generating further text.

seed?:

number
The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.

maxRetries?:

number
Maximum number of retries. Set to 0 to disable retries. Default: 2.

abortSignal?:

AbortSignal
An optional abort signal that can be used to cancel the call.
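
A sketch of cancelling a generation with a standard `AbortController`; the 10-second timeout and prompt are illustrative.

```ts
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10_000);

const result = await streamUI({
  model: openai("gpt-4-turbo"),
  prompt: "Write a very long story.",
  text: ({ content }) => content,
  // Aborting the controller cancels the in-flight call.
  abortSignal: controller.signal,
});

clearTimeout(timeout);
```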

headers?:

Record<string, string>
Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.

tools:

Record<string, Tool>
Tools that are accessible to and can be called by the model.
Tool

description?:

string
Information about the purpose of the tool including details on how and when it can be used by the model.

parameters:

zod schema
The typed schema that describes the parameters of the tool and that can also be used for validation and error handling.

generate?:

(async (parameters) => ReactNode) | AsyncGenerator<ReactNode, ReactNode, void>
A function or a generator function that is called with the arguments from the tool call and yields React nodes as the UI.
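
A sketch of a tool with a generator `generate` function that yields a loading state before returning the final UI; `getWeather` and the `fetchWeather` helper are hypothetical.

```tsx
import { z } from "zod";
import { streamUI } from "ai/rsc";
import { openai } from "@ai-sdk/openai";

const result = await streamUI({
  model: openai("gpt-4-turbo"),
  prompt: "What is the weather in San Francisco?",
  text: ({ content }) => <p>{content}</p>,
  tools: {
    getWeather: {
      description: "Get the current weather for a city.",
      parameters: z.object({ city: z.string() }),
      // A generator: yield intermediate UI, then return the final UI.
      generate: async function* ({ city }) {
        yield <p>Loading weather for {city}…</p>;
        const weather = await fetchWeather(city); // hypothetical helper
        return (
          <p>
            It is {weather.temperature}°C in {city}.
          </p>
        );
      },
    },
  },
});
```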

toolChoice?:

"auto" | "none" | "required" | { "type": "tool", "toolName": string }
The tool choice setting. It specifies how tools are selected for execution. The default is "auto". "none" disables tool execution. "required" forces a tool to be called. { "type": "tool", "toolName": string } forces the named tool to be called.

text?:

(Text) => ReactNode
Callback to handle the generated tokens from the model.
Text

content:

string
The full content of the completion.

delta:

string
The text delta of the most recent chunk.

done:

boolean
Whether the model has finished generating.
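
A sketch of a `text` callback that re-renders on every delta and shows a pending marker until `done` is true; the markup is illustrative.

```tsx
const result = await streamUI({
  model: openai("gpt-4-turbo"),
  prompt: "Tell me a joke.",
  // Called on every chunk: `content` is the accumulated text so far.
  text: ({ content, done }) => (
    <div>
      {content}
      {!done && <span>…</span>}
    </div>
  ),
});
```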

onFinish?:

(result: OnFinishResult) => void
Callback that is called when the LLM response and all requested tool executions (for tools that have a `generate` function) are finished. A usage sketch follows the OnFinishResult fields below.
OnFinishResult

usage:

TokenUsage
The token usage of the generated text.
TokenUsage

promptTokens:

number
The total number of tokens in the prompt.

completionTokens:

number
The total number of tokens in the completion.

totalTokens:

number
The total number of tokens used (promptTokens plus completionTokens).

value:

ReactNode
The final UI node that was generated.

warnings:

Warning[] | undefined
Warnings from the model provider (e.g. unsupported settings).

rawResponse:

RawResponse
Optional raw response data.
RawResponse

headers:

Record<string, string>
Response headers.
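
A sketch of an `onFinish` callback that logs usage and warnings once everything has finished; the logging is illustrative.

```tsx
const result = await streamUI({
  model: openai("gpt-4-turbo"),
  prompt: "Summarize today's weather.",
  text: ({ content }) => <p>{content}</p>,
  // Runs after the response and all tool `generate` calls complete.
  onFinish: ({ usage, warnings }) => {
    console.log("prompt tokens:", usage.promptTokens);
    console.log("completion tokens:", usage.completionTokens);
    if (warnings?.length) console.warn("provider warnings:", warnings);
  },
});
```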

Returns

value:

ReactNode
The user interface based on the stream output.

text:

Promise<string>
The full text that has been generated. Resolved when the response is finished.

toolCalls:

Promise<ToolCall[]>
The tool calls that have been executed. Resolved when the response is finished.

toolResults:

Promise<ToolResult[]>
The tool results that have been generated. Resolved when all tool executions are finished.

finishReason:

Promise<'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other' | 'unknown'>
The reason why the generation finished. Resolved when the response is finished.

usage:

Promise<TokenUsage>
The token usage of the generated text. Resolved when the response is finished.
TokenUsage

promptTokens:

number
The total number of tokens in the prompt.

completionTokens:

number
The total number of tokens in the completion.

totalTokens:

number
The total number of tokens used (promptTokens plus completionTokens).

rawResponse:

RawResponse
Optional raw response data.
RawResponse

headers:

Record<string, string>
Response headers.

warnings:

Warning[] | undefined
Warnings from the model provider (e.g. unsupported settings).

textStream:

AsyncIterable<string> & ReadableStream<string>
A text stream that returns only the generated text deltas. You can use it as either an AsyncIterable or a ReadableStream. When an error occurs, the stream will throw the error.
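
A sketch that consumes the deltas as an async iterable, assuming `result` is the object returned by `streamUI`:

```ts
for await (const delta of result.textStream) {
  process.stdout.write(delta);
}
```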

fullStream:

AsyncIterable<TextStreamPart> & ReadableStream<TextStreamPart>
A stream with all events, including text deltas, tool calls, tool results, and errors. You can use it as either an AsyncIterable or a ReadableStream. When an error occurs, the stream will throw the error.
TextStreamPart

type:

'text-delta'
The type to identify the object as text delta.

textDelta:

string
The text delta.
TextStreamPart

type:

'tool-call'
The type to identify the object as tool call.

toolCallId:

string
The id of the tool call.

toolName:

string
The name of the tool, which typically would be the name of the function.

args:

object based on zod schema
Parameters generated by the model to be used by the tool.
TextStreamPart

type:

'tool-result'
The type to identify the object as tool result.

toolCallId:

string
The id of the tool call.

toolName:

string
The name of the tool, which typically would be the name of the function.

args:

object based on zod schema
Parameters generated by the model to be used by the tool.

result:

any
The result returned by the tool after execution has completed.
TextStreamPart

type:

'error'
The type to identify the object as error.

error:

Error
Describes the error that may have occurred during execution.
TextStreamPart

type:

'finish'
The type to identify the object as finish.

finishReason:

'stop' | 'length' | 'content-filter' | 'tool-calls' | 'error' | 'other' | 'unknown'
The reason the model finished generating the text.

usage:

TokenUsage
The token usage of the generated text.
TokenUsage

promptTokens:

number
The total number of tokens in the prompt.

completionTokens:

number
The total number of tokens in the completion.

totalTokens:

number
The total number of tokens used (promptTokens plus completionTokens).
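
A sketch that branches on the documented part types, again assuming `result` comes from a `streamUI` call:

```ts
for await (const part of result.fullStream) {
  switch (part.type) {
    case "text-delta":
      process.stdout.write(part.textDelta);
      break;
    case "tool-call":
      console.log("tool call:", part.toolName, part.args);
      break;
    case "tool-result":
      console.log("tool result:", part.toolName, part.result);
      break;
    case "error":
      console.error("stream error:", part.error);
      break;
    case "finish":
      console.log("finish:", part.finishReason, part.usage);
      break;
  }
}
```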

toAIStream:

(callbacks?: AIStreamCallbacksAndOptions) => AIStream
Converts the result to an `AIStream` object that is compatible with `StreamingTextResponse`. It can be used with the `useChat` and `useCompletion` hooks.

pipeAIStreamToResponse:

(response: ServerResponse, init?: { headers?: Record<string, string>; status?: number }) => void
Writes stream data output to a Node.js response-like object. It sets a `Content-Type` header to `text/plain; charset=utf-8` and writes each stream data part as a separate chunk.

pipeTextStreamToResponse:

(response: ServerResponse, init?: { headers?: Record<string, string>; status?: number }) => void
Writes text delta output to a Node.js response-like object. It sets a `Content-Type` header to `text/plain; charset=utf-8` and writes each text delta as a separate chunk.

toAIStreamResponse:

(init?: ResponseInit) => Response
Converts the result to a streamed response object with a stream data part stream. It can be used with the `useChat` and `useCompletion` hooks.

toTextStreamResponse:

(init?: ResponseInit) => Response
Creates a simple text stream response. Each text delta is encoded as UTF-8 and sent as a separate chunk. Non-text-delta events are ignored.
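
A sketch of a route handler that returns the text stream directly; the route shape and request body are illustrative.

```ts
import { streamUI } from "ai/rsc";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamUI({
    model: openai("gpt-4-turbo"),
    prompt,
  });

  // Each text delta is sent as a separate UTF-8 chunk.
  return result.toTextStreamResponse();
}
```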

Examples