AI SDK RSC

streamUI

A helper function to create a streamable UI from LLM providers. This function is similar to AI SDK Core APIs and supports the same model interfaces.

Import

import { streamUI } from "ai/rsc"
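A minimal call might look like the following sketch. The @ai-sdk/openai provider import is an assumption for illustration; any AI SDK Core model works, and the text callback and tools parameter are described in detail below.

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai'; // assumed provider package

// Stream the model's text output as a React node.
const result = await streamUI({
  model: openai('gpt-4-turbo'),
  prompt: 'Tell me a short story about a lighthouse.',
  // Called with each chunk of generated text.
  text: ({ content }) => <div>{content}</div>,
});
```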

Object Parameter

model:

LanguageModelV1
The language model to use. Example: openai('gpt-4-turbo')

initial?:

ReactNode
The initial UI to render.

system:

string
The system prompt to use that specifies the behavior of the model.

prompt:

string
The input prompt to generate the text from.

messages:

Array<UserMessage | AssistantMessage | ToolMessage>
A list of messages that represent a conversation.
UserMessage

role:

'user'
The role for the user message.

content:

string | Array<TextPart | ImagePart>
The content of the message.
TextPart

type:

'text'
The type of the message part.

text:

string
The text content of the message part.
ImagePart

type:

'image'
The type of the message part.

image:

ArrayBuffer | Uint8Array | Buffer | URL
The image content of the message part.
AssistantMessage

role:

'assistant'
The role for the assistant message.

content:

string | Array<TextPart | ToolCallPart>
The content of the message.
TextPart

type:

'text'
The type of the message part.

text:

string
The text content of the message part.
ToolCallPart

type:

'tool-call'
The type of the message part.

toolCallId:

string
The id of the tool call.

toolName:

string
The name of the tool, which typically would be the name of the function.

args:

object based on zod schema
Parameters generated by the model to be used by the tool.
ToolMessage

role:

'tool'
The role for the tool message.

content:

Array<ToolResultPart>
The content of the message.
ToolResultPart

type:

'tool-result'
The type of the message part.

toolCallId:

string
The id of the tool call the result corresponds to.

toolName:

string
The name of the tool the result corresponds to.

result:

any
The result returned by the tool after execution.

isError?:

boolean
Optional flag indicating whether the result is an error or contains an error message.
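The message and part shapes above can be illustrated with plain object literals. The type aliases below are simplified local stand-ins for the SDK's types (ImagePart is omitted for brevity), and the weather tool call is a hypothetical example:

```ts
// Simplified local stand-ins for the SDK message types.
type TextPart = { type: 'text'; text: string };
type ToolCallPart = { type: 'tool-call'; toolCallId: string; toolName: string; args: object };
type ToolResultPart = {
  type: 'tool-result';
  toolCallId: string;
  toolName: string;
  result: any;
  isError?: boolean;
};

type UserMessage = { role: 'user'; content: string | TextPart[] };
type AssistantMessage = { role: 'assistant'; content: string | Array<TextPart | ToolCallPart> };
type ToolMessage = { role: 'tool'; content: ToolResultPart[] };

// A conversation where the model called a weather tool and got a result back.
const messages: Array<UserMessage | AssistantMessage | ToolMessage> = [
  { role: 'user', content: 'What is the weather in Berlin?' },
  {
    role: 'assistant',
    content: [
      { type: 'tool-call', toolCallId: 'call_1', toolName: 'getWeather', args: { city: 'Berlin' } },
    ],
  },
  {
    role: 'tool',
    content: [
      { type: 'tool-result', toolCallId: 'call_1', toolName: 'getWeather', result: { tempC: 18 } },
    ],
  },
];
```

Note how the toolCallId in the tool-result part matches the id of the assistant's tool-call part — this is how a result is linked back to the call it answers.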

maxTokens?:

number
Maximum number of tokens to generate.

temperature?:

number
A number between 0 and 1 that controls the randomness of the model's output; lower values produce more deterministic results.

topP?:

number
A number between 0 and 1. Only tokens whose cumulative probability falls within the top P mass are considered (nucleus sampling).

presencePenalty?:

number
A number between -1 and 1 that affects how likely the model is to repeat information that is already in the prompt.

frequencyPenalty?:

number
A number between -1 and 1 that affects how likely the model is to repeatedly use the same words or phrases.

seed?:

number
The seed to use for random sampling. If set and supported by the model, calls will produce deterministic results.

maxRetries?:

number
The maximum number of retries to attempt.

abortSignal?:

AbortSignal
An optional abort signal that can be used to cancel the call while it is in progress.

tools:

Record<string, Tool>
Tools that are accessible to and can be called by the model.
Tool

description?:

string
Information about the purpose of the tool including details on how and when it can be used by the model.

parameters:

zod schema
The typed schema that describes the parameters of the tool and that can also be used for validation and error handling.

generate?:

(async (parameters) => ReactNode) | AsyncGenerator<ReactNode, ReactNode, void>
A function or a generator function that is called with the arguments from the tool call and produces React nodes as the UI. A generator may yield intermediate nodes (for example, a loading state) before returning the final node.
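The generator form lets a tool show intermediate UI before the final result is ready. The runnable sketch below mirrors that yield-then-return flow with strings standing in for React nodes; the showWeather tool and its hard-coded temperature are illustrative assumptions:

```ts
// Async generator: yields an interim "frame", then returns the final one,
// mirroring the AsyncGenerator<ReactNode, ReactNode, void> shape of `generate`.
async function* showWeather(args: { city: string }): AsyncGenerator<string, string, void> {
  yield `Loading weather for ${args.city}...`; // interim UI (e.g. a spinner)
  const tempC = 18;                            // stand-in for a real API call
  return `It is ${tempC}°C in ${args.city}.`;  // final UI
}

// Drain the generator the way a renderer would: collect each yielded
// frame, then the returned value.
async function collectFrames(gen: AsyncGenerator<string, string, void>): Promise<string[]> {
  const frames: string[] = [];
  while (true) {
    const { value, done } = await gen.next();
    frames.push(value);
    if (done) return frames;
  }
}
```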

text?:

(text: Text) => ReactNode
Callback to handle the generated tokens from the model.
Text

content:

string
The full content of the completion.

delta:

string
The text delta for the current chunk.

done:

boolean
Whether the model has finished generating.
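The callback contract above can be exercised without the SDK: each invocation receives the accumulated content, the new delta, and a done flag on the final chunk. A sketch under that assumption (the SDK invokes the callback for you; replayDeltas is a hypothetical helper for illustration):

```ts
type Text = { content: string; delta: string; done: boolean };

// Simulate streaming: call `onText` once per chunk with the running
// content, the new delta, and whether the stream has finished.
function replayDeltas(deltas: string[], onText: (t: Text) => void): string {
  let content = '';
  deltas.forEach((delta, i) => {
    content += delta;
    onText({ content, delta, done: i === deltas.length - 1 });
  });
  return content;
}
```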

Returns

The generated UI, which can be any valid ReactNode.

Examples
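A sketch of a tool-calling usage, combining the parameters described above. The @ai-sdk/openai provider, the LoadingSpinner and WeatherCard components, and the fetchWeather helper are assumptions for illustration:

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai'; // assumed provider package
import { z } from 'zod';

const result = await streamUI({
  model: openai('gpt-4-turbo'),
  prompt: 'Get the weather for San Francisco',
  // Plain text output from the model is rendered as it streams in.
  text: ({ content }) => <div>{content}</div>,
  tools: {
    getWeather: {
      description: 'Get the weather for a location',
      parameters: z.object({ location: z.string() }),
      // Generator form: yield a loading state, then return the final UI.
      generate: async function* ({ location }) {
        yield <LoadingSpinner />; // assumed component
        const weather = await fetchWeather(location); // assumed helper
        return <WeatherCard location={location} weather={weather} />; // assumed component
      },
    },
  },
});
```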