
Generating Text

Large language models (LLMs) can generate text in response to a prompt, which can contain instructions and information to process. For example, you can ask a model to come up with a recipe, draft an email, or summarize a document.

You can generate text using the generateText function. This function is ideal for non-interactive use cases where you need to write text (e.g. drafting an email or summarizing web pages) and for agents that use tools.

import { generateText } from 'ai';

const { text } = await generateText({
  model,
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

You can use more advanced prompts to generate text with more complex instructions and content:

import { generateText } from 'ai';

const { text } = await generateText({
  model,
  system:
    'You are a professional writer. You write simple, clear, and concise content.',
  prompt: `summarize the following article in 3-5 sentences:\n${article}`,
});

Streaming Text

Depending on your model and prompt, it can take a large language model (LLM) up to a minute to finish generating its response. This delay can be unacceptable for interactive use cases such as chatbots or real-time applications, where users expect immediate responses.

Vercel AI SDK Core provides the streamText function which simplifies streaming text from LLMs.

import { streamText } from 'ai';

const result = await streamText({
  model,
  prompt: 'Invent a new holiday and describe its traditions.',
});

// use textStream as an async iterable:
for await (const textPart of result.textStream) {
  console.log(textPart);
}

result.textStream is also a ReadableStream, so you can use it in a browser or Node.js environment.

import { streamText } from 'ai';

const result = await streamText({
  model,
  prompt: 'Invent a new holiday and describe its traditions.',
});

// use textStream as a ReadableStream:
const reader = result.textStream.getReader();
while (true) {
  const { done, value } = await reader.read();
  if (done) {
    break;
  }
  process.stdout.write(value);
}

Advanced LLM features such as tool calling and structured data generation are built on top of text generation.