
Settings

All AI functions (generateText, streamText, generateObject, streamObject) support the following common settings in addition to the model and the prompt:

  • maxTokens - Maximum number of tokens to generate.
  • temperature - Temperature setting. This is a number between 0 (almost no randomness) and 1 (very random). It is recommended to set either temperature or topP, but not both.
  • topP - Nucleus sampling. This is a number between 0 and 1. E.g. 0.1 would mean that only tokens with the top 10% probability mass are considered. It is recommended to set either temperature or topP, but not both.
  • presencePenalty - Presence penalty setting. It affects how likely the model is to repeat information that is already in the prompt. The presence penalty is a number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition). 0 means no penalty.
  • frequencyPenalty - Frequency penalty setting. It affects how likely the model is to repeatedly use the same words or phrases. The frequency penalty is a number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition). 0 means no penalty.
  • seed - The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
  • maxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.
  • abortSignal - An optional abort signal that can be used to cancel the call (see the sketch after this list).
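
For example, abortSignal can be combined with a standard AbortController to enforce a client-side timeout. This is a minimal sketch, not part of the documented API: the 5-second timeout is illustrative, and model is assumed to be a language model configured via a provider.

import { experimental_generateText } from 'ai';

const controller = new AbortController();

// Abort the call if it has not completed within 5 seconds (illustrative value).
setTimeout(() => controller.abort(), 5000);

try {
  const result = await experimental_generateText({
    model, // assumed: a configured provider model
    prompt: 'Invent a new holiday and describe its traditions.',
    abortSignal: controller.signal,
  });
  console.log(result.text);
} catch (error) {
  // An aborted call rejects; handle the cancellation here.
  console.error('Call was cancelled or failed:', error);
}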

Some providers do not support all common settings. If you use a setting with a provider that does not support it, a warning will be included in the AI function result object.
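
For example, you can inspect the result for these warnings after the call. This is a minimal sketch; it assumes the result object exposes the warnings as a warnings array, and model is again a configured provider model.

const result = await experimental_generateText({
  model, // a provider that may not support every setting below
  seed: 42,
  presencePenalty: 0.5,
  prompt: 'Invent a new holiday and describe its traditions.',
});

// Unsupported settings do not fail the call; they surface here instead
// (assumed: the result exposes a `warnings` array).
if (result.warnings && result.warnings.length > 0) {
  console.warn('Provider warnings:', result.warnings);
}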

Example

import { experimental_generateText } from 'ai';

// `model` is assumed to be a language model configured via a provider.
const result = await experimental_generateText({
  model,
  maxTokens: 512,
  temperature: 0.3,
  maxRetries: 5,
  prompt: 'Invent a new holiday and describe its traditions.',
});
