Settings

Large language models (LLMs) typically provide settings to control their output.

All Vercel AI SDK functions support the following common settings in addition to the model and the prompt:

import { generateText } from 'ai';

const result = await generateText({
  model, // a provider-specific language model instance
  maxTokens: 512,
  temperature: 0.3,
  maxRetries: 5,
  prompt: 'Invent a new holiday and describe its traditions.',
});

Some providers do not support all common settings. If you use a setting with a provider that does not support it, a warning will be included in the AI function result object.
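
For example, you can inspect the warnings property of the result object to see whether any settings were ignored (a minimal sketch; the exact warning contents depend on the provider):

const result = await generateText({
  model,
  seed: 42, // not supported by every provider
  prompt: 'Invent a new holiday and describe its traditions.',
});

if (result.warnings?.length) {
  console.warn('Unsupported settings:', result.warnings);
}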

maxTokens

Maximum number of tokens to generate.

temperature

Temperature setting.

The value is passed through to the provider. The range depends on the provider and model. For most providers, 0 means almost deterministic results, and higher values mean more randomness.

It is recommended to set either temperature or topP, but not both.

topP

Nucleus sampling.

The value is passed through to the provider. The range depends on the provider and model. For most providers, nucleus sampling is a number between 0 and 1. E.g. 0.1 would mean that only tokens with the top 10% probability mass are considered.

It is recommended to set either temperature or topP, but not both.
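
For example, a call that uses nucleus sampling instead of temperature (hypothetical value, reusing the setup from the example above):

const result = await generateText({
  model,
  topP: 0.1, // only consider tokens in the top 10% probability mass
  prompt: 'Invent a new holiday and describe its traditions.',
});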

presencePenalty

The presence penalty affects the likelihood that the model will repeat information that is already in the prompt.

The value is passed through to the provider. The range depends on the provider and model. For most providers, 0 means no penalty.

frequencyPenalty

The frequency penalty affects the likelihood that the model will repeatedly use the same words or phrases.

The value is passed through to the provider. The range depends on the provider and model. For most providers, 0 means no penalty.
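
For example, both penalties can be raised to discourage repetitive output (hypothetical values; the supported range is provider-specific):

const result = await generateText({
  model,
  presencePenalty: 0.5, // discourage repeating content from the prompt
  frequencyPenalty: 0.5, // discourage reusing the same words and phrases
  prompt: 'Invent a new holiday and describe its traditions.',
});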

seed

The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
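
A minimal sketch, assuming the model supports seeded sampling; repeated calls with the same seed and settings should return the same result:

const result = await generateText({
  model,
  seed: 42,
  temperature: 0, // a fixed seed plus low temperature gives reproducible output
  prompt: 'Invent a new holiday and describe its traditions.',
});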

maxRetries

Maximum number of retries. Set to 0 to disable retries. Default: 2.
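
For example, to fail immediately instead of retrying a failed call:

const result = await generateText({
  model,
  maxRetries: 0, // disable automatic retries
  prompt: 'Invent a new holiday and describe its traditions.',
});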

abortSignal

An optional abort signal that can be used to cancel the call.
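
For example, you can cancel a call after a timeout using the standard AbortSignal.timeout helper:

const result = await generateText({
  model,
  abortSignal: AbortSignal.timeout(5000), // cancel the call after 5 seconds
  prompt: 'Invent a new holiday and describe its traditions.',
});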