OpenAI Compatible Providers

The OpenAI Compatible Provider package lets you use language model providers that implement the OpenAI API.

Below we focus on the general setup and provider instance creation. You can also write a custom provider package leveraging the OpenAI Compatible package.

The AI SDK provides detailed documentation for several OpenAI compatible providers. The general setup and provider instance creation are the same for all of them.

Setup

The OpenAI Compatible provider is available via the @ai-sdk/openai-compatible module. You can install it with:

pnpm add @ai-sdk/openai-compatible

Provider Instance

To use an OpenAI compatible provider, you can create a custom provider instance with the createOpenAICompatible function from @ai-sdk/openai-compatible:

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const provider = createOpenAICompatible({
  name: 'provider-name',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

You can use the following optional settings to customize the provider instance:

  • baseURL string

    Set the URL prefix for API calls.

  • apiKey string

    API key for authenticating requests. If specified, adds an Authorization header to request headers with the value Bearer <apiKey>. This will be added before any headers potentially specified in the headers option.

  • headers Record<string,string>

    Optional custom headers to include in requests. These will be added to request headers after any headers potentially added by use of the apiKey option.

  • queryParams Record<string,string>

    Optional custom URL query parameters to include in request URLs.

  • fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>

    Custom fetch implementation. Defaults to the global fetch function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation, e.g. for testing.
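As a sketch of the `fetch` setting, the wrapper below logs each outgoing request URL before delegating to an inner fetch implementation. The wrapper itself is illustrative, not part of the SDK; the commented usage assumes `@ai-sdk/openai-compatible` is installed.

```typescript
// Hedged sketch: a fetch middleware that logs the request URL and then
// delegates to the wrapped implementation unchanged.
type FetchLike = typeof fetch;

const loggingFetch =
  (inner: FetchLike): FetchLike =>
  async (input, init) => {
    console.log('outgoing request:', String(input));
    return inner(input, init);
  };

// Usage with the provider (requires @ai-sdk/openai-compatible):
// const provider = createOpenAICompatible({
//   name: 'provider-name',
//   baseURL: 'https://api.provider.com/v1',
//   fetch: loggingFetch(globalThis.fetch),
// });
```

The same pattern works for stubbing responses in tests: pass a fake inner fetch that returns canned `Response` objects instead of hitting the network.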

Language Models

You can create provider models using a provider instance. The first argument is the model id, e.g. model-id.

const model = provider('model-id');

Example

You can use provider language models to generate text with the generateText function:

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

const provider = createOpenAICompatible({
  name: 'provider-name',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const { text } = await generateText({
  model: provider('model-id'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

Including model ids for auto-completion

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

type ExampleChatModelIds =
  | 'meta-llama/Llama-3-70b-chat-hf'
  | 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'
  | (string & {});

type ExampleCompletionModelIds =
  | 'codellama/CodeLlama-34b-Instruct-hf'
  | 'Qwen/Qwen2.5-Coder-32B-Instruct'
  | (string & {});

type ExampleEmbeddingModelIds =
  | 'BAAI/bge-large-en-v1.5'
  | 'bert-base-uncased'
  | (string & {});

const model = createOpenAICompatible<
  ExampleChatModelIds,
  ExampleCompletionModelIds,
  ExampleEmbeddingModelIds
>({
  name: 'example',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.example.com/v1',
});

// Subsequent calls to e.g. `model.chatModel` will auto-complete the model id
// from the list of `ExampleChatModelIds` while still allowing free-form
// strings as well.
const { text } = await generateText({
  model: model.chatModel('meta-llama/Llama-3-70b-chat-hf'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
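The `(string & {})` member in each union deserves a note: a plain `'a' | 'b' | string` union collapses to `string` and loses editor auto-completion, while intersecting `string` with `{}` keeps the literal members visible to the language service but still accepts any string. A minimal standalone illustration (the `ModelId` type and both ids are hypothetical):

```typescript
// `(string & {})` preserves auto-completion for the listed literals while
// still allowing arbitrary strings to type-check.
type ModelId = 'model-a' | 'model-b' | (string & {});

const known: ModelId = 'model-a'; // suggested by auto-complete
const custom: ModelId = 'some-experimental-id'; // still allowed
```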

Custom query parameters

Some providers may require custom query parameters. An example is the Azure AI Model Inference API which requires an api-version query parameter.

You can set these via the optional queryParams provider setting. These will be added to all requests made by the provider.

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const provider = createOpenAICompatible({
  name: 'provider-name',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
  queryParams: {
    'api-version': '1.0.0',
  },
});

For example, with the above configuration, API requests would include the query parameter in the URL like: https://api.provider.com/v1/chat/completions?api-version=1.0.0.
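To make the URL construction concrete, the helper below illustrates how a `queryParams` map combines with a base URL and an endpoint path into the final request URL. This is an illustration only, not the SDK's internal code; `buildRequestUrl` is a hypothetical helper.

```typescript
// Illustration: append each queryParams entry to the request URL built from
// the base URL and endpoint path.
function buildRequestUrl(
  baseURL: string,
  path: string,
  queryParams: Record<string, string>,
): string {
  const url = new URL(baseURL + path);
  for (const [key, value] of Object.entries(queryParams)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}
```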