# DeepInfra Provider
The DeepInfra provider offers access to state-of-the-art models through the DeepInfra API, including Llama 3, Mixtral, Qwen, and many other popular open-source models.
## Setup
The DeepInfra provider is available via the `@ai-sdk/deepinfra` module. You can install it with:

```bash
pnpm add @ai-sdk/deepinfra
```
## Provider Instance
You can import the default provider instance `deepinfra` from `@ai-sdk/deepinfra`:

```ts
import { deepinfra } from '@ai-sdk/deepinfra';
```
If you need a customized setup, you can import `createDeepInfra` from `@ai-sdk/deepinfra` and create a provider instance with your settings:

```ts
import { createDeepInfra } from '@ai-sdk/deepinfra';

const deepinfra = createDeepInfra({
  apiKey: process.env.DEEPINFRA_API_KEY ?? '',
});
```
You can use the following optional settings to customize the DeepInfra provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://api.deepinfra.com/v1/openai`.

- **apiKey** _string_

  API key that is sent using the `Authorization` header. It defaults to the `DEEPINFRA_API_KEY` environment variable.

- **headers** _Record&lt;string,string&gt;_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) =&gt; Promise&lt;Response&gt;_

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation, e.g. for testing.
## Language Models
You can create language models using a provider instance. The first argument is the model ID, for example:

```ts
import { deepinfra } from '@ai-sdk/deepinfra';
import { generateText } from 'ai';

const { text } = await generateText({
  model: deepinfra('meta-llama/Meta-Llama-3.1-70B-Instruct'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
DeepInfra language models can also be used in the `streamText` and `streamUI` functions (see AI SDK Core and AI SDK RSC).
## Model Capabilities
| Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
| ----- | ----------- | ----------------- | ---------- | -------------- |
| `meta-llama/Llama-3.3-70B-Instruct-Turbo` | | | | |
| `meta-llama/Llama-3.3-70B-Instruct` | | | | |
| `meta-llama/Meta-Llama-3.1-405B-Instruct` | | | | |
| `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` | | | | |
| `meta-llama/Meta-Llama-3.1-70B-Instruct` | | | | |
| `meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo` | | | | |
| `meta-llama/Meta-Llama-3.1-8B-Instruct` | | | | |
| `meta-llama/Llama-3.2-11B-Vision-Instruct` | | | | |
| `meta-llama/Llama-3.2-90B-Vision-Instruct` | | | | |
| `mistralai/Mixtral-8x7B-Instruct-v0.1` | | | | |
| `nvidia/Llama-3.1-Nemotron-70B-Instruct` | | | | |
| `Qwen/Qwen2-7B-Instruct` | | | | |
| `Qwen/Qwen2.5-72B-Instruct` | | | | |
| `Qwen/Qwen2.5-Coder-32B-Instruct` | | | | |
| `Qwen/QwQ-32B-Preview` | | | | |
| `google/codegemma-7b-it` | | | | |
| `google/gemma-2-9b-it` | | | | |
| `microsoft/WizardLM-2-8x22B` | | | | |
The table above lists popular models. Please see the DeepInfra docs for a full list of available models. You can also pass any available provider model ID as a string if needed.