LM Studio Provider
LM Studio is a user interface for running local models.
It provides an OpenAI-compatible API server that you can use with the AI SDK. You can start the local server from the Local Server tab in the LM Studio UI ("Start Server" button).
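Before wiring up the SDK, it can be useful to check that the server is actually reachable. As a sketch, assuming the default port 1234, you can query the OpenAI-compatible models endpoint; if the server is not running, the command prints a hint instead of failing:

```shell
# Check that the LM Studio server is reachable (default port 1234).
# Lists the currently loaded models on success.
curl -s http://localhost:1234/v1/models || echo "LM Studio server is not running"
```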
Setup
The LM Studio provider is available via the @ai-sdk/openai module, since LM Studio exposes an OpenAI-compatible API. You can install it with:
pnpm add @ai-sdk/openai
Provider Instance
To use LM Studio, you can create a custom provider instance with the createOpenAI function from @ai-sdk/openai:
import { createOpenAI } from '@ai-sdk/openai';

const lmstudio = createOpenAI({
  name: 'lmstudio',
  baseURL: 'http://localhost:1234/v1',
  maxRetries: 1, // immediately error if the server is not running
});
LM Studio uses port 1234 by default, but you can change it in the app's Local Server tab.
Language Models
You can interact with local LLMs in LM Studio using a provider instance.
The first argument is the model id, e.g. llama-3.2-1b.
const model = lmstudio('llama-3.2-1b');
To be able to use a model, you need to download it first.
Example
You can use LM Studio language models to generate text with the generateText function:
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const lmstudio = createOpenAI({
  name: 'lmstudio',
  apiKey: 'not-needed',
  baseURL: 'http://localhost:1234/v1',
});

const { text } = await generateText({
  model: lmstudio('llama-3.2-1b'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
LM Studio language models can also be used with streamText.
Embedding Models
You can create models that call the LM Studio embeddings API using the .embedding() factory method.
const model = lmstudio.embedding('text-embedding-nomic-embed-text-v1.5');
Example - Embedding a Single Value
import { createOpenAI } from '@ai-sdk/openai';
import { embed } from 'ai';

const lmstudio = createOpenAI({
  name: 'lmstudio',
  apiKey: 'not-needed',
  baseURL: 'http://localhost:1234/v1',
});

// 'embedding' is a single embedding object (number[])
const { embedding } = await embed({
  model: lmstudio.embedding('text-embedding-nomic-embed-text-v1.5'),
  value: 'sunny day at the beach',
});
Example - Embedding Many Values
When loading data, e.g. when preparing a data store for retrieval-augmented generation (RAG), it is often useful to embed many values at once (batch embedding).
The AI SDK provides the embedMany function for this purpose. Similar to embed, you can use it with embedding models, e.g. lmstudio.embedding('text-embedding-nomic-embed-text-v1.5') or lmstudio.embedding('text-embedding-bge-small-en-v1.5').
import { createOpenAI } from '@ai-sdk/openai';
import { embedMany } from 'ai';

const lmstudio = createOpenAI({
  name: 'lmstudio',
  apiKey: 'not-needed',
  baseURL: 'http://localhost:1234/v1',
});

// 'embeddings' is an array of embedding objects (number[][]).
// It is sorted in the same order as the input values.
const { embeddings } = await embedMany({
  model: lmstudio.embedding('text-embedding-nomic-embed-text-v1.5'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});