Qwen Provider

younis-ahmed/qwen-ai-provider is a community provider that uses Qwen to provide language model support for the AI SDK.

Setup

The Qwen provider is available in the qwen-ai-provider module. You can install it with your preferred package manager:

pnpm add qwen-ai-provider
npm install qwen-ai-provider
yarn add qwen-ai-provider

Provider Instance

You can import the default provider instance qwen from qwen-ai-provider:

import { qwen } from 'qwen-ai-provider';

If you need a customized setup, you can import createQwen from qwen-ai-provider and create a provider instance with your settings:

import { createQwen } from 'qwen-ai-provider';

const qwen = createQwen({
  // optional settings, e.g.
  // baseURL: 'https://qwen/api/v1',
});

You can use the following optional settings to customize the Qwen provider instance:

  • baseURL string

    Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is https://dashscope-intl.aliyuncs.com/compatible-mode/v1.

  • apiKey string

    API key that is being sent using the Authorization header. It defaults to the DASHSCOPE_API_KEY environment variable.

  • headers Record<string,string>

    Custom headers to include in the requests.

  • fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>

    Custom fetch implementation. Defaults to the global fetch function. You can use it as middleware to intercept requests, or to provide a custom fetch implementation, e.g. for testing.
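
As a sketch, a customized provider instance using these settings might look like the following (the custom header name and value are placeholders, not required by the provider):

import { createQwen } from 'qwen-ai-provider';

const qwen = createQwen({
  // defaults to https://dashscope-intl.aliyuncs.com/compatible-mode/v1
  baseURL: 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1',
  // defaults to the DASHSCOPE_API_KEY environment variable
  apiKey: process.env.DASHSCOPE_API_KEY,
  // placeholder custom header
  headers: { 'X-Request-Source': 'my-app' },
});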

Language Models

You can create models that call the Qwen chat API using a provider instance. The first argument is the model id, e.g. qwen-plus. Some Qwen chat models support tool calls.

const model = qwen('qwen-plus');
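
For models that support tool calls, a minimal sketch could look like the following, assuming the AI SDK's tool helper with a zod parameter schema (the weather tool and its implementation are hypothetical):

import { qwen } from 'qwen-ai-provider';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text } = await generateText({
  model: qwen('qwen-plus'),
  tools: {
    // hypothetical tool for illustration
    weather: tool({
      description: 'Get the weather for a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperature: 22 }),
    }),
  },
  maxSteps: 2, // allow a follow-up generation after the tool result
  prompt: 'What is the weather like in Berlin?',
});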

Example

You can use Qwen language models to generate text with the generateText function:

import { qwen } from 'qwen-ai-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: qwen('qwen-plus'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

Qwen language models can also be used in the streamText, generateObject, streamObject, and streamUI functions (see AI SDK Core and AI SDK RSC).
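
For example, streaming a completion could look like the following sketch using streamText (the prompt is arbitrary):

import { qwen } from 'qwen-ai-provider';
import { streamText } from 'ai';

const result = await streamText({
  model: qwen('qwen-plus'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

// print the text as it is streamed
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}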

Model Capabilities

Model | Image Input | Object Generation | Tool Usage | Tool Streaming
--- | --- | --- | --- | ---
qwen-vl-max | ✓ | ✓ | ✓ | ✓
qwen-plus-latest | ✗ | ✓ | ✓ | ✓
qwen-max | ✗ | ✓ | ✓ | ✓
qwen2.5-72b-instruct | ✗ | ✓ | ✓ | ✓
qwen2.5-14b-instruct-1m | ✗ | ✓ | ✓ | ✓
qwen2.5-vl-72b-instruct | ✓ | ✓ | ✓ | ✓

The table above lists popular models. Please see the Qwen docs for a full list of available models. You can also pass any available provider model ID as a string if needed.

Embedding Models

You can create models that call the Qwen embeddings API using the .textEmbeddingModel() factory method.

const model = qwen.textEmbeddingModel('text-embedding-v3');
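
As a sketch, such a model can be used with the AI SDK's embed function (the input value is arbitrary):

import { qwen } from 'qwen-ai-provider';
import { embed } from 'ai';

const { embedding } = await embed({
  model: qwen.textEmbeddingModel('text-embedding-v3'),
  value: 'sunny day at the beach',
});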

Model Capabilities

Model | Default Dimensions | Maximum number of rows | Maximum tokens per row
--- | --- | --- | ---
text-embedding-v3 | 1024 | 6 | 8,192