Mistral
The legacy Mistral integration is not compatible with the AI SDK 3.1 functions. It is recommended to use the AI SDK Mistral Provider instead.
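For reference, a chat Route Handler written against the newer provider looks roughly like the sketch below. It assumes the @ai-sdk/mistral package; the model id and the exact streaming-response helper can vary with your ai version, so treat this as an outline rather than the canonical implementation. The rest of this page documents the legacy integration.

import { mistral } from '@ai-sdk/mistral';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Stream a chat completion through the provider-based API.
  // 'mistral-small-latest' is an assumed model id; pick the one you need.
  const result = await streamText({
    model: mistral('mistral-small-latest'),
    messages,
  });

  // The exact helper name depends on your `ai` version
  // (e.g. toAIStreamResponse / toDataStreamResponse).
  return result.toAIStreamResponse();
}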
The AI SDK provides a set of utilities to make it easy to use Mistral's APIs and models. In this guide, we'll walk through how to use the utilities to create a chatbot and a text completion app.
Guide: Mistral Chatbot
Create a Next.js app
Create a Next.js application and install ai and @mistralai/mistralai, the AI SDK and the Mistral API client, respectively.
pnpm dlx create-next-app my-ai-app
cd my-ai-app
pnpm add ai @mistralai/mistralai
Add your Mistral API Key to .env
Create a .env file in your project root and add your Mistral API Key:
MISTRAL_API_KEY=xxxxxxxxx
Create a Route Handler
Create a Next.js Route Handler that we'll use to generate a chat completion via Mistral and then stream back to our Next.js app.

For this example, we'll create a Route Handler at app/api/chat/route.ts that accepts a POST request with a messages array:
import { MistralStream, StreamingTextResponse } from 'ai';
import MistralClient from '@mistralai/mistralai';

const mistral = new MistralClient(process.env.MISTRAL_API_KEY || '');

export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json();

  const response = mistral.chatStream({
    model: 'mistral-small',
    maxTokens: 1000,
    messages,
  });

  // Convert the response into a friendly text-stream. The Mistral client responses are
  // compatible with the AI SDK MistralStream adapter.
  const stream = MistralStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
The AI SDK provides two utility helpers to make the above seamless. First, we pass the streaming response we receive from Mistral to MistralStream. This method decodes and extracts the text tokens in the response and then re-encodes them properly for simple consumption. We can then pass that new stream directly to StreamingTextResponse. This is another utility class that extends the normal Node/Edge Runtime Response class with the default headers you probably want (hint: 'Content-Type': 'text/plain; charset=utf-8' is already set for you).
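If it helps to see what that means in practice, the sketch below is roughly what StreamingTextResponse does for you; it is a simplification, not the library's actual source.

// Simplified sketch of StreamingTextResponse: wrap the text stream in a
// standard Response with the streaming-friendly default headers.
function streamingTextResponse(stream: ReadableStream): Response {
  return new Response(stream, {
    status: 200,
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}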
Wire up the UI
Create a Client component with a form that we'll use to gather the prompt from the user and then display the streamed completion.
By default, the useChat hook will use the POST Route Handler we created above (it defaults to /api/chat). You can override this by passing an api prop to useChat({ api: '...' }).
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="mx-auto w-full max-w-md py-24 flex flex-col stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
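If your Route Handler lives at a different path, pass it to the hook explicitly. This fragment would replace the useChat() call in the component above; '/api/mistral-chat' is a hypothetical path, not one created in this guide.

// Point useChat at a custom endpoint instead of the default '/api/chat'.
const { messages, input, handleInputChange, handleSubmit } = useChat({
  api: '/api/mistral-chat',
});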
Guide: Text Completion
Use the Completion API
Similar to the chatbot example above, we'll create a Next.js Route Handler that generates a text completion via the Mistral API and then streams it back to our Next.js app. For this example, we'll create the Route Handler at app/api/completion/route.ts; it accepts a POST request with a prompt string:
import MistralClient from '@mistralai/mistralai';
import { MistralStream, StreamingTextResponse } from 'ai';

const mistral = new MistralClient(process.env.MISTRAL_API_KEY || '');

export async function POST(req: Request) {
  // Extract the `prompt` from the body of the request
  const { prompt } = await req.json();

  // Ask Mistral for a streaming completion given the prompt
  const response = mistral.chatStream({
    model: 'mistral-small',
    maxTokens: 1000,
    messages: [{ role: 'user', content: prompt }],
  });

  // Convert the response into a friendly text-stream
  const stream = MistralStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
Wire up the UI
We can use the useCompletion hook to make it easy to wire up the UI. By default, the useCompletion hook will use the POST Route Handler we created above (it defaults to /api/completion). You can override this by passing an api prop to useCompletion({ api: '...' }).
'use client';
import { useCompletion } from 'ai/react';
export default function Completion() {
  const {
    completion,
    input,
    stop,
    isLoading,
    handleInputChange,
    handleSubmit,
    error,
  } = useCompletion({
    api: '/api/completion',
  });

  return (
    <div className="mx-auto w-full max-w-md py-24 flex flex-col stretch">
      <h4 className="text-xl font-bold text-gray-900 md:text-xl pb-4">
        useCompletion Example
      </h4>
      {error && (
        <div className="fixed top-0 left-0 w-full p-4 text-center bg-red-500 text-white">
          {error.message}
        </div>
      )}
      <output>{completion}</output>
      <form
        onSubmit={handleSubmit}
        className="fixed w-full max-w-xl bottom-0 mb-8 items-stretch flex"
      >
        <input
          className="border border-gray-300 rounded m-2 shadow-xl p-2 flex-grow"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
        <button
          disabled={isLoading}
          type="submit"
          className="inline-block bg-gray-100 hover:bg-gray-300 text-gray-700 font-semibold hover:text-white py-2 px-4 border border-gray-300 hover:border-transparent rounded m-2 disabled:opacity-50"
        >
          Send
        </button>
        <button
          disabled={!isLoading}
          type="button"
          onClick={stop}
          className="inline-block bg-gray-100 hover:bg-gray-300 text-gray-700 font-semibold hover:text-white py-2 px-4 border border-gray-300 hover:border-transparent rounded m-2 disabled:opacity-50"
        >
          Stop
        </button>
      </form>
    </div>
  );
}
Guide: Save to Database After Completion
It’s common to want to save the result of a completion to a database after streaming it back to the user. The MistralStream adapter accepts a couple of optional callbacks that can be used to do this.
export async function POST(req: Request) {
  // ...

  // Convert the response into a friendly text-stream
  const stream = MistralStream(response, {
    onStart: async () => {
      // This callback is called when the stream starts
      // You can use this to save the prompt to your database
      await savePromptToDatabase(prompt);
    },
    onToken: async (token: string) => {
      // This callback is called for each token in the stream
      // You can use this to debug the stream or save the tokens to your database
      console.log(token);
    },
    onCompletion: async (completion: string) => {
      // This callback is called when the stream completes
      // You can use this to save the final completion to your database
      await saveCompletionToDatabase(completion);
    },
  });

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
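savePromptToDatabase and saveCompletionToDatabase above are placeholders for your own persistence code, not part of the AI SDK. A hypothetical shape for them might look like this; swap the bodies for calls to your actual database client.

// Hypothetical helpers referenced in the callbacks above.
async function savePromptToDatabase(prompt: string): Promise<void> {
  // e.g. await db.insert({ table: 'prompts', text: prompt });
  console.log('saving prompt:', prompt);
}

async function saveCompletionToDatabase(completion: string): Promise<void> {
  // e.g. await db.insert({ table: 'completions', text: completion });
  console.log('saving completion:', completion);
}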