MCP Tools

The AI SDK supports Model Context Protocol (MCP) tools by offering a lightweight client that exposes a tools method for retrieving tools from an MCP server. After use, the client should always be closed to release resources.
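
For reference, the basic lifecycle is: create a client for a transport, call tools(), pass the result to a generation call, and close the client when you are done. The sketch below illustrates this with a one-off generateText call; the server command and prompt are placeholders, not part of the SDK:

import { experimental_createMCPClient, generateText } from 'ai';
import { Experimental_StdioMCPTransport } from 'ai/mcp-stdio';
import { openai } from '@ai-sdk/openai';

async function main() {
  // The server command below is a placeholder; point it at your own MCP server.
  const mcpClient = await experimental_createMCPClient({
    transport: new Experimental_StdioMCPTransport({
      command: 'node',
      args: ['dist/my-mcp-server.js'],
    }),
  });

  try {
    // Retrieve the tool definitions exposed by the MCP server.
    const tools = await mcpClient.tools();

    // Pass them to a generation call like any other tool set.
    const { text } = await generateText({
      model: openai('gpt-4o'),
      tools,
      prompt: 'What can you do for me?',
    });
    console.log(text);
  } finally {
    // Always release the connection, even if the generation throws.
    await mcpClient.close();
  }
}

main().catch(console.error);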

Server

Let's create a route handler for /api/completion that generates text based on the input prompt, with MCP tools available for the model to call at any point during the generation. The route calls the streamText function from the ai module and streams the result to the client.

app/api/completion/route.ts
import { experimental_createMCPClient, streamText } from 'ai';
import { Experimental_StdioMCPTransport } from 'ai/mcp-stdio';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();

  try {
    // Initialize an MCP client to connect to a `stdio` MCP server:
    const stdioTransport = new Experimental_StdioMCPTransport({
      command: 'node',
      args: ['src/stdio/dist/server.js'],
    });
    const stdioClient = await experimental_createMCPClient({
      transport: stdioTransport,
    });

    // Alternatively, you can connect to a Server-Sent Events (SSE) MCP server:
    const sseClient = await experimental_createMCPClient({
      transport: {
        type: 'sse',
        url: 'https://actions.zapier.com/mcp/[YOUR_KEY]/sse',
      },
    });

    // Similarly to the stdio example, you can pass in your own custom transport
    // as long as it implements the `MCPTransport` interface:
    const customTransport = new MyCustomTransport({
      // ...
    });
    const customTransportClient = await experimental_createMCPClient({
      transport: customTransport,
    });

    const toolSetOne = await stdioClient.tools();
    const toolSetTwo = await sseClient.tools();
    const toolSetThree = await customTransportClient.tools();
    const tools = {
      ...toolSetOne,
      ...toolSetTwo,
      ...toolSetThree, // note: this approach causes subsequent tool sets to override tools with the same name
    };

    const response = await streamText({
      model: openai('gpt-4o'),
      tools,
      prompt,
      // When streaming, the clients should be closed after the response is finished:
      onFinish: async () => {
        await stdioClient.close();
        await sseClient.close();
        await customTransportClient.close();
      },
    });

    return response.toDataStreamResponse();
  } catch (error) {
    return new Response('Internal Server Error', { status: 500 });
  }
}
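
As the comment above notes, spreading the tool sets means a later set silently overrides an earlier tool with the same name. If that is a concern, one option is to namespace the keys before merging. The prefixTools helper below is our own sketch (not part of the AI SDK); it relies on the returned tool set being a plain record keyed by tool name, so renaming a key only changes the name the model sees. Verify this against the AI SDK version you use before relying on it:

// Hypothetical helper: namespace tool names so merged sets cannot collide.
function prefixTools<TOOLS extends Record<string, unknown>>(
  prefix: string,
  toolSet: TOOLS,
) {
  return Object.fromEntries(
    Object.entries(toolSet).map(([name, tool]) => [`${prefix}_${name}`, tool]),
  );
}

const tools = {
  ...prefixTools('stdio', toolSetOne),
  ...prefixTools('zapier', toolSetTwo),
  ...prefixTools('custom', toolSetThree),
};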

Client

Let's create a simple React component that imports the useCompletion hook from the @ai-sdk/react module. The useCompletion hook calls the /api/completion endpoint when the button is clicked; the endpoint generates text based on the input prompt and streams it to the client.

app/page.tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

export default function Page() {
  const { completion, complete } = useCompletion({
    api: '/api/completion',
  });

  return (
    <div>
      <button
        onClick={async () => {
          await complete(
            'Please schedule a call with Sonny and Robby for tomorrow at 10am ET for me!',
          );
        }}
      >
        Schedule a call
      </button>
      {completion}
    </div>
  );
}
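
In practice you will usually want to disable the trigger while a completion is streaming and surface failures. useCompletion also returns isLoading and error values for this; a minimal variation of the component above (the copy and structure are placeholders, not a prescribed pattern) might look like:

'use client';

import { useCompletion } from '@ai-sdk/react';

export default function Page() {
  const { completion, complete, isLoading, error } = useCompletion({
    api: '/api/completion',
  });

  return (
    <div>
      <button
        disabled={isLoading}
        onClick={() =>
          complete(
            'Please schedule a call with Sonny and Robby for tomorrow at 10am ET for me!',
          )
        }
      >
        {isLoading ? 'Scheduling…' : 'Schedule a call'}
      </button>
      {error && <p>Something went wrong: {error.message}</p>}
      {completion}
    </div>
  );
}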