Get started with OpenAI o3-mini
With the release of OpenAI's o3-mini model, there has never been a better time to start building AI applications, particularly those that require complex STEM reasoning capabilities.
The AI SDK is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like OpenAI o3-mini alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more.
OpenAI o3-mini
OpenAI recently released a new AI model optimized for STEM reasoning that excels in science, math, and coding tasks. o3-mini matches o1's performance in these domains while delivering faster responses and lower costs. The model supports tool calling, structured outputs, and system messages, making it a great option for a wide range of applications.
o3-mini offers three reasoning effort levels:
- **Low**: Optimized for speed while maintaining solid reasoning capabilities
- **Medium**: Balanced approach matching o1's performance levels
- **High**: Enhanced reasoning power exceeding o1 in many STEM domains
| Model   | Streaming | Tool Calling | Structured Output | Reasoning Effort | Image Input |
| ------- | --------- | ------------ | ----------------- | ---------------- | ----------- |
| o3-mini | ✓         | ✓            | ✓                 | ✓                | ✗           |
Benchmarks
OpenAI o3-mini demonstrates impressive performance across technical domains:
- 87.3% accuracy on AIME competition math questions
- 79.7% accuracy on PhD-level science questions (GPQA Diamond)
- 2130 Elo rating on competitive programming (Codeforces)
- 49.3% accuracy on verified software engineering tasks (SWE-bench)
Prompt Engineering for o3-mini
The o3-mini model performs best with straightforward prompts. Some prompt engineering techniques, like few-shot prompting or instructing the model to "think step by step," may not enhance performance and can sometimes hinder it. Here are some best practices:
- Keep prompts simple and direct: The model excels at understanding and responding to brief, clear instructions without the need for extensive guidance.
- Avoid chain-of-thought prompts: Since the model performs reasoning internally, prompting it to "think step by step" or "explain your reasoning" is unnecessary.
- Use delimiters for clarity: Use delimiters like triple quotation marks, XML tags, or section titles to clearly indicate distinct parts of the input, as in the sketch below.
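As a quick illustration, here is a minimal sketch of a direct, delimiter-based prompt. It uses the `generateText` call introduced in the next section; the instruction and report text are placeholders:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// A short, direct prompt: no few-shot examples, no "think step by step".
// XML-style tags delimit the document being summarized (placeholder text).
const { text } = await generateText({
  model: openai('o3-mini'),
  prompt: `Summarize the key findings of the report below in three bullet points.

<report>
...report text here...
</report>`,
});
```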
Getting Started with the AI SDK
The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Integrating LLMs into applications is complicated and heavily dependent on the specific model provider you use.
The AI SDK abstracts away the differences between model providers, eliminates boilerplate code for building chatbots, and allows you to go beyond text output to generate rich, interactive components.
At the center of the AI SDK is AI SDK Core, which provides a unified API to call any LLM. The code snippet below is all you need to call OpenAI o3-mini with the AI SDK:
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('o3-mini'),
  prompt: 'Explain the concept of quantum entanglement.',
});
```
To use o3-mini, you must be using `@ai-sdk/openai` version 1.1.9 or greater.
Refining Reasoning Effort
You can control the amount of reasoning effort expended by o3-mini through the `reasoningEffort` parameter. This parameter can be set to `low`, `medium`, or `high` to adjust how much time and computation the model spends on internal reasoning before producing a response.
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Reduce reasoning effort for faster responses
const { text } = await generateText({
  model: openai('o3-mini'),
  prompt: 'Explain quantum entanglement briefly.',
  providerOptions: {
    openai: { reasoningEffort: 'low' },
  },
});
```
Generating Structured Data
While text generation can be useful, you might want to generate structured JSON data. For example, you might want to extract information from text, classify data, or generate synthetic data. AI SDK Core provides two functions (`generateObject` and `streamObject`) to generate structured data, allowing you to constrain model outputs to a specific schema.
```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('o3-mini'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
```
This code snippet will generate a type-safe recipe that conforms to the specified Zod schema.
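If you want to display results as they arrive rather than waiting for the complete object, `streamObject` follows the same pattern. Here is a minimal sketch, reusing a trimmed version of the recipe schema above:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { partialObjectStream } = streamObject({
  model: openai('o3-mini'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

// Partial objects arrive incrementally as the model streams its output.
for await (const partialObject of partialObjectStream) {
  console.log(partialObject);
}
```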
Using Tools with the AI SDK
o3-mini supports tool calling out of the box, allowing it to interact with external systems and perform discrete tasks. Here's an example of using tool calling with the AI SDK:
```ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { text } = await generateText({
  model: openai('o3-mini'),
  prompt: 'What is the weather like today in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get the weather in a location',
      parameters: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
});
```
In this example, the `getWeather` tool allows the model to fetch real-time weather data (simulated for simplicity), enhancing its ability to provide accurate and up-to-date information.
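Note that by default `generateText` stops after the tool result comes back, so `text` may be empty. If you want the model to fold the tool result into a final text answer, the `maxSteps` option (available in recent AI SDK versions) lets it continue for additional generation steps. A minimal sketch, with the tool from the snippet above pulled into its own constant:

```ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Same simulated weather tool as above, extracted into a constant.
const getWeather = tool({
  description: 'Get the weather in a location',
  parameters: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  execute: async ({ location }) => ({
    location,
    temperature: 72 + Math.floor(Math.random() * 21) - 10,
  }),
});

const { text } = await generateText({
  model: openai('o3-mini'),
  prompt: 'What is the weather like today in San Francisco?',
  tools: { getWeather },
  // Allow up to 5 sequential steps so the model can call the tool
  // and then turn the result into a final text answer.
  maxSteps: 5,
});
```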
Building Interactive Interfaces
AI SDK Core can be paired with AI SDK UI, another powerful component of the AI SDK, to streamline the process of building chat, completion, and assistant interfaces with popular frameworks like Next.js, Nuxt, SvelteKit, and SolidStart.
AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently.
With four main hooks (`useChat`, `useCompletion`, `useObject`, and `useAssistant`), you can incorporate real-time chat capabilities, text completions, streamed JSON, and interactive assistant features into your app.
Let's explore building a chatbot with Next.js, the AI SDK, and OpenAI o3-mini:
In a new Next.js application, first install the AI SDK and the OpenAI provider:
```bash
npm install ai @ai-sdk/openai
```
Then, create a route handler for the chat endpoint (`app/api/chat/route.ts`):
```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow responses up to 5 minutes
export const maxDuration = 300;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('o3-mini'),
    messages,
  });

  return result.toDataStreamResponse();
}
```
Finally, update the root page (`app/page.tsx`) to use the `useChat` hook:
```tsx
'use client';

import { useChat } from 'ai/react';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input name="prompt" value={input} onChange={handleInputChange} />
        <button type="submit">Submit</button>
      </form>
    </>
  );
}
```
The `useChat` hook on your root page (`app/page.tsx`) will make a request to your AI provider endpoint (`app/api/chat/route.ts`) whenever the user submits a message. The messages are then displayed in the chat UI.
Get Started
Ready to get started? Here's how you can dive in:
- Explore the documentation at sdk.vercel.ai/docs to understand the full capabilities of the AI SDK.
- Check out our support for o3-mini in the OpenAI Provider.
- Check out practical examples at sdk.vercel.ai/examples to see the SDK in action and get inspired for your own projects.
- Dive deeper with advanced guides on topics like Retrieval-Augmented Generation (RAG) and multi-modal chat at sdk.vercel.ai/docs/guides.
- Check out ready-to-deploy AI templates at vercel.com/templates?type=ai.