
Get started with OpenAI o1

With the release of OpenAI's o1 series models, there has never been a better time to start building AI applications, particularly those that require complex reasoning capabilities.

The Vercel AI SDK is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like OpenAI o1 alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more.

OpenAI o1 models are currently in beta with limited features. Access is restricted to developers in tier 5, with low rate limits (20 RPM). OpenAI is working on adding more features, increasing rate limits, and expanding access to more developers in the coming weeks.

OpenAI o1

OpenAI released a series of AI models designed to spend more time thinking before responding. They can reason through complex tasks and solve harder problems than previous models in science, coding, and math. These models, named the o1 series, are trained with reinforcement learning and can "think before they answer". As a result, they are able to produce a long internal chain of thought before responding to a prompt.

There are two reasoning models available in the API:

  1. o1-preview: An early preview of the o1 model, designed to reason about hard problems using broad general knowledge about the world.
  2. o1-mini: A faster and cheaper version of o1, particularly adept at coding, math, and science tasks where extensive general knowledge isn't required.

Benchmarks

OpenAI o1 models excel in scientific reasoning, with impressive performance across various domains:

  • Ranking in the 89th percentile on competitive programming questions (Codeforces)
  • Placing among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME)
  • Exceeding human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA)


Prompt Engineering for o1 Models

The o1 models perform best with straightforward prompts. Some prompt engineering techniques, like few-shot prompting or instructing the model to "think step by step," may not enhance performance and can sometimes hinder it. Here are some best practices:

  1. Keep prompts simple and direct: The models excel at understanding and responding to brief, clear instructions without the need for extensive guidance.
  2. Avoid chain-of-thought prompts: Since these models perform reasoning internally, prompting them to "think step by step" or "explain your reasoning" is unnecessary.
  3. Use delimiters for clarity: Use delimiters like triple quotation marks, XML tags, or section titles to clearly indicate distinct parts of the input, helping the model interpret different sections appropriately.
  4. Limit additional context in retrieval-augmented generation (RAG): When providing additional context or documents, include only the most relevant information to prevent the model from overcomplicating its response.
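Points 3 and 4 above can be sketched with a small helper that wraps each retrieved document in XML-style delimiters before sending the prompt. The helper, tag names, and sample strings here are illustrative assumptions, not part of the SDK:

```typescript
// Illustrative helper: wrap retrieved documents in XML-style delimiters
// so the model can cleanly separate context from the actual question.
function buildPrompt(question: string, documents: string[]): string {
  // Include only the most relevant documents (best practice 4 above).
  const context = documents
    .map((doc, i) => `<document index="${i + 1}">\n${doc}\n</document>`)
    .join('\n');
  return `${context}\n\n<question>\n${question}\n</question>`;
}

const prompt = buildPrompt('What is the refund window?', [
  'Refunds are accepted within 30 days of purchase.',
]);
console.log(prompt);
```

The resulting string can then be passed as the `prompt` to `generateText`, keeping the instruction itself short and direct.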

Getting Started with the Vercel AI SDK

The Vercel AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Integrating LLMs into applications is complicated and heavily dependent on the specific model provider you use.

The Vercel AI SDK abstracts away the differences between model providers, eliminates boilerplate code for building chatbots, and allows you to go beyond text output to generate rich, interactive components.

At the center of the Vercel AI SDK is AI SDK Core, which provides a unified API to call any LLM. The code snippet below is all you need to call OpenAI o1-mini with the Vercel AI SDK:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('o1-mini'),
  prompt: 'Explain the concept of quantum entanglement.',
});

To use the o1 series of models, you must either be using @ai-sdk/openai version 0.0.59 or greater, or set temperature: 1.
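On older versions of @ai-sdk/openai, the workaround is to pass temperature: 1 explicitly, since it is the only value the o1 models accept. A sketch:

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// On @ai-sdk/openai versions below 0.0.59, set temperature: 1 explicitly;
// o1 models reject any other temperature value.
const { text } = await generateText({
  model: openai('o1-mini'),
  temperature: 1,
  prompt: 'Explain the concept of quantum entanglement.',
});
```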

AI SDK Core abstracts away the differences between model providers, allowing you to focus on building great applications. The unified interface also means that you can easily switch between models by changing just one line of code.

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('o1-preview'),
  prompt: 'Explain the concept of quantum entanglement.',
});

During the beta phase, access to most chat completions parameters is not supported for o1 models. Features like streaming, function calling, and image inputs are currently unavailable.

Building Interactive Interfaces

AI SDK Core can be paired with AI SDK UI, another powerful component of the Vercel AI SDK, to streamline the process of building chat, completion, and assistant interfaces with popular frameworks like Next.js, Nuxt, SvelteKit, and SolidStart.

AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently.

With four main hooks — useChat, useCompletion, useObject, and useAssistant — you can incorporate real-time chat capabilities, text completions, streamed JSON, and interactive assistant features into your app.

Let's explore building a chatbot with Next.js, Vercel AI SDK, and OpenAI o1:

app/api/chat/route.ts
import { convertToCoreMessages, generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Allow responses up to 5 minutes
export const maxDuration = 300;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const { text } = await generateText({
    model: openai('o1-preview'),
    messages: convertToCoreMessages(messages),
  });

  return new Response(text);
}
app/page.tsx
'use client';

import { useChat } from 'ai/react';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit, error } = useChat({
    streamProtocol: 'text',
  });

  return (
    <>
      {messages.map(message => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input name="prompt" value={input} onChange={handleInputChange} />
        <button type="submit">Submit</button>
      </form>
    </>
  );
}

The useChat hook on your root page (app/page.tsx) will make a request to your AI provider endpoint (app/api/chat/route.ts) whenever the user submits a message. The messages are then displayed in the chat UI.

Due to the current limitations of o1 models during the beta phase, real-time streaming is not supported. The response will be sent once the model completes its reasoning and generates the full output.

Get Started

Ready to get started? Here's how you can dive in:

  1. Explore the documentation at sdk.vercel.ai/docs to understand the full capabilities of the Vercel AI SDK.
  2. Check out practical examples at sdk.vercel.ai/examples to see the SDK in action and get inspired for your own projects.
  3. Dive deeper with advanced guides on topics like Retrieval-Augmented Generation (RAG) and multi-modal chat at sdk.vercel.ai/docs/guides.
  4. Check out ready-to-deploy AI templates at vercel.com/templates?type=ai.

Remember that OpenAI o1 models are currently in beta with limited features and access. Stay tuned for updates as OpenAI expands access and adds more features to these powerful reasoning models.