
Next.js Pages Router

💡 The Next.js Pages Router's API Routes do not support streaming responses. You can keep the pages/ directory for your frontend while handling streaming LLM responses in an app/ directory Route Handler.

See the App Router documentation for how to use a Route Handler.
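For reference, here is a minimal sketch of such a Route Handler. The file path app/api/chat/route.ts is an assumption chosen to match useChat's default /api/chat endpoint; adjust it to your project.

// app/api/chat/route.ts (assumed path)
import { OpenAIStream, StreamingTextResponse } from 'ai';
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY || '',
});

export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Stream the completion back to the client
  return new StreamingTextResponse(OpenAIStream(response));
}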

API routes

Edge Runtime

The Edge Runtime uses the same web-standard Request and Response types as the App Router, so the handler body is nearly identical between the two.

import { OpenAIStream, StreamingTextResponse } from 'ai';
import OpenAI from 'openai';
 
// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY || '',
});
 
// IMPORTANT! Set the runtime to edge
export const config = { runtime: 'edge' };
 
export default async function handler(req: Request) {
  const { messages } = await req.json();
 
  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });
 
  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);
  // Respond with the stream
  return new StreamingTextResponse(stream);
}
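Assuming the file above is saved as pages/api/chat.ts (a hypothetical path that makes the endpoint /api/chat, matching useChat's default), you can verify that chunks arrive incrementally by consuming the route directly:

// Hypothetical client-side smoke test for the streaming endpoint above
async function smokeTest() {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: [{ role: 'user', content: 'Hi' }] }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value)); // logs each streamed chunk as it arrives
  }
}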

Node.js Runtime

The Node.js runtime uses Node's own request and response objects rather than the web-standard types used by the Edge Runtime and App Router, so we use the streamToResponse helper in place of StreamingTextResponse.

import { OpenAIStream, streamToResponse } from 'ai';
import { NextApiRequest, NextApiResponse } from 'next';
import OpenAI from 'openai';
 
// Create an OpenAI API client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY || '',
});
 
export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse,
) {
  const { messages } = req.body; // Next.js has already parsed the body; no await needed
 
  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });
 
  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);
 
  /**
   * Pipe the stream to the Node.js ServerResponse,
   * writing each chunk to the client as it arrives.
   */
  return streamToResponse(stream, res);
}
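Under the hood, a helper like this bridges a web ReadableStream to Node's ServerResponse. The sketch below illustrates the general technique; it is not the ai package's actual source.

import { ServerResponse } from 'http';

// Illustrative only: roughly what a stream-to-response bridge does.
async function pipeStreamToServerResponse(
  stream: ReadableStream<Uint8Array>,
  res: ServerResponse,
) {
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  const reader = stream.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    res.write(value); // flush each chunk to the client immediately
  }
  res.end();
}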

Components

You can use the useChat and useCompletion client hooks the same way in both the Pages Router and the App Router.

import { useChat } from 'ai/react';
 
// Optional but recommended: use the Edge Runtime. This can only be done at the page level, not inside nested components.
export const config = { runtime: 'experimental-edge' };
 
export default function IndexPage() {
  const { messages, handleSubmit, input, handleInputChange } = useChat();
 
  return (
    <form onSubmit={handleSubmit}>
      <label htmlFor="input">Prompt</label>
      <input
        name="prompt"
        value={input}
        onChange={handleInputChange}
        id="input"
      />
      <button type="submit">Submit</button>
      {messages.map(message => (
        <div key={message.id}>{message.content}</div>
      ))}
    </form>
  );
}
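The useCompletion hook follows the same pattern. A minimal sketch, assuming a matching completion route exists at /api/completion (the hook's default endpoint):

import { useCompletion } from 'ai/react';

export default function CompletionPage() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion();

  return (
    <form onSubmit={handleSubmit}>
      <label htmlFor="completion-input">Prompt</label>
      <input
        id="completion-input"
        value={input}
        onChange={handleInputChange}
      />
      <button type="submit">Complete</button>
      <div>{completion}</div>
    </form>
  );
}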
