OpenAIStream

`OpenAIStream(res: Response, cb?: AIStreamCallbacks): ReadableStream`

`OpenAIStream` is a utility function that transforms the response from OpenAI's completion and chat models into a `ReadableStream`. It uses `AIStream` under the hood, applying a parser specific to OpenAI's response data structure.
This works with a variety of OpenAI models, including:

- `gpt-4`
- `gpt-4-1106-preview`
- `gpt-4-32k`
- `gpt-3.5-turbo`
- `gpt-3.5-turbo-1106`
- `gpt-3.5-turbo-16k`
- `gpt-3.5-turbo-instruct`
It is designed to work with responses from either the `openai.completions.create` or `openai.chat.completions.create` methods.
Note: Prior to v4, the official OpenAI API SDK did not support the Edge Runtime and only worked in serverless environments. The `openai-edge` package is based on `fetch` instead of `axios` (and thus works in the Edge Runtime), so we recommend using `openai` v4+ or `openai-edge`.
Parameters

`res: Response`

The response object returned by either the `openai.completions.create` or `openai.chat.completions.create` method.

`cb?: AIStreamCallbacks`

An optional object containing callback functions for handling the start of the stream, each token, and the completed response. If not provided, default behavior is applied.
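As a sketch of how these callbacks can be wired up: the callback names below (`onStart`, `onToken`, `onCompletion`) follow the start/token/completion behavior described above, and the logging bodies are illustrative placeholders rather than required implementations.

```tsx
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // The second argument hooks into the stream lifecycle
  const stream = OpenAIStream(response, {
    onStart: async () => {
      // Runs once, before the first token is emitted
      console.log('Stream started');
    },
    onToken: async (token) => {
      // Runs for each token as it arrives
      console.log('Token:', token);
    },
    onCompletion: async (completion) => {
      // Runs once with the full generated text, e.g. to log or persist it
      console.log('Completed, length:', completion.length);
    },
  });

  return new StreamingTextResponse(stream);
}
```

The `onCompletion` callback is a convenient place to persist the final text, since it fires after the whole response has streamed.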
Examples

Below are some examples of how to use `OpenAIStream` with chat and completion models.
Chat Model Example

```tsx
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ask OpenAI for a streaming chat completion given the prompt
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
```
Completion Model Example

```tsx
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge';

export async function POST(req: Request) {
  // Extract the `prompt` from the body of the request
  const { prompt } = await req.json();

  // Ask OpenAI for a streaming completion given the prompt
  const response = await openai.completions.create({
    model: 'gpt-3.5-turbo-instruct',
    max_tokens: 2000,
    stream: true,
    prompt,
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
```
In these examples, the `OpenAIStream` function transforms the response from the OpenAI API calls into a `ReadableStream`, allowing the client to consume the AI output as it is generated rather than waiting for the entire response.