HuggingFaceStream

Converts the output from language models hosted on Hugging Face into a ReadableStream.

While HuggingFaceStream is compatible with most Hugging Face language models, the rapidly evolving landscape of models may result in certain new or niche models not being supported. If you encounter a model that isn't supported, we encourage you to open an issue.

To ensure that AI responses are composed purely of text, without delimiters that could cause rendering issues in chat or completion modes, this stream standardizes and removes special end-of-response tokens. If your use case requires different handling of responses, you can fork and modify this stream to meet your needs.

Currently, `</s>` and `<|endoftext|>` are recognized as end-of-stream tokens.
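As an illustration of that behavior, the following self-contained sketch (not the library's actual implementation) shows how such end-of-stream tokens can be stripped from the chunks an async generator yields. The `mockTokens` generator is a stand-in for the output of `hf.textGenerationStream`:

```typescript
// Hypothetical sketch of end-of-response token stripping;
// not the actual HuggingFaceStream implementation.
const END_TOKENS = ["</s>", "<|endoftext|>"];

function stripEndTokens(text: string): string {
  let out = text;
  for (const token of END_TOKENS) {
    out = out.split(token).join("");
  }
  return out;
}

// A mock async generator standing in for hf.textGenerationStream output,
// which yields chunks shaped like { token: { text: string } }.
async function* mockTokens() {
  yield { token: { text: "Hello" } };
  yield { token: { text: " world" } };
  yield { token: { text: "<|endoftext|>" } };
}

// Consume the generator, stripping end tokens from each chunk.
async function collect(): Promise<string> {
  let result = "";
  for await (const chunk of mockTokens()) {
    result += stripEndTokens(chunk.token.text);
  }
  return result;
}
```

With this sketch, the trailing `<|endoftext|>` is removed, so the accumulated text is plain `"Hello world"` with no delimiters.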

Import


```ts
import { HuggingFaceStream } from "ai"
```

Parameters

`iter: AsyncGenerator<any>`

This parameter should be the generator function returned by the `hf.textGenerationStream` method in the Hugging Face Inference SDK.

`callbacks?: AIStreamCallbacks`

An object containing callback functions to handle the start of the stream, each token, and the completion of the AI response. If this parameter is omitted, default behavior is used.
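To make the callback shape concrete, here is a self-contained sketch of an `onStart`/`onToken`/`onCompletion` callbacks object. The `StreamCallbacks` interface and the `drive` helper are simplified stand-ins for illustration, not the library's actual types or internals:

```typescript
// Simplified sketch of the callbacks object's shape.
interface StreamCallbacks {
  onStart?: () => void | Promise<void>;
  onToken?: (token: string) => void | Promise<void>;
  onCompletion?: (completion: string) => void | Promise<void>;
}

// Record the order in which callbacks fire.
const seen: string[] = [];

const callbacks: StreamCallbacks = {
  onStart: () => {
    seen.push("start");
  },
  onToken: (token) => {
    seen.push(token);
  },
  onCompletion: (completion) => {
    seen.push("done:" + completion);
  },
};

// Mock driver standing in for the stream's internal callback invocation:
// onStart once, onToken per chunk, onCompletion with the full text.
async function drive(tokens: string[], cb: StreamCallbacks): Promise<void> {
  await cb.onStart?.();
  let full = "";
  for (const t of tokens) {
    full += t;
    await cb.onToken?.(t);
  }
  await cb.onCompletion?.(full);
}
```

In real usage you would pass such an object as the second argument to the stream helper rather than driving it yourself.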

Returns

A ReadableStream.