# Google Vertex Provider
The Google Vertex provider for the AI SDK contains language model support for the Google Vertex AI APIs. This includes support for Google's Gemini models and Anthropic's Claude partner models.
The Google Vertex provider is compatible with both Node.js and Edge runtimes.
The Edge runtime is supported through the `@ai-sdk/google-vertex/edge` sub-module. More details can be found in the Google Vertex Edge Runtime and Google Vertex Anthropic Edge Runtime sections below.
## Setup
The Google Vertex and Google Vertex Anthropic providers are both available in the `@ai-sdk/google-vertex` module. You can install it with:

```shell
pnpm add @ai-sdk/google-vertex
```
## Google Vertex Provider Usage
The Google Vertex provider instance is used to create model instances that call the Vertex AI API. The models available with this provider include Google's Gemini models. If you're looking to use Anthropic's Claude models, see the Google Vertex Anthropic Provider section below.
### Provider Instance
You can import the default provider instance `vertex` from `@ai-sdk/google-vertex`:

```ts
import { vertex } from '@ai-sdk/google-vertex';
```
If you need a customized setup, you can import `createVertex` from `@ai-sdk/google-vertex` and create a provider instance with your settings:

```ts
import { createVertex } from '@ai-sdk/google-vertex';

const vertex = createVertex({
  project: 'my-project', // optional
  location: 'us-central1', // optional
});
```
Google Vertex supports two different authentication implementations depending on your runtime environment.
### Node.js Runtime
The Node.js runtime is the default runtime supported by the AI SDK. It supports all standard Google Cloud authentication options through the `google-auth-library`. Typical use involves setting the path to a JSON credentials file in the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. The credentials file can be obtained from the Google Cloud Console.
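For example, assuming a service account key saved locally (the path below is illustrative), you can point the library at it before starting your app:

```shell
# The path is an example; use the location of your own service account key file.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"
```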
If you want to customize the Google authentication options, you can pass them as options to the `createVertex` function, for example:

```ts
import { createVertex } from '@ai-sdk/google-vertex';

const vertex = createVertex({
  googleAuthOptions: {
    credentials: {
      client_email: 'my-email',
      private_key: 'my-private-key',
    },
  },
});
```
### Optional Provider Settings
You can use the following optional settings to customize the provider instance:
- `project` _string_

  The Google Cloud project ID that you want to use for the API calls. It uses the `GOOGLE_VERTEX_PROJECT` environment variable by default.

- `location` _string_

  The Google Cloud location that you want to use for the API calls, e.g. `us-central1`. It uses the `GOOGLE_VERTEX_LOCATION` environment variable by default.

- `googleAuthOptions` _object_

  Optional. The authentication options used by the Google Auth Library. See also the GoogleAuthOptions interface.

  - `authClient` _object_: An `AuthClient` to use.
  - `keyFilename` _string_: Path to a .json, .pem, or .p12 key file.
  - `keyFile` _string_: Path to a .json, .pem, or .p12 key file.
  - `credentials` _object_: Object containing `client_email` and `private_key` properties, or the external account client options.
  - `clientOptions` _object_: Options object passed to the constructor of the client.
  - `scopes` _string | string[]_: Required scopes for the desired API request.
  - `projectId` _string_: Your project ID.
  - `universeDomain` _string_: The default service domain for a given Cloud universe.

- `headers` _Resolvable&lt;Record&lt;string, string | undefined&gt;&gt;_

  Headers to include in the requests. Can be provided in multiple formats:

  - A record of header key-value pairs: `Record<string, string | undefined>`
  - A function that returns headers: `() => Record<string, string | undefined>`
  - An async function that returns headers: `async () => Record<string, string | undefined>`
  - A promise that resolves to headers: `Promise<Record<string, string | undefined>>`

- `fetch` _(input: RequestInfo, init?: RequestInit) =&gt; Promise&lt;Response&gt;_

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.

- `baseURL` _string_

  Optional. Base URL for the Google Vertex API calls, e.g. to use proxy servers. By default, it is constructed using the location and project:
  `https://${location}-aiplatform.googleapis.com/v1/projects/${project}/locations/${location}/publishers/google`
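As an illustration of the default base URL construction described above, the URL can be derived from the project and location settings like this (an illustrative helper, not part of the `@ai-sdk/google-vertex` API):

```typescript
// Sketch of how the default Vertex base URL is assembled from the
// provider's project and location settings.
function defaultVertexBaseURL(project: string, location: string): string {
  return `https://${location}-aiplatform.googleapis.com/v1/projects/${project}/locations/${location}/publishers/google`;
}

console.log(defaultVertexBaseURL('my-project', 'us-central1'));
```

Passing an explicit `baseURL` replaces this constructed URL entirely, which is useful when routing requests through a proxy.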
### Edge Runtime
Edge runtimes (like Vercel Edge Functions and Cloudflare Workers) are lightweight JavaScript environments that run closer to users at the network edge. They only provide a subset of the standard Node.js APIs. For example, direct file system access is not available, and many Node.js-specific libraries (including the standard Google Auth library) are not compatible.
The Edge runtime version of the Google Vertex provider supports Google's Application Default Credentials through environment variables. The values can be obtained from a JSON credentials file from the Google Cloud Console.
You can import the default provider instance `vertex` from `@ai-sdk/google-vertex/edge`:

```ts
import { vertex } from '@ai-sdk/google-vertex/edge';
```
The `/edge` sub-module is included in the `@ai-sdk/google-vertex` package, so you don't need to install it separately. You must import from `@ai-sdk/google-vertex/edge` to differentiate it from the Node.js provider.
If you need a customized setup, you can import `createVertex` from `@ai-sdk/google-vertex/edge` and create a provider instance with your settings:

```ts
import { createVertex } from '@ai-sdk/google-vertex/edge';

const vertex = createVertex({
  project: 'my-project', // optional
  location: 'us-central1', // optional
});
```
For Edge runtime authentication, you'll need to set these environment variables from your Google Default Application Credentials JSON file:

- `GOOGLE_CLIENT_EMAIL`
- `GOOGLE_PRIVATE_KEY`
- `GOOGLE_PRIVATE_KEY_ID` (optional)

These values can be obtained from a service account JSON file from the Google Cloud Console.
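For example (the values below are placeholders; copy the corresponding fields from your own service account JSON file):

```shell
# Placeholder values — use the client_email, private_key, and
# private_key_id fields from your service account JSON file.
export GOOGLE_CLIENT_EMAIL="my-service-account@my-project.iam.gserviceaccount.com"
export GOOGLE_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
export GOOGLE_PRIVATE_KEY_ID="abc123" # optional
```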
### Optional Provider Settings
You can use the following optional settings to customize the provider instance:
- `project` _string_

  The Google Cloud project ID that you want to use for the API calls. It uses the `GOOGLE_VERTEX_PROJECT` environment variable by default.

- `location` _string_

  The Google Cloud location that you want to use for the API calls, e.g. `us-central1`. It uses the `GOOGLE_VERTEX_LOCATION` environment variable by default.

- `googleCredentials` _object_

  Optional. The credentials used by the Edge provider for authentication. These credentials are typically set through environment variables and are derived from a service account JSON file.

  - `clientEmail` _string_: The client email from the service account JSON file. Defaults to the contents of the `GOOGLE_CLIENT_EMAIL` environment variable.
  - `privateKey` _string_: The private key from the service account JSON file. Defaults to the contents of the `GOOGLE_PRIVATE_KEY` environment variable.
  - `privateKeyId` _string_: The private key ID from the service account JSON file (optional). Defaults to the contents of the `GOOGLE_PRIVATE_KEY_ID` environment variable.

- `headers` _Resolvable&lt;Record&lt;string, string | undefined&gt;&gt;_

  Headers to include in the requests. Can be provided in multiple formats:

  - A record of header key-value pairs: `Record<string, string | undefined>`
  - A function that returns headers: `() => Record<string, string | undefined>`
  - An async function that returns headers: `async () => Record<string, string | undefined>`
  - A promise that resolves to headers: `Promise<Record<string, string | undefined>>`

- `fetch` _(input: RequestInfo, init?: RequestInit) =&gt; Promise&lt;Response&gt;_

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
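The four `Resolvable` header forms listed above can all be normalized the same way. Here is a sketch of that normalization (an illustrative helper, not the SDK's internal implementation):

```typescript
type HeaderRecord = Record<string, string | undefined>;
type Resolvable<T> = T | Promise<T> | (() => T) | (() => Promise<T>);

// Normalize any of the four accepted header forms into a plain record:
// call it if it's a function, then await the (possibly non-promise) result.
async function resolveHeaders(
  headers: Resolvable<HeaderRecord>,
): Promise<HeaderRecord> {
  const value = typeof headers === 'function' ? headers() : headers;
  return await value;
}

// All four forms resolve to the same record:
const fromRecord = await resolveHeaders({ 'x-team': 'ai' });
const fromFn = await resolveHeaders(() => ({ 'x-team': 'ai' }));
const fromAsyncFn = await resolveHeaders(async () => ({ 'x-team': 'ai' }));
const fromPromise = await resolveHeaders(Promise.resolve({ 'x-team': 'ai' }));
console.log(fromRecord, fromFn, fromAsyncFn, fromPromise);
```

This is why a plain object, a sync function, an async function, and a promise are all interchangeable wherever the settings accept a `Resolvable`.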
## Language Models
You can create models that call the Vertex API using the provider instance. The first argument is the model ID, e.g. `gemini-1.5-pro`.

```ts
const model = vertex('gemini-1.5-pro');
```
If you are using your own models, the name of your model needs to start with `projects/`.
Google Vertex models also support some model-specific settings that are not part of the standard call settings. You can pass them as an options argument:
```ts
const model = vertex('gemini-1.5-pro', {
  safetySettings: [
    { category: 'HARM_CATEGORY_UNSPECIFIED', threshold: 'BLOCK_LOW_AND_ABOVE' },
  ],
});
```
The following optional settings are available for Google Vertex models:
- `structuredOutputs` _boolean_

  Optional. Enable structured output. Default is true.

  This is useful when the JSON Schema contains elements that are not supported by the OpenAPI schema version that Google Vertex uses. You can use this to disable structured outputs if you need to.

  See Troubleshooting: Schema Limitations for more details.

- `safetySettings` _Array&lt;{ category: string; threshold: string }&gt;_

  Optional. Safety settings for the model.

  - `category` _string_

    The category of the safety setting. Can be one of the following:

    - `HARM_CATEGORY_UNSPECIFIED`
    - `HARM_CATEGORY_HATE_SPEECH`
    - `HARM_CATEGORY_DANGEROUS_CONTENT`
    - `HARM_CATEGORY_HARASSMENT`
    - `HARM_CATEGORY_SEXUALLY_EXPLICIT`
    - `HARM_CATEGORY_CIVIC_INTEGRITY`

  - `threshold` _string_

    The threshold of the safety setting. Can be one of the following:

    - `HARM_BLOCK_THRESHOLD_UNSPECIFIED`
    - `BLOCK_LOW_AND_ABOVE`
    - `BLOCK_MEDIUM_AND_ABOVE`
    - `BLOCK_ONLY_HIGH`
    - `BLOCK_NONE`

- `useSearchGrounding` _boolean_

  Optional. When enabled, the model will use Google search to ground the response.

- `audioTimestamp` _boolean_

  Optional. Enables timestamp understanding for audio files. Defaults to false.

  This is useful for generating transcripts with accurate timestamps. Consult Google's documentation for usage details.
You can use Google Vertex language models to generate text with the `generateText` function:

```ts
import { vertex } from '@ai-sdk/google-vertex';
import { generateText } from 'ai';

const { text } = await generateText({
  model: vertex('gemini-1.5-pro'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
Google Vertex language models can also be used in the `streamText` and `streamUI` functions (see AI SDK Core and AI SDK RSC).
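For example, streaming text output looks like this (a sketch; running it requires valid Google Cloud credentials and the installed packages):

```typescript
import { vertex } from '@ai-sdk/google-vertex';
import { streamText } from 'ai';

// Stream tokens as they are generated instead of waiting for the full response.
const { textStream } = streamText({
  model: vertex('gemini-1.5-pro'),
  prompt: 'Explain the difference between Vertex AI and the Gemini API.',
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```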
### File Inputs
The Google Vertex provider supports file inputs, e.g. PDF files.
```ts
import { vertex } from '@ai-sdk/google-vertex';
import { generateText } from 'ai';
import fs from 'node:fs';

const { text } = await generateText({
  model: vertex('gemini-1.5-pro'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'What is an embedding model according to this document?',
        },
        {
          type: 'file',
          data: fs.readFileSync('./data/ai.pdf'),
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});
```
The AI SDK will automatically download URLs if you pass them as data, except for `gs://` URLs. You can use the Google Cloud Storage API to upload larger files to that location.
See File Parts for details on how to use files in prompts.
### Search Grounding
With search grounding, the model has access to the latest information using Google search. Search grounding can be used to provide answers around current events:
```ts
import { vertex } from '@ai-sdk/google-vertex';
import { GoogleGenerativeAIProviderMetadata } from '@ai-sdk/google';
import { generateText } from 'ai';

const { text, experimental_providerMetadata } = await generateText({
  model: vertex('gemini-1.5-pro', {
    useSearchGrounding: true,
  }),
  prompt:
    'List the top 5 San Francisco news from the past week.' +
    'You must include the date of each article.',
});

// access the grounding metadata. Casting to the provider metadata type
// is optional but provides autocomplete and type safety.
const metadata = experimental_providerMetadata?.google as
  | GoogleGenerativeAIProviderMetadata
  | undefined;
const groundingMetadata = metadata?.groundingMetadata;
const safetyRatings = metadata?.safetyRatings;
```
The grounding metadata includes detailed information about how search results were used to ground the model's response. Here are the available fields:
- `webSearchQueries` (`string[] | null`)

  - Array of search queries used to retrieve information
  - Example: `["What's the weather in Chicago this weekend?"]`

- `searchEntryPoint` (`{ renderedContent: string } | null`)

  - Contains the main search result content used as an entry point
  - The `renderedContent` field contains the formatted content

- `groundingSupports` (Array of support objects | null)

  - Contains details about how specific response parts are supported by search results
  - Each support object includes:
    - `segment`: Information about the grounded text segment
      - `text`: The actual text segment
      - `startIndex`: Starting position in the response
      - `endIndex`: Ending position in the response
    - `groundingChunkIndices`: References to supporting search result chunks
    - `confidenceScores`: Confidence scores (0-1) for each supporting chunk
Example response excerpt:

```json
{
  "groundingMetadata": {
    "retrievalQueries": ["What's the weather in Chicago this weekend?"],
    "searchEntryPoint": { "renderedContent": "..." },
    "groundingSupports": [
      {
        "segment": {
          "startIndex": 0,
          "endIndex": 65,
          "text": "Chicago weather changes rapidly, so layers let you adjust easily."
        },
        "groundingChunkIndices": [0],
        "confidenceScores": [0.99]
      }
    ]
  }
}
```
The safety ratings provide insight into the safety of the model's response. See Google Vertex AI documentation on configuring safety filters.
Example response excerpt:

```json
{
  "safetyRatings": [
    {
      "category": "HARM_CATEGORY_HATE_SPEECH",
      "probability": "NEGLIGIBLE",
      "probabilityScore": 0.11027937,
      "severity": "HARM_SEVERITY_LOW",
      "severityScore": 0.28487435
    },
    {
      "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
      "probability": "HIGH",
      "blocked": true,
      "probabilityScore": 0.95422274,
      "severity": "HARM_SEVERITY_MEDIUM",
      "severityScore": 0.43398145
    },
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "probability": "NEGLIGIBLE",
      "probabilityScore": 0.11085559,
      "severity": "HARM_SEVERITY_NEGLIGIBLE",
      "severityScore": 0.19027223
    },
    {
      "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      "probability": "NEGLIGIBLE",
      "probabilityScore": 0.22901751,
      "severity": "HARM_SEVERITY_NEGLIGIBLE",
      "severityScore": 0.09089675
    }
  ]
}
```
The Google Vertex provider does not yet support dynamic retrieval mode and threshold.
For more details, see the Google Vertex AI documentation on grounding with Google Search.
## Troubleshooting

### Schema Limitations
The Google Vertex API uses a subset of the OpenAPI 3.0 schema, which does not support features such as unions. The errors that you get in this case look like this:
```
GenerateContentRequest.generation_config.response_schema.properties[occupation].type: must be specified
```
By default, structured outputs are enabled (and for tool calling they are required). You can disable structured outputs for object generation as a workaround:
```ts
import { generateObject } from 'ai';
import { vertex } from '@ai-sdk/google-vertex';
import { z } from 'zod';

const result = await generateObject({
  model: vertex('gemini-1.5-pro', {
    structuredOutputs: false,
  }),
  schema: z.object({
    name: z.string(),
    age: z.number(),
    contact: z.union([
      z.object({
        type: z.literal('email'),
        value: z.string(),
      }),
      z.object({
        type: z.literal('phone'),
        value: z.string(),
      }),
    ]),
  }),
  prompt: 'Generate an example person for testing.',
});
```
### Model Capabilities
| Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
| --- | --- | --- | --- | --- |
| `gemini-2.0-flash-exp` | ✓ | ✓ | ✓ | ✓ |
| `gemini-1.5-flash` | ✓ | ✓ | ✓ | ✓ |
| `gemini-1.5-pro` | ✓ | ✓ | ✓ | ✓ |
The table above lists popular models. Please see the Google Vertex AI docs for a full list of available models. You can also pass any available provider model ID as a string if needed.
## Embedding Models
You can create models that call the Google Vertex AI embeddings API using the `.textEmbeddingModel()` factory method:

```ts
const model = vertex.textEmbeddingModel('text-embedding-004');
```
Google Vertex AI embedding models support additional settings. You can pass them as an options argument:

```ts
const model = vertex.textEmbeddingModel('text-embedding-004', {
  outputDimensionality: 512, // optional, number of dimensions for the embedding
});
```
The following optional settings are available for Google Vertex AI embedding models:

- `outputDimensionality` _number_

  Optional reduced dimension for the output embedding. If set, excessive values in the output embedding are truncated from the end.
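You can then use the model with the `embed` function from the AI SDK core (a sketch; running it requires valid Google Cloud credentials):

```typescript
import { vertex } from '@ai-sdk/google-vertex';
import { embed } from 'ai';

// Generate an embedding vector for a single text value.
const { embedding } = await embed({
  model: vertex.textEmbeddingModel('text-embedding-004'),
  value: 'sunny day at the beach',
});
```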
### Model Capabilities
| Model | Max Values Per Call | Parallel Calls |
| --- | --- | --- |
| `text-embedding-004` | 2048 | ✓ |
The table above lists popular models. You can also pass any available provider model ID as a string if needed.
## Image Models
You can create Imagen models that call the Imagen on Vertex AI API using the `.image()` factory method. For more on image generation with the AI SDK see generateImage().

Note that Imagen does not support an explicit size parameter. Instead, the output size is driven by the aspect ratio of the generated image.
```ts
import { vertex } from '@ai-sdk/google-vertex';
import { experimental_generateImage as generateImage } from 'ai';

const { image } = await generateImage({
  model: vertex.image('imagen-3.0-generate-001'),
  prompt: 'A futuristic cityscape at sunset',
  providerOptions: {
    vertex: { aspectRatio: '16:9' },
  },
});
```
### Model Capabilities

| Model | Supported Sizes |
| --- | --- |
| `imagen-3.0-generate-001` | See aspect ratios |
| `imagen-3.0-fast-generate-001` | See aspect ratios |
## Google Vertex Anthropic Provider Usage
The Google Vertex Anthropic provider for the AI SDK offers support for Anthropic's Claude models through the Google Vertex AI APIs. This section provides details on how to set up and use the Google Vertex Anthropic provider.
### Provider Instance
You can import the default provider instance `vertexAnthropic` from `@ai-sdk/google-vertex/anthropic`:

```ts
import { vertexAnthropic } from '@ai-sdk/google-vertex/anthropic';
```
If you need a customized setup, you can import `createVertexAnthropic` from `@ai-sdk/google-vertex/anthropic` and create a provider instance with your settings:

```ts
import { createVertexAnthropic } from '@ai-sdk/google-vertex/anthropic';

const vertexAnthropic = createVertexAnthropic({
  project: 'my-project', // optional
  location: 'us-central1', // optional
});
```
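The provider instance is then used like the Google Vertex provider above, e.g. with `generateText` (a sketch; running it requires valid Google Cloud credentials and access to the Claude partner models):

```typescript
import { vertexAnthropic } from '@ai-sdk/google-vertex/anthropic';
import { generateText } from 'ai';

// Call a Claude partner model through Vertex AI.
const { text } = await generateText({
  model: vertexAnthropic('claude-3-5-sonnet-v2@20241022'),
  prompt: 'Summarize the benefits of running Claude on Vertex AI.',
});
```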
### Node.js Runtime
For Node.js environments, the Google Vertex Anthropic provider supports all standard Google Cloud authentication options through the `google-auth-library`. You can customize the authentication options by passing them to the `createVertexAnthropic` function:

```ts
import { createVertexAnthropic } from '@ai-sdk/google-vertex/anthropic';

const vertexAnthropic = createVertexAnthropic({
  googleAuthOptions: {
    credentials: {
      client_email: 'my-email',
      private_key: 'my-private-key',
    },
  },
});
```
### Optional Provider Settings
You can use the following optional settings to customize the Google Vertex Anthropic provider instance:
- `project` _string_

  The Google Cloud project ID that you want to use for the API calls. It uses the `GOOGLE_VERTEX_PROJECT` environment variable by default.

- `location` _string_

  The Google Cloud location that you want to use for the API calls, e.g. `us-central1`. It uses the `GOOGLE_VERTEX_LOCATION` environment variable by default.

- `googleAuthOptions` _object_

  Optional. The authentication options used by the Google Auth Library. See also the GoogleAuthOptions interface.

  - `authClient` _object_: An `AuthClient` to use.
  - `keyFilename` _string_: Path to a .json, .pem, or .p12 key file.
  - `keyFile` _string_: Path to a .json, .pem, or .p12 key file.
  - `credentials` _object_: Object containing `client_email` and `private_key` properties, or the external account client options.
  - `clientOptions` _object_: Options object passed to the constructor of the client.
  - `scopes` _string | string[]_: Required scopes for the desired API request.
  - `projectId` _string_: Your project ID.
  - `universeDomain` _string_: The default service domain for a given Cloud universe.

- `headers` _Resolvable&lt;Record&lt;string, string | undefined&gt;&gt;_

  Headers to include in the requests. Can be provided in multiple formats:

  - A record of header key-value pairs: `Record<string, string | undefined>`
  - A function that returns headers: `() => Record<string, string | undefined>`
  - An async function that returns headers: `async () => Record<string, string | undefined>`
  - A promise that resolves to headers: `Promise<Record<string, string | undefined>>`

- `fetch` _(input: RequestInfo, init?: RequestInit) =&gt; Promise&lt;Response&gt;_

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
### Edge Runtime
Edge runtimes (like Vercel Edge Functions and Cloudflare Workers) are lightweight JavaScript environments that run closer to users at the network edge. They only provide a subset of the standard Node.js APIs. For example, direct file system access is not available, and many Node.js-specific libraries (including the standard Google Auth library) are not compatible.
The Edge runtime version of the Google Vertex Anthropic provider supports Google's Application Default Credentials through environment variables. The values can be obtained from a JSON credentials file from the Google Cloud Console.
For Edge runtimes, you can import the provider instance from `@ai-sdk/google-vertex/anthropic/edge`:

```ts
import { vertexAnthropic } from '@ai-sdk/google-vertex/anthropic/edge';
```
To customize the setup, use `createVertexAnthropic` from the same module:

```ts
import { createVertexAnthropic } from '@ai-sdk/google-vertex/anthropic/edge';

const vertexAnthropic = createVertexAnthropic({
  project: 'my-project', // optional
  location: 'us-central1', // optional
});
```
For Edge runtime authentication, set these environment variables from your Google Default Application Credentials JSON file:

- `GOOGLE_CLIENT_EMAIL`
- `GOOGLE_PRIVATE_KEY`
- `GOOGLE_PRIVATE_KEY_ID` (optional)
### Optional Provider Settings
You can use the following optional settings to customize the provider instance:
- `project` _string_

  The Google Cloud project ID that you want to use for the API calls. It uses the `GOOGLE_VERTEX_PROJECT` environment variable by default.

- `location` _string_

  The Google Cloud location that you want to use for the API calls, e.g. `us-central1`. It uses the `GOOGLE_VERTEX_LOCATION` environment variable by default.

- `googleCredentials` _object_

  Optional. The credentials used by the Edge provider for authentication. These credentials are typically set through environment variables and are derived from a service account JSON file.

  - `clientEmail` _string_: The client email from the service account JSON file. Defaults to the contents of the `GOOGLE_CLIENT_EMAIL` environment variable.
  - `privateKey` _string_: The private key from the service account JSON file. Defaults to the contents of the `GOOGLE_PRIVATE_KEY` environment variable.
  - `privateKeyId` _string_: The private key ID from the service account JSON file (optional). Defaults to the contents of the `GOOGLE_PRIVATE_KEY_ID` environment variable.

- `headers` _Resolvable&lt;Record&lt;string, string | undefined&gt;&gt;_

  Headers to include in the requests. Can be provided in multiple formats:

  - A record of header key-value pairs: `Record<string, string | undefined>`
  - A function that returns headers: `() => Record<string, string | undefined>`
  - An async function that returns headers: `async () => Record<string, string | undefined>`
  - A promise that resolves to headers: `Promise<Record<string, string | undefined>>`

- `fetch` _(input: RequestInfo, init?: RequestInit) =&gt; Promise&lt;Response&gt;_

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
### Computer Use
Anthropic provides three built-in tools that can be used to interact with external systems:
- Bash Tool: Allows running bash commands.
- Text Editor Tool: Provides functionality for viewing and editing text files.
- Computer Tool: Enables control of keyboard and mouse actions on a computer.
They are available via the `tools` property of the provider instance. For more background see Anthropic's Computer Use documentation.
#### Bash Tool
The Bash Tool allows running bash commands. Here's how to create and use it:
```ts
const bashTool = vertexAnthropic.tools.bash_20241022({
  execute: async ({ command, restart }) => {
    // Implement your bash command execution logic here
    // Return the result of the command execution
  },
});
```
Parameters:

- `command` (string): The bash command to run. Required unless the tool is being restarted.
- `restart` (boolean, optional): Specifying true will restart this tool.
#### Text Editor Tool
The Text Editor Tool provides functionality for viewing and editing text files:
```ts
const textEditorTool = vertexAnthropic.tools.textEditor_20241022({
  execute: async ({
    command,
    path,
    file_text,
    insert_line,
    new_str,
    old_str,
    view_range,
  }) => {
    // Implement your text editing logic here
    // Return the result of the text editing operation
  },
});
```
Parameters:

- `command` ('view' | 'create' | 'str_replace' | 'insert' | 'undo_edit'): The command to run.
- `path` (string): Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.
- `file_text` (string, optional): Required for the `create` command, with the content of the file to be created.
- `insert_line` (number, optional): Required for the `insert` command. The line number after which to insert the new string.
- `new_str` (string, optional): New string for the `str_replace` or `insert` commands.
- `old_str` (string, optional): Required for the `str_replace` command, containing the string to replace.
- `view_range` (number[], optional): Optional for the `view` command to specify the line range to show.
#### Computer Tool
The Computer Tool enables control of keyboard and mouse actions on a computer:
```ts
const computerTool = vertexAnthropic.tools.computer_20241022({
  displayWidthPx: 1920,
  displayHeightPx: 1080,
  displayNumber: 0, // Optional, for X11 environments

  execute: async ({ action, coordinate, text }) => {
    // Implement your computer control logic here
    // Return the result of the action

    // Example code:
    switch (action) {
      case 'screenshot': {
        // multipart result:
        return {
          type: 'image',
          data: fs
            .readFileSync('./data/screenshot-editor.png')
            .toString('base64'),
        };
      }
      default: {
        console.log('Action:', action);
        console.log('Coordinate:', coordinate);
        console.log('Text:', text);
        return `executed ${action}`;
      }
    }
  },

  // map to tool result content for LLM consumption:
  experimental_toToolResultContent(result) {
    return typeof result === 'string'
      ? [{ type: 'text', text: result }]
      : [{ type: 'image', data: result.data, mimeType: 'image/png' }];
  },
});
```
Parameters:

- `action` ('key' | 'type' | 'mouse_move' | 'left_click' | 'left_click_drag' | 'right_click' | 'middle_click' | 'double_click' | 'screenshot' | 'cursor_position'): The action to perform.
- `coordinate` (number[], optional): Required for the `mouse_move` and `left_click_drag` actions. Specifies the (x, y) coordinates.
- `text` (string, optional): Required for the `type` and `key` actions.
These tools can be used in conjunction with the `claude-3-5-sonnet-v2@20241022` model to enable more complex interactions and tasks.
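For example, a tool created this way can be passed to `generateText` (a sketch; the tool key `bash` and the placeholder `execute` body are illustrative, and running it requires valid Google Cloud credentials):

```typescript
import { vertexAnthropic } from '@ai-sdk/google-vertex/anthropic';
import { generateText } from 'ai';

const { text } = await generateText({
  model: vertexAnthropic('claude-3-5-sonnet-v2@20241022'),
  tools: {
    // Tool key is illustrative; the execute body is a placeholder
    // for your real bash execution logic.
    bash: vertexAnthropic.tools.bash_20241022({
      execute: async ({ command }) => `ran: ${command}`,
    }),
  },
  prompt: 'List the files in the current directory.',
  maxSteps: 5, // allow the model to call tools and continue
});
```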
### Model Capabilities
The latest Anthropic model list on Vertex AI is available here. See also Anthropic Model Comparison.
| Model | Image Input | Object Generation | Tool Usage | Tool Streaming | Computer Use |
| --- | --- | --- | --- | --- | --- |
| `claude-3-5-sonnet-v2@20241022` | ✓ | ✓ | ✓ | ✓ | ✓ |
| `claude-3-5-sonnet@20240620` | ✓ | ✓ | ✓ | ✓ | ✗ |
| `claude-3-5-haiku@20241022` | ✗ | ✓ | ✓ | ✓ | ✗ |
| `claude-3-sonnet@20240229` | ✓ | ✓ | ✓ | ✓ | ✗ |
| `claude-3-haiku@20240307` | ✓ | ✓ | ✓ | ✓ | ✗ |
| `claude-3-opus@20240229` | ✓ | ✓ | ✓ | ✓ | ✗ |
The table above lists popular models. You can also pass any available provider model ID as a string if needed.