{"id":14155842,"url":"https://github.com/googleapis/nodejs-vertexai","last_synced_at":"2025-04-04T09:09:37.177Z","repository":{"id":211315737,"uuid":"720054069","full_name":"googleapis/nodejs-vertexai","owner":"googleapis","description":null,"archived":false,"fork":false,"pushed_at":"2024-08-26T22:48:22.000Z","size":470,"stargazers_count":105,"open_issues_count":30,"forks_count":35,"subscribers_count":17,"default_branch":"main","last_synced_at":"2024-08-28T04:34:26.510Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/googleapis.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-11-17T13:35:34.000Z","updated_at":"2024-08-26T22:48:18.000Z","dependencies_parsed_at":"2023-12-11T14:41:06.629Z","dependency_job_id":"17c0fd5c-b22c-4884-9e97-1e07afecaa8a","html_url":"https://github.com/googleapis/nodejs-vertexai","commit_stats":null,"previous_names":["googleapis/nodejs-vertexai"],"tags_count":20,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/googleapis%2Fnodejs-vertexai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/googleapis%2Fnodejs-vertexai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/googleapis%2Fnodejs-vertexai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/googleapis%2Fnodejs-vertexai/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/googleapis","download_url":"https://codeload.github.com/googleapis/nodejs-vertexai/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247149502,"owners_count":20891954,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-17T08:05:02.461Z","updated_at":"2025-04-04T09:09:37.154Z","avatar_url":"https://github.com/googleapis.png","language":"TypeScript","funding_links":[],"categories":["others"],"sub_categories":[],"readme":"[![NPM Downloads](https://img.shields.io/npm/dm/%40google-cloud%2Fvertexai)](https://www.npmjs.com/package/@google-cloud/vertexai)\n[![Node Current](https://img.shields.io/node/v/%40google-cloud%2Fvertexai)](https://www.npmjs.com/package/@google-cloud/vertexai)\n\n\u003e [!NOTE] A new Javascript/Typescript SDK, `@google/genai`\n\u003e ([github](https://github.com/googleapis/js-genai/tree/main)), is currently\n\u003e available in a *experimental preview launch* - designed to work with Gemini\n\u003e 2.0 features. 
and supporting both the Gemini API and the Vertex AI API.\n\n# Vertex AI SDK for Node.js quickstart\n\nThe Vertex AI SDK for Node.js lets you use the Vertex AI Gemini API to build\nAI-powered features and applications. Both TypeScript and JavaScript are supported.\nThe sample code in this document is written in JavaScript only.\n\nFor detailed samples using the Vertex AI Node.js SDK, see the\n[samples repository](https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/main/generative-ai/snippets)\non GitHub.\n\nFor the latest list of available Gemini models on Vertex AI, see the\n[Model information](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models#gemini-models)\npage in the Vertex AI documentation.\n\n## Before you begin\n\n1.  Make sure your Node.js version is 18 or above.\n1.  [Select](https://console.cloud.google.com/project) or [create](https://cloud.google.com/resource-manager/docs/creating-managing-projects#creating_a_project) a Google Cloud project.\n1.  [Enable billing for your project](https://cloud.google.com/billing/docs/how-to/modify-project).\n1.  [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n1.  [Install the gcloud CLI](https://cloud.google.com/sdk/docs/install).\n1.  [Initialize the gcloud CLI](https://cloud.google.com/sdk/docs/initializing).\n1.  Create local authentication credentials for your user account:\n\n    ```sh\n    gcloud auth application-default login\n    ```\n\n    The accepted authentication options are listed in the [GoogleAuthOptions](https://github.com/googleapis/google-auth-library-nodejs/blob/3ae120d0a45c95e36c59c9ac8286483938781f30/src/auth/googleauth.ts#L87) interface of the google-auth-library-nodejs GitHub repo.\n1.  Official documentation is available on the [Vertex AI SDK Overview](https://cloud.google.com/vertex-ai/generative-ai/docs/reference/nodejs/latest/overview) page, which links to complete documentation for classes, interfaces, and enums.\n\n## Install the SDK\n\nInstall the Vertex AI SDK for Node.js by running the following command:\n\n```shell\nnpm install @google-cloud/vertexai\n```\n\n## Initialize the `VertexAI` class\n\nTo use the Vertex AI SDK for Node.js, create an instance of `VertexAI` by\npassing it your Google Cloud project ID and location. 
Then create an instance of\nthe `GenerativeModel` class using the `VertexAI` class methods.\n\n```javascript\nconst {\n  FunctionDeclarationSchemaType,\n  HarmBlockThreshold,\n  HarmCategory,\n  VertexAI\n} = require('@google-cloud/vertexai');\n\nconst project = 'your-cloud-project';\nconst location = 'us-central1';\nconst textModel = 'gemini-1.5-flash';\nconst visionModel = 'gemini-1.5-flash';\n\nconst vertexAI = new VertexAI({project: project, location: location});\n\n// Instantiate Gemini models\nconst generativeModel = vertexAI.getGenerativeModel({\n    model: textModel,\n    // The following parameters are optional\n    // They can also be passed to individual content generation requests\n    safetySettings: [{category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE}],\n    generationConfig: {maxOutputTokens: 256},\n    systemInstruction: {\n      role: 'system',\n      parts: [{text: `For example, you are a helpful customer service agent.`}]\n    },\n});\n\nconst generativeVisionModel = vertexAI.getGenerativeModel({\n    model: visionModel,\n});\n\nconst generativeModelPreview = vertexAI.preview.getGenerativeModel({\n    model: textModel,\n});\n```\n\n## Send text prompt requests\n\nYou can send text prompt requests by using `generateContentStream` for streamed\nresponses, or `generateContent` for nonstreamed responses.\n\n### Get streamed text responses\n\nThe response is returned in chunks as it's generated, to reduce the\nperceived latency for a human reader.\n\n```javascript\nasync function streamGenerateContent() {\n  const request = {\n    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],\n  };\n  const streamingResult = await generativeModel.generateContentStream(request);\n  for await (const item of streamingResult.stream) {\n    console.log('stream chunk: ', JSON.stringify(item));\n  }\n  const aggregatedResponse = await streamingResult.response;\n  console.log('aggregated response: ', JSON.stringify(aggregatedResponse));\n}\n\nstreamGenerateContent();\n```\n\n### Get nonstreamed text responses\n\nThe response is returned all at once.\n\n```javascript\nasync function generateContent() {\n  const request = {\n    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],\n  };\n  const result = await generativeModel.generateContent(request);\n  const response = result.response;\n  console.log('Response: ', JSON.stringify(response));\n}\n\ngenerateContent();\n```\n\n## Send multiturn chat requests\n\nChat requests use previous messages as context when responding to new prompts.\nTo send multiturn chat requests, use `sendMessageStream` for streamed responses,\nor `sendMessage` for nonstreamed responses.\n\n### Get streamed chat responses\n\nThe response is returned in chunks as it's generated, to reduce the\nperceived latency for a human reader.\n\n```javascript\nasync function streamChat() {\n  const chat = generativeModel.startChat();\n  const chatInput = 'How can I learn more about Node.js?';\n  const result = await chat.sendMessageStream(chatInput);\n  for await (const item of result.stream) {\n    console.log('Stream chunk: ', item.candidates[0].content.parts[0].text);\n  }\n  const aggregatedResponse = await result.response;\n  console.log('Aggregated response: ', JSON.stringify(aggregatedResponse));\n}\n\nstreamChat();\n```\n\n### Get nonstreamed chat responses\n\nThe response is returned all at once.\n\n```javascript\nasync function sendChat() {\n  const chat = generativeModel.startChat();\n  const chatInput = 'How can I learn more about Node.js?';\n  const result = await chat.sendMessage(chatInput);\n  const response = result.response;\n  console.log('response: ', JSON.stringify(response));\n}\n\nsendChat();\n```\n
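\n### Start a chat with existing history\n\nBecause chat requests use previous messages as context, you can also seed a new\nchat session with earlier turns. The following is a minimal sketch: the\n`history` field of `startChat` takes the same content objects used in\n`contents`, and the seeded messages shown here are illustrative.\n\n```javascript\nasync function chatWithHistory() {\n  // Seed the chat with earlier turns; the model treats them as prior context.\n  const chat = generativeModel.startChat({\n    history: [\n      {role: 'user', parts: [{text: 'Hello, I am learning Node.js.'}]},\n      {role: 'model', parts: [{text: 'Great! Ask me anything about Node.js.'}]},\n    ],\n  });\n  const result = await chat.sendMessage('Given that, what should I learn first?');\n  console.log('Response: ', JSON.stringify(result.response));\n}\n\nchatWithHistory();\n```\n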
\n## Include images or videos in your prompt request\n\nPrompt requests can include either an image or a video in addition to text.\nFor more information, see\n[Send multimodal prompt requests](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/send-multimodal-prompts)\nin the Vertex AI documentation.\n\n### Include an image\n\nYou can include images in the prompt either by specifying the Cloud Storage URI\nwhere the image is located or by including a base64 encoding of the image.\n\n#### Specify a Cloud Storage URI of the image\n\nYou can specify the Cloud Storage URI of the image in `fileUri`.\n\n```javascript\nasync function multiPartContent() {\n    const filePart = {fileData: {fileUri: 'gs://generativeai-downloads/images/scones.jpg', mimeType: 'image/jpeg'}};\n    const textPart = {text: 'What is this picture about?'};\n    const request = {\n        contents: [{role: 'user', parts: [textPart, filePart]}],\n    };\n    const streamingResult = await generativeVisionModel.generateContentStream(request);\n    for await (const item of streamingResult.stream) {\n      console.log('stream chunk: ', JSON.stringify(item));\n    }\n    const aggregatedResponse = await streamingResult.response;\n    console.log(aggregatedResponse.candidates[0].content);\n}\n\nmultiPartContent();\n```\n\n#### Specify a base64 image encoding string\n\nYou can specify the base64 image encoding string in `data`.\n\n```javascript\nasync function multiPartContentImageString() {\n    // Replace this with your own base64 image string\n    const base64Image = 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg==';\n    const filePart = {inlineData: {data: base64Image, mimeType: 'image/jpeg'}};\n    const textPart = {text: 'What is this picture about?'};\n    const request = {\n        contents: [{role: 'user', parts: [textPart, filePart]}],\n    };\n    const streamingResult = await generativeVisionModel.generateContentStream(request);\n    const contentResponse = await streamingResult.response;\n    console.log(contentResponse.candidates[0].content.parts[0].text);\n}\n\nmultiPartContentImageString();\n```\n\n### Include a video\n\nYou can include videos in the prompt by specifying the Cloud Storage URI\nwhere the video is located in `fileUri`.\n\n```javascript\nasync function multiPartContentVideo() {\n    const filePart = {fileData: {fileUri: 'gs://cloud-samples-data/video/animals.mp4', mimeType: 'video/mp4'}};\n    const textPart = {text: 'What is in the video?'};\n    const request = {\n        contents: [{role: 'user', parts: [textPart, filePart]}],\n    };\n    const streamingResult = await generativeVisionModel.generateContentStream(request);\n    for await (const item of streamingResult.stream) {\n      console.log('stream chunk: ', JSON.stringify(item));\n    }\n    const aggregatedResponse = await streamingResult.response;\n    console.log(aggregatedResponse.candidates[0].content);\n}\n\nmultiPartContentVideo();\n```\n\n## Function calling\n\nThe Vertex AI SDK for Node.js supports\n[function calling](https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling)\nin the `sendMessage`, `sendMessageStream`, `generateContent`, and\n`generateContentStream` methods. 
We recommend using it through the chat methods\n(`sendMessage` or `sendMessageStream`), but examples of both\napproaches are included below.\n\n### Declare a function\n\nThe following example shows how to declare a function, along with the\nfunction response to send back once the model requests the call.\n\n```javascript\n// A tools array containing one function declaration\nconst functionDeclarations = [\n  {\n    functionDeclarations: [\n      {\n        name: 'get_current_weather',\n        description: 'get weather in a given location',\n        parameters: {\n          type: FunctionDeclarationSchemaType.OBJECT,\n          properties: {\n            location: {type: FunctionDeclarationSchemaType.STRING},\n            unit: {\n              type: FunctionDeclarationSchemaType.STRING,\n              enum: ['celsius', 'fahrenheit'],\n            },\n          },\n          required: ['location'],\n        },\n      },\n    ],\n  },\n];\n\n// The function result to return to the model after the call is made\nconst functionResponseParts = [\n  {\n    functionResponse: {\n      name: 'get_current_weather',\n      response:\n          {name: 'get_current_weather', content: {weather: 'super nice'}},\n    },\n  },\n];\n```\n\n### Function calling using `sendMessageStream`\n\nAfter the function is declared, you can pass it to the model in the\n`tools` parameter of the prompt request.\n\n```javascript\nasync function functionCallingChat() {\n  // Create a chat session and pass your function declarations\n  const chat = generativeModel.startChat({\n    tools: functionDeclarations,\n  });\n\n  const chatInput1 = 'What is the weather in Boston?';\n\n  // This should include a functionCall response from the model\n  const streamingResult1 = await chat.sendMessageStream(chatInput1);\n  for await (const item of streamingResult1.stream) {\n    console.log(item.candidates[0]);\n  }\n  const response1 = await streamingResult1.response;\n  console.log('first aggregated response: ', JSON.stringify(response1));\n\n  // Send a follow-up message with a FunctionResponse\n  const streamingResult2 = await chat.sendMessageStream(functionResponseParts);\n  for await (const item of streamingResult2.stream) {\n    console.log(item.candidates[0]);\n  }\n\n  // This should include a text response from the model using the response content\n  // provided above\n  const response2 = await streamingResult2.response;\n  console.log('second aggregated response: ', JSON.stringify(response2));\n}\n\nfunctionCallingChat();\n```\n\n### Function calling using `generateContentStream`\n\n```javascript\nasync function functionCallingGenerateContentStream() {\n  const request = {\n    contents: [\n      {role: 'user', parts: [{text: 'What is the weather in Boston?'}]},\n      {role: 'model', parts: [{functionCall: {name: 'get_current_weather', args: {'location': 'Boston'}}}]},\n      {role: 'user', parts: functionResponseParts}\n    ],\n    tools: functionDeclarations,\n  };\n  const streamingResult =\n      await generativeModel.generateContentStream(request);\n  for await (const item of streamingResult.stream) {\n    console.log(item.candidates[0]);\n  }\n}\n\nfunctionCallingGenerateContentStream();\n```\n
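\n### Function calling using `sendMessage`\n\nThe nonstreamed chat flow works the same way. The following is a minimal\nsketch that reuses the `functionDeclarations` and `functionResponseParts`\ndefined above; it reads the model's `functionCall` part from the aggregated\nresponse instead of from stream chunks.\n\n```javascript\nasync function functionCallingSendMessage() {\n  const chat = generativeModel.startChat({tools: functionDeclarations});\n\n  // First turn: the model should reply with a functionCall part.\n  const result1 = await chat.sendMessage('What is the weather in Boston?');\n  const firstPart = result1.response.candidates[0].content.parts[0];\n  console.log('Function call: ', JSON.stringify(firstPart.functionCall));\n\n  // Second turn: send back the function's result as a FunctionResponse.\n  const result2 = await chat.sendMessage(functionResponseParts);\n  console.log('Final response: ', JSON.stringify(result2.response));\n}\n\nfunctionCallingSendMessage();\n```\n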
\n## Counting tokens\n\n```javascript\nasync function countTokens() {\n  const request = {\n    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],\n  };\n  const response = await generativeModel.countTokens(request);\n  console.log('count tokens response: ', JSON.stringify(response));\n}\n\ncountTokens();\n```\n\n## Grounding (Preview)\n\nGrounding is a preview-only feature.\n\nGrounding lets you connect model output to verifiable sources of information to\nreduce hallucination. You can specify Google Search or Vertex AI Search as the\ndata source for grounding.\n\n### Grounding using Google Search (Preview)\n\n```javascript\nasync function generateContentWithGoogleSearchGrounding() {\n  const generativeModelPreview = vertexAI.preview.getGenerativeModel({\n    model: textModel,\n    // The following parameters are optional\n    // They can also be passed to individual content generation requests\n    safetySettings: [{category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE}],\n    generationConfig: {maxOutputTokens: 256},\n  });\n\n  const googleSearchRetrievalTool = {\n    googleSearchRetrieval: {\n      disableAttribution: false,\n    },\n  };\n  const result = await generativeModelPreview.generateContent({\n    contents: [{role: 'user', parts: [{text: 'Why is the sky blue?'}]}],\n    tools: [googleSearchRetrievalTool],\n  });\n  const response = result.response;\n  const groundingMetadata = response.candidates[0].groundingMetadata;\n  console.log('GroundingMetadata is: ', JSON.stringify(groundingMetadata));\n}\n\ngenerateContentWithGoogleSearchGrounding();\n```\n\n### Grounding using Vertex AI Search (Preview)\n\n```javascript\nasync function generateContentWithVertexAISearchGrounding() {\n  const generativeModelPreview = vertexAI.preview.getGenerativeModel({\n    model: textModel,\n    // The following parameters are optional\n    // They can also be passed to individual content generation requests\n    safetySettings: [{category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE}],\n    generationConfig: {maxOutputTokens: 256},\n  });\n\n  const vertexAIRetrievalTool = {\n    retrieval: {\n      vertexAiSearch: {\n        datastore: 'projects/.../locations/.../collections/.../dataStores/...',\n      },\n      disableAttribution: false,\n    },\n  };\n  const result = await generativeModelPreview.generateContent({\n    contents: [{role: 'user', parts: [{text: 'Why is the sky blue?'}]}],\n    tools: [vertexAIRetrievalTool],\n  });\n  const response = result.response;\n  const groundingMetadata = response.candidates[0].groundingMetadata;\n  console.log('Grounding metadata is: ', JSON.stringify(groundingMetadata));\n}\n\ngenerateContentWithVertexAISearchGrounding();\n```\n\n## System instruction\n\nYou can include an optional system instruction when instantiating a generative\nmodel to provide additional context to the model.\n\nThe system instruction can also be passed to individual text prompt requests.\n\n### Include system instruction in generative model instantiation\n\n```javascript\nconst generativeModel = vertexAI.getGenerativeModel({\n    model: textModel,\n    // The following parameter is optional.\n    systemInstruction: {\n      role: 'system',\n      parts: [{text: `For example, you are a helpful customer service agent.`}]\n    },\n});\n```\n\n### Include system instruction in text prompt request\n\n```javascript\nasync function generateContent() {\n  const request = {\n    contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],\n    systemInstruction: {role: 'system', parts: [{text: `For example, you are a helpful customer service agent.`}]},\n  };\n  const result = await generativeModel.generateContent(request);\n  const response = result.response;\n  console.log('Response: ', JSON.stringify(response));\n}\n\ngenerateContent();\n```\n\n## FAQ\n\n### What if I want to specify authentication options instead of using the default options?\n\n**Step 1:** Find the accepted authentication options in the [GoogleAuthOptions](https://github.com/googleapis/google-auth-library-nodejs/blob/3ae120d0a45c95e36c59c9ac8286483938781f30/src/auth/googleauth.ts#L87) interface of the google-auth-library-nodejs GitHub repo.\n\n**Step 2:** Instantiate the `VertexAI` class, passing in your `GoogleAuthOptions` as follows:\n\n```javascript\nconst { VertexAI } = require('@google-cloud/vertexai');\n\n// GoogleAuthOptions is a TypeScript interface in google-auth-library, so\n// there is nothing to require at runtime; pass a plain object instead.\nconst vertexAI = new VertexAI({\n  project: 'your-cloud-project',\n  location: 'us-central1',\n  googleAuthOptions: {\n    // your GoogleAuthOptions fields\n  },\n});\n```\n
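\nFor example, to authenticate with a service account key file (a minimal\nsketch; the key path shown is illustrative, and `keyFile` and `scopes` are\nstandard `GoogleAuthOptions` fields):\n\n```javascript\nconst { VertexAI } = require('@google-cloud/vertexai');\n\n// A minimal sketch: authenticate with a service account key file.\n// The key path below is a placeholder, not a real file.\nconst vertexAI = new VertexAI({\n  project: 'your-cloud-project',\n  location: 'us-central1',\n  googleAuthOptions: {\n    keyFile: '/path/to/service-account-key.json',\n    scopes: ['https://www.googleapis.com/auth/cloud-platform'],\n  },\n});\n```\n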
\n## License\n\nThe contents of this repository are licensed under the\n[Apache License, version 2.0](http://www.apache.org/licenses/LICENSE-2.0).","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogleapis%2Fnodejs-vertexai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoogleapis%2Fnodejs-vertexai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogleapis%2Fnodejs-vertexai/lists"}