# Google Gen AI SDK

[![PyPI version](https://img.shields.io/pypi/v/google-genai.svg)](https://pypi.org/project/google-genai/)
![Python support](https://img.shields.io/pypi/pyversions/google-genai)
[![PyPI - Downloads](https://img.shields.io/pypi/dw/google-genai)](https://pypistats.org/packages/google-genai)

--------
**Documentation:** https://googleapis.github.io/python-genai/

-----

Google Gen AI Python SDK provides an interface for developers to integrate
Google's generative models into their Python applications. It supports the
[Gemini Developer API](https://ai.google.dev/gemini-api/docs) and
[Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)
APIs.

## Code Generation

Generative models are often unaware of recent API and SDK updates and may suggest outdated or legacy code.

We recommend using our Code Generation instructions [`codegen_instructions.md`](https://raw.githubusercontent.com/googleapis/python-genai/refs/heads/main/codegen_instructions.md) when generating Google Gen AI SDK code to guide your model towards using the more recent SDK features.
Copy and paste the instructions into your development environment to provide the model with the necessary context.

## Installation

```sh
pip install google-genai
```

<small>With `uv`:</small>

```sh
uv pip install google-genai
```

## Imports

```python
from google import genai
from google.genai import types
```

## Create a client

Please run one of the following code blocks to create a client for
different services ([Gemini Developer API](https://ai.google.dev/gemini-api/docs) or [Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview)).

```python
from google import genai

# Only run this block for Gemini Developer API
client = genai.Client(api_key='GEMINI_API_KEY')
```

```python
from google import genai

# Only run this block for Vertex AI API
client = genai.Client(
    vertexai=True, project='your-project-id', location='us-central1'
)
```

## Using types

All API methods support Pydantic types and dictionaries, which you can access
from `google.genai.types`.
You can import the types module with the following:

```python
from google.genai import types
```

Below is an example `generate_content()` call using types from the types module:

```python
response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents=types.Part.from_text(text='Why is the sky blue?'),
    config=types.GenerateContentConfig(
        temperature=0,
        top_p=0.95,
        top_k=20,
    ),
)
```

Alternatively, you can accomplish the same request using dictionaries instead of
types:

```python
response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents={'text': 'Why is the sky blue?'},
    config={
        'temperature': 0,
        'top_p': 0.95,
        'top_k': 20,
    },
)
```

**(Optional) Using environment variables:**

You can create a client by configuring the necessary environment variables.
Configuration setup instructions depend on whether you're using the Gemini
Developer API or the Gemini API in Vertex AI.

**Gemini Developer API:** Set `GEMINI_API_KEY` or `GOOGLE_API_KEY`;
it will automatically be picked up by the client. It's recommended that you
set only one of those variables, but if both are set, `GOOGLE_API_KEY` takes
precedence.
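A stdlib-only sketch of that precedence rule between the two variables (the key values are placeholders; this is illustrative, not SDK code):

```python
import os

# Placeholder values, for illustration only.
os.environ['GEMINI_API_KEY'] = 'gemini-key'
os.environ['GOOGLE_API_KEY'] = 'google-key'

# GOOGLE_API_KEY wins when both variables are set.
api_key = os.environ.get('GOOGLE_API_KEY') or os.environ.get('GEMINI_API_KEY')
print(api_key)  # google-key
```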
```bash
export GEMINI_API_KEY='your-api-key'
```

**Gemini API on Vertex AI:** Set `GOOGLE_GENAI_USE_VERTEXAI`,
`GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION`, as shown below:

```bash
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT='your-project-id'
export GOOGLE_CLOUD_LOCATION='us-central1'
```

```python
from google import genai

client = genai.Client()
```

## Close a client

Explicitly close the sync client to ensure that resources, such as the
underlying HTTP connections, are properly cleaned up and closed.

```python
from google.genai import Client

client = Client()
response_1 = client.models.generate_content(
    model=MODEL_ID,
    contents='Hello',
)
response_2 = client.models.generate_content(
    model=MODEL_ID,
    contents='Ask a question',
)
# Close the sync client to release resources.
client.close()
```

To explicitly close the async client:

```python
from google.genai import Client

aclient = Client(
    vertexai=True, project='my-project-id', location='us-central1'
).aio
response_1 = await aclient.models.generate_content(
    model=MODEL_ID,
    contents='Hello',
)
response_2 = await aclient.models.generate_content(
    model=MODEL_ID,
    contents='Ask a question',
)
# Close the async client to release resources.
await aclient.aclose()
```

## Client context managers

The sync client context manager closes the underlying sync client when
exiting the `with` block, avoiding httpx "client has been closed" errors
like [issue #1763](https://github.com/googleapis/python-genai/issues/1763).

```python
from google.genai import Client

with Client() as client:
    response_1 = client.models.generate_content(
        model=MODEL_ID,
        contents='Hello',
    )
    response_2 = client.models.generate_content(
        model=MODEL_ID,
        contents='Ask a question',
    )
```

The async client context manager closes the underlying async client when
exiting the `with` block.

```python
from google.genai import Client

async with Client().aio as aclient:
    response_1 = await aclient.models.generate_content(
        model=MODEL_ID,
        contents='Hello',
    )
    response_2 = await aclient.models.generate_content(
        model=MODEL_ID,
        contents='Ask a question',
    )
```

### API Selection

By default, the SDK uses the beta API endpoints provided by Google to support
preview features in the APIs. The stable API endpoints can be selected by
setting the API version to `v1`.

To set the API version, use `http_options`. For example, to set the API version
to `v1` for Vertex AI:

```python
from google import genai
from google.genai import types

client = genai.Client(
    vertexai=True,
    project='your-project-id',
    location='us-central1',
    http_options=types.HttpOptions(api_version='v1')
)
```

To set the API version to `v1alpha` for the Gemini Developer API:

```python
from google import genai
from google.genai import types

client = genai.Client(
    api_key='GEMINI_API_KEY',
    http_options=types.HttpOptions(api_version='v1alpha')
)
```

### Faster async client option: Aiohttp

By default we use httpx for both sync and async client implementations. For
faster performance, you may install `google-genai[aiohttp]`.
In the Gen AI SDK we configure `trust_env=True` to match the default behavior
of httpx. Additional args of `aiohttp.ClientSession.request()`
([see `_RequestOptions` args](https://github.com/aio-libs/aiohttp/blob/v3.12.13/aiohttp/client.py#L170)) can be passed
through the following way:

```python
http_options = types.HttpOptions(
    async_client_args={'cookies': ..., 'ssl': ...},
)

client = Client(..., http_options=http_options)
```

### Proxy

Both the httpx and aiohttp libraries read proxy settings from environment
variables via `urllib.request.getproxies`. Before client initialization, you
may set a proxy (and an optional `SSL_CERT_FILE`) through the environment:

```bash
export HTTPS_PROXY='http://username:password@proxy_uri:port'
export SSL_CERT_FILE='client.pem'
```

If you need a `socks5` proxy, httpx [supports](https://www.python-httpx.org/advanced/proxies/#socks) it if you pass it via
args to `httpx.Client()`. You may install `httpx[socks]` to use it.
Then, you can pass it through the following way:

```python
http_options = types.HttpOptions(
    client_args={'proxy': 'socks5://user:pass@host:port'},
    async_client_args={'proxy': 'socks5://user:pass@host:port'},
)

client = Client(..., http_options=http_options)
```

### Custom base url

In some cases you might need a custom base url (for example, an API gateway
proxy server) and to bypass some authentication checks for project, location,
or API key. You may pass the custom base url like this:

```python
client = Client(
    vertexai=True,
    http_options=types.HttpOptionsDict(
        base_url='https://test-api-gateway-proxy.com',
        base_url_resource_scope=types.ResourceScope.COLLECTION,
    ),
)

response = client.models.generate_content(
    model='gemini-3-pro-preview', contents='Why is the sky blue?'
)
```

If `base_url_resource_scope=types.ResourceScope.COLLECTION`, the resource name
will not include the API version, project, or location.

The expected
request url will be:
https://test-api-gateway-proxy.com/publishers/google/models/gemini-3-pro-preview


## Types

Parameter types can be specified as either dictionaries (`TypedDict`) or
[Pydantic Models](https://pydantic.readthedocs.io/en/stable/model.html).
Pydantic model types are available in the `types` module.

## Models

The `client.models` module exposes model inferencing and model getters.
See the 'Create a client' section above to initialize a client.

### Generate Content

#### with text content input (text output)

```python
response = client.models.generate_content(
    model='gemini-2.5-flash', contents='Why is the sky blue?'
)
print(response.text)
```

#### with text content input (image output)

```python
from google.genai import types

response = client.models.generate_content(
    model='gemini-2.5-flash-image',
    contents='A cartoon infographic for flying sneakers',
    config=types.GenerateContentConfig(
        response_modalities=["IMAGE"],
        image_config=types.ImageConfig(
            aspect_ratio="9:16",
        ),
    ),
)

for part in response.parts:
    if part.inline_data:
        generated_image = part.as_image()
        generated_image.show()
```

#### with uploaded file (Gemini Developer API only)

Download the file in your console:

```sh
!wget -q https://storage.googleapis.com/generativeai-downloads/data/a11.txt
```

Then, in Python:

```python
file = client.files.upload(file='a11.txt')
response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents=['Could you summarize this file?', file]
)
print(response.text)
```

#### How to structure `contents` argument for `generate_content`

The SDK always converts the inputs to the `contents` argument into
`list[types.Content]`.
The following shows some common ways to provide your inputs.

##### Provide a `list[types.Content]`

This is the canonical way to provide contents; the SDK will not do any
conversion.

##### Provide a `types.Content` instance

```python
from google.genai import types

contents = types.Content(
    role='user',
    parts=[types.Part.from_text(text='Why is the sky blue?')]
)
```

The SDK converts this to:

```python
[
    types.Content(
        role='user',
        parts=[types.Part.from_text(text='Why is the sky blue?')]
    )
]
```

##### Provide a string

```python
contents='Why is the sky blue?'
```

The SDK assumes this is a text part and converts it into the following:

```python
[
    types.UserContent(
        parts=[
            types.Part.from_text(text='Why is the sky blue?')
        ]
    )
]
```

Here `types.UserContent` is a subclass of `types.Content` whose
`role` field is fixed to `user`.

##### Provide a list of strings

```python
contents=['Why is the sky blue?', 'Why is the cloud white?']
```

The SDK assumes these are two text parts and converts them into a single
content, like the following:

```python
[
    types.UserContent(
        parts=[
            types.Part.from_text(text='Why is the sky blue?'),
            types.Part.from_text(text='Why is the cloud white?'),
        ]
    )
]
```

##### Provide a function call part

```python
from google.genai import types

contents = types.Part.from_function_call(
    name='get_weather_by_location',
    args={'location': 'Boston'}
)
```

The SDK converts a function call part to a content with a `model` role:

```python
[
    types.ModelContent(
        parts=[
            types.Part.from_function_call(
                name='get_weather_by_location',
                args={'location': 'Boston'}
            )
        ]
    )
]
```

Here `types.ModelContent` is a subclass of `types.Content` whose
`role` field is fixed to
`model`.

##### Provide a list of function call parts

```python
from google.genai import types

contents = [
    types.Part.from_function_call(
        name='get_weather_by_location',
        args={'location': 'Boston'}
    ),
    types.Part.from_function_call(
        name='get_weather_by_location',
        args={'location': 'New York'}
    ),
]
```

The SDK converts a list of function call parts to a content with a `model` role:

```python
[
    types.ModelContent(
        parts=[
            types.Part.from_function_call(
                name='get_weather_by_location',
                args={'location': 'Boston'}
            ),
            types.Part.from_function_call(
                name='get_weather_by_location',
                args={'location': 'New York'}
            )
        ]
    )
]
```

Here `types.ModelContent` is a subclass of `types.Content` whose
`role` field is fixed to `model`.

##### Provide a non function call part

```python
from google.genai import types

contents = types.Part.from_uri(
    file_uri='gs://generativeai-downloads/images/scones.jpg',
    mime_type='image/jpeg',
)
```

The SDK converts all non function call parts into a content with a `user` role:

```python
[
    types.UserContent(parts=[
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
        )
    ])
]
```

##### Provide a list of non function call parts

```python
from google.genai import types

contents = [
    types.Part.from_text(text='What is this image about?'),
    types.Part.from_uri(
        file_uri='gs://generativeai-downloads/images/scones.jpg',
        mime_type='image/jpeg',
    )
]
```

The SDK will convert the list of parts into a content with a `user` role:

```python
[
    types.UserContent(
        parts=[
            types.Part.from_text(text='What is this image about?'),
            types.Part.from_uri(
                file_uri='gs://generativeai-downloads/images/scones.jpg',
                mime_type='image/jpeg',
            )
        ]
    )
]
```

##### Mix types in contents

You can also provide a list of `types.ContentUnion`. The SDK leaves items of
`types.Content` as is, groups consecutive non function call parts into a
single `types.UserContent`, and groups consecutive function call parts into
a single `types.ModelContent`.

If you put a list within a list, the inner list can only contain
`types.PartUnion` items. The SDK will convert the inner list into a single
`types.UserContent`.
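A stdlib-only sketch of these grouping rules, with parts modeled as plain dicts (illustrative only; the real conversion happens inside the SDK and operates on `types.Part` objects):

```python
def to_contents(items):
    """Illustrative grouping: strings become text parts, and consecutive
    parts with the same implied role collapse into one content."""
    contents = []
    for item in items:
        part = {'text': item} if isinstance(item, str) else item
        role = 'model' if 'function_call' in part else 'user'
        if contents and contents[-1]['role'] == role:
            contents[-1]['parts'].append(part)
        else:
            contents.append({'role': role, 'parts': [part]})
    return contents

converted = to_contents([
    'Why is the sky blue?',
    'Why is the cloud white?',
    {'function_call': {'name': 'get_weather_by_location',
                       'args': {'location': 'Boston'}}},
])
# The two strings collapse into one user content; the function call
# part starts a separate model content.
print(len(converted))  # 2
```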
### System Instructions and Other Configs

The output of the model can be influenced by several optional settings
available in `generate_content`'s config parameter. For example, increasing
`max_output_tokens` is essential for longer model responses. To make a model
more deterministic, lower the `temperature` parameter to reduce randomness,
with values near 0 minimizing variability. Capabilities and parameter defaults
for each model are shown in the
[Vertex AI docs](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)
and [Gemini API docs](https://ai.google.dev/gemini-api/docs/models) respectively.

```python
from google.genai import types

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='high',
    config=types.GenerateContentConfig(
        system_instruction='I say high, you say low',
        max_output_tokens=3,
        temperature=0.3,
    ),
)
print(response.text)
```

### List Base Models

To retrieve tuned models, see [list tuned models](#list-tuned-models).

```python
for model in client.models.list():
    print(model)
```

```python
pager = client.models.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
```

#### List Base Models (Asynchronous)

```python
async for job in await client.aio.models.list():
    print(job)
```

```python
async_pager = await client.aio.models.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
```

### Safety Settings

```python
from google.genai import types

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='Say something bad.',
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category='HARM_CATEGORY_HATE_SPEECH',
                threshold='BLOCK_ONLY_HIGH',
            )
        ]
    ),
)
print(response.text)
```

### Function Calling

#### Automatic Python function Support

You can pass a Python function directly as a tool; by default it will be
invoked automatically and its response passed back to the model.

```python
from google.genai import types

def get_current_weather(location: str) -> str:
    """Returns the current weather.

    Args:
      location: The city and state, e.g. San Francisco, CA
    """
    return 'sunny'


response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(tools=[get_current_weather]),
)

print(response.text)
```

#### Disabling automatic function calling

If you pass a Python function as a tool directly but do not want automatic
function calling, you can disable it as follows:

```python
from google.genai import types

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            disable=True
        ),
    ),
)
```

With automatic function calling disabled, you will get a list of function call
parts in the response:

```python
function_calls: Optional[List[types.FunctionCall]] = response.function_calls
```

#### Manually declare and invoke a function for function calling

If you don't want to use the automatic function support, you can manually
declare the function and invoke it.

The following example shows how to declare a function and pass it as a tool.
You will then receive a function call part in the response.

```python
from google.genai import types

function = types.FunctionDeclaration(
    name='get_current_weather',
    description='Get the current weather in a given location',
    parameters_json_schema={
        'type': 'object',
        'properties': {
            'location': {
                'type': 'string',
                'description': 'The city and state, e.g. San Francisco, CA',
            }
        },
        'required': ['location'],
    },
)

tool = types.Tool(function_declarations=[function])

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='What is the weather like in Boston?',
    config=types.GenerateContentConfig(tools=[tool]),
)

print(response.function_calls[0])
```

After you receive the function call part from the model, you can invoke the
function and get the function response, then pass the function response back
to the model. The following example shows how to do it for a simple function
invocation.

```python
from google.genai import types

user_prompt_content = types.Content(
    role='user',
    parts=[types.Part.from_text(text='What is the weather like in Boston?')],
)
function_call_part = response.function_calls[0]
function_call_content = response.candidates[0].content


try:
    function_result = get_current_weather(**function_call_part.args)
    function_response = {'result': function_result}
except (
    Exception
) as e:  # instead of raising the exception, you can let the model handle it
    function_response = {'error': str(e)}


function_response_part = types.Part.from_function_response(
    name=function_call_part.name,
    response=function_response,
)
function_response_content = types.Content(
    role='tool', parts=[function_response_part]
)

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents=[
        user_prompt_content,
        function_call_content,
        function_response_content,
    ],
    config=types.GenerateContentConfig(
        tools=[tool],
    ),
)

print(response.text)
```
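The invoke-and-respond step generalizes to a small dispatch table when you declare several functions. A stdlib-only sketch (not SDK code; the weather function is a stub like the one above) of routing a call by name and capturing errors for the model:

```python
def get_current_weather(location: str) -> str:
    """Stub weather function, standing in for a real implementation."""
    return 'sunny'

# Map declared function names to Python callables.
FUNCTIONS = {'get_current_weather': get_current_weather}

def dispatch(name, args):
    """Invoke the named function; report failures instead of raising,
    so the error can be sent back to the model as a function response."""
    try:
        return {'result': FUNCTIONS[name](**args)}
    except Exception as e:
        return {'error': str(e)}

print(dispatch('get_current_weather', {'location': 'Boston'}))  # {'result': 'sunny'}
# An unknown name or bad arguments comes back as an {'error': ...} payload.
```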
#### Function calling with `ANY` tools config mode

If you configure the function calling mode to `ANY`, the model will always
return function call parts. If you also pass a Python function as a tool, by
default the SDK will perform automatic function calling until the remote calls
exceed the maximum remote calls for automatic function calling (defaults
to 10).

If you'd like to disable automatic function calling in `ANY` mode:

```python
from google.genai import types

def get_current_weather(location: str) -> str:
    """Returns the current weather.

    Args:
        location: The city and state, e.g. San Francisco, CA
    """
    return "sunny"

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            disable=True
        ),
        tool_config=types.ToolConfig(
            function_calling_config=types.FunctionCallingConfig(mode='ANY')
        ),
    ),
)
```

If you'd like to allow `x` turns of automatic function calling, configure the
maximum remote calls to be `x + 1`. For example, for one turn of automatic
function calling:

```python
from google.genai import types

def get_current_weather(location: str) -> str:
    """Returns the current weather.

    Args:
        location: The city and state, e.g. San Francisco, CA
    """
    return "sunny"

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the weather like in Boston?",
    config=types.GenerateContentConfig(
        tools=[get_current_weather],
        automatic_function_calling=types.AutomaticFunctionCallingConfig(
            maximum_remote_calls=2
        ),
        tool_config=types.ToolConfig(
            function_calling_config=types.FunctionCallingConfig(mode='ANY')
        ),
    ),
)
```

#### Model Context Protocol (MCP) support (experimental)

Built-in [MCP](https://modelcontextprotocol.io/introduction) support is an
experimental feature. You can pass a local MCP server as a tool directly.

```python
import os
import asyncio
from datetime import datetime
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from google import genai

client = genai.Client()

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="npx",  # Executable
    args=["-y", "@philschmid/weather-mcp"],  # MCP Server
    env=None,  # Optional environment variables
)

async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Prompt to get the weather for the current day in London.
            prompt = f"What is the weather in London in {datetime.now().strftime('%Y-%m-%d')}?"

            # Initialize the connection between client and server
            await session.initialize()

            # Send request to the model with MCP function declarations
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash",
                contents=prompt,
                config=genai.types.GenerateContentConfig(
                    temperature=0,
                    tools=[session],  # uses the session; the tool is invoked via automatic function calling
                ),
            )
            print(response.text)

# Start the asyncio event loop and run the main function
asyncio.run(run())
```

### JSON Response Schema

However you define your schema, don't duplicate it in your input prompt,
including by giving examples of expected JSON output. If you do, the generated
output might be lower in quality.

#### JSON Schema support

Schemas can be provided as standard JSON schema.

```python
user_profile = {
    'properties': {
        'age': {
            'anyOf': [
                {'maximum': 20, 'minimum': 0, 'type': 'integer'},
                {'type': 'null'},
            ],
            'title': 'Age',
        },
        'username': {
            'description': "User's unique name",
            'title': 'Username',
            'type': 'string',
        },
    },
    'required': ['username', 'age'],
    'title': 'User Schema',
    'type': 'object',
}

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='Give me a random user profile.',
    config={
        'response_mime_type': 'application/json',
        'response_json_schema': user_profile
    },
)
print(response.text)
```

#### Pydantic Model Schema support

Schemas can be provided as Pydantic Models.

```python
from pydantic import BaseModel
from google.genai import types


class CountryInfo(BaseModel):
    name: str
    population: int
    capital: str
    continent: str
    gdp: int
    official_language: str
    total_area_sq_mi: int


response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='Give me information for the United States.',
    config=types.GenerateContentConfig(
        response_mime_type='application/json',
        response_json_schema=CountryInfo.model_json_schema(),
    ),
)
print(response.text)
```
```python
from google.genai import types

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents='Give me information for the United States.',
    config=types.GenerateContentConfig(
        response_mime_type='application/json',
        response_json_schema={
            'required': [
                'name',
                'population',
                'capital',
                'continent',
                'gdp',
                'official_language',
                'total_area_sq_mi',
            ],
            'properties': {
                'name': {'type': 'STRING'},
                'population': {'type': 'INTEGER'},
                'capital': {'type': 'STRING'},
                'continent': {'type': 'STRING'},
                'gdp': {'type': 'INTEGER'},
                'official_language': {'type': 'STRING'},
                'total_area_sq_mi': {'type': 'INTEGER'},
            },
            'type': 'OBJECT',
        },
    ),
)
print(response.text)
```

### Generate Content (Synchronous Streaming)

Generate content in a streaming format so that the model output streams back
to you, rather than being returned in one chunk.

#### Streaming for text content

```python
for chunk in client.models.generate_content_stream(
    model='gemini-2.5-flash', contents='Tell me a story in 300 words.'
):
    print(chunk.text, end='')
```

#### Streaming for image content

If your image is stored in [Google Cloud Storage](https://cloud.google.com/storage),
you can use the `from_uri` class method to create a `Part` object.

```python
from google.genai import types

for chunk in client.models.generate_content_stream(
    model='gemini-2.5-flash',
    contents=[
        'What is this image about?',
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
        ),
    ],
):
    print(chunk.text, end='')
```
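Each streamed chunk carries a `text` field that the loops above print incrementally; to keep the full output, concatenate the chunks. A sketch with a stand-in iterator (not real SDK objects):

```python
class Chunk:
    """Stand-in for a streamed response chunk; real chunks come from
    generate_content_stream."""
    def __init__(self, text):
        self.text = text

def fake_stream():
    # Stand-in for the generator returned by generate_content_stream.
    for piece in ['Once ', 'upon ', 'a time.']:
        yield Chunk(piece)

# Accumulate chunk.text instead of (or in addition to) printing it.
story = ''.join(chunk.text for chunk in fake_stream())
print(story)  # Once upon a time.
```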
image is stored in your local file system, you can read it in as bytes\ndata and use the `from_bytes` class method to create a `Part` object.\n\n```python\nfrom google.genai import types\n\nYOUR_IMAGE_PATH = 'your_image_path'\nYOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'\nwith open(YOUR_IMAGE_PATH, 'rb') as f:\n    image_bytes = f.read()\n\nfor chunk in client.models.generate_content_stream(\n    model='gemini-2.5-flash',\n    contents=[\n        'What is this image about?',\n        types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),\n    ],\n):\n    print(chunk.text, end='')\n```\n\n### Generate Content (Asynchronous Non-Streaming)\n\n`client.aio` exposes all the analogous [`async` methods](https://docs.python.org/3/library/asyncio.html)\nthat are available on `client`. This applies to all modules.\n\nFor example, `client.aio.models.generate_content` is the `async` version\nof `client.models.generate_content`.\n\n```python\nresponse = await client.aio.models.generate_content(\n    model='gemini-2.5-flash', contents='Tell me a story in 300 words.'\n)\n\nprint(response.text)\n```\n\n### Generate Content (Asynchronous Streaming)\n\n```python\nasync for chunk in await client.aio.models.generate_content_stream(\n    model='gemini-2.5-flash', contents='Tell me a story in 300 words.'\n):\n    print(chunk.text, end='')\n```\n\n### Count Tokens and Compute Tokens\n\n#### Count Tokens\n\n```python\nresponse = client.models.count_tokens(\n    model='gemini-2.5-flash',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n#### Compute Tokens\n\nCompute tokens is only supported in Vertex AI.\n\n```python\nresponse = client.models.compute_tokens(\n    model='gemini-2.5-flash',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n#### Async\n\n```python\nresponse = await client.aio.models.count_tokens(\n    model='gemini-2.5-flash',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n#### Local Count 
Tokens\n\n```python\ntokenizer = genai.LocalTokenizer(model_name='gemini-2.5-flash')\nresult = tokenizer.count_tokens(\"What is your name?\")\n```\n\n#### Local Compute Tokens\n\n```python\ntokenizer = genai.LocalTokenizer(model_name='gemini-2.5-flash')\nresult = tokenizer.compute_tokens(\"What is your name?\")\n```\n\n### Embed Content\n\n```python\nresponse = client.models.embed_content(\n    model='gemini-embedding-001',\n    contents='why is the sky blue?',\n)\nprint(response)\n```\n\n```python\nfrom google.genai import types\n\nresponse = client.models.embed_content(\n    model='gemini-embedding-001',\n    contents=['why is the sky blue?', 'What is your age?'],\n    config=types.EmbedContentConfig(output_dimensionality=10),\n)\n\nprint(response)\n```\n\n### Imagen\n\n#### Generate Images\n\n```python\nfrom google.genai import types\n\nresponse1 = client.models.generate_images(\n    model='imagen-4.0-generate-001',\n    prompt='An umbrella in the foreground, and a rainy night sky in the background',\n    config=types.GenerateImagesConfig(\n        number_of_images=1,\n        include_rai_reason=True,\n        output_mime_type='image/jpeg',\n    ),\n)\nresponse1.generated_images[0].image.show()\n```\n\n#### Upscale Image\n\nUpscale image is only supported in Vertex AI.\n\n```python\nfrom google.genai import types\n\nresponse2 = client.models.upscale_image(\n    model='imagen-4.0-upscale-preview',\n    image=response1.generated_images[0].image,\n    upscale_factor='x2',\n    config=types.UpscaleImageConfig(\n        include_rai_reason=True,\n        output_mime_type='image/jpeg',\n    ),\n)\nresponse2.generated_images[0].image.show()\n```\n\n#### Edit Image\n\nEdit image uses a separate model from generate and upscale.\n\nEdit image is only supported in Vertex AI.\n\n```python\n# Edit the generated image from above\nfrom google.genai import types\nfrom google.genai.types import RawReferenceImage, MaskReferenceImage\n\nraw_ref_image = RawReferenceImage(\n    
reference_id=1,\n    reference_image=response1.generated_images[0].image,\n)\n\n# Model computes a mask of the background\nmask_ref_image = MaskReferenceImage(\n    reference_id=2,\n    config=types.MaskReferenceConfig(\n        mask_mode='MASK_MODE_BACKGROUND',\n        mask_dilation=0,\n    ),\n)\n\nresponse3 = client.models.edit_image(\n    model='imagen-3.0-capability-001',\n    prompt='Sunlight and clear sky',\n    reference_images=[raw_ref_image, mask_ref_image],\n    config=types.EditImageConfig(\n        edit_mode='EDIT_MODE_INPAINT_INSERTION',\n        number_of_images=1,\n        include_rai_reason=True,\n        output_mime_type='image/jpeg',\n    ),\n)\nresponse3.generated_images[0].image.show()\n```\n\n### Veo\n\nSupport for generating videos is in public preview.\n\n#### Generate Videos (Text to Video)\n\n```python\nimport time\n\nfrom google.genai import types\n\n# Create operation\noperation = client.models.generate_videos(\n    model='veo-3.1-generate-preview',\n    prompt='A neon hologram of a cat driving at top speed',\n    config=types.GenerateVideosConfig(\n        number_of_videos=1,\n        duration_seconds=5,\n        enhance_prompt=True,\n    ),\n)\n\n# Poll operation\nwhile not operation.done:\n    time.sleep(20)\n    operation = client.operations.get(operation)\n\nvideo = operation.response.generated_videos[0].video\nvideo.show()\n```\n\n#### Generate Videos (Image to Video)\n\n```python\nimport time\n\nfrom google.genai import types\n\n# Read local image (uses mimetypes.guess_type to infer mime type)\nimage = types.Image.from_file(\"local/path/file.png\")\n\n# Create operation\noperation = client.models.generate_videos(\n    model='veo-3.1-generate-preview',\n    # Prompt is optional if image is provided\n    prompt='Night sky',\n    image=image,\n    config=types.GenerateVideosConfig(\n        number_of_videos=1,\n        duration_seconds=5,\n        enhance_prompt=True,\n        # Can also pass an Image into last_frame for frame interpolation\n    
),\n)\n\n# Poll operation\nwhile not operation.done:\n    time.sleep(20)\n    operation = client.operations.get(operation)\n\nvideo = operation.response.generated_videos[0].video\nvideo.show()\n```\n\n#### Generate Videos (Video to Video)\n\nCurrently, only the Gemini Developer API supports video extension on Veo 3.1 for\npreviously generated videos; Vertex AI supports video extension on Veo 2.0.\n\n```python\nimport time\n\nfrom google.genai import types\n\n# Create operation\noperation = client.models.generate_videos(\n    model='veo-3.1-generate-preview',\n    # Prompt is optional if Video is provided\n    prompt='Night sky',\n    # Input video must be in GCS for Vertex or a URI for Gemini\n    video=types.Video(\n        uri=\"gs://bucket-name/inputs/videos/cat_driving.mp4\",\n    ),\n    config=types.GenerateVideosConfig(\n        number_of_videos=1,\n        duration_seconds=5,\n        enhance_prompt=True,\n    ),\n)\n\n# Poll operation\nwhile not operation.done:\n    time.sleep(20)\n    operation = client.operations.get(operation)\n\nvideo = operation.response.generated_videos[0].video\nvideo.show()\n```\n\n## Chats\n\nCreate a chat session to start a multi-turn conversation with the model. Then,\ncall the `chat.send_message` method multiple times within the same chat session so\nthat the model can reflect on its previous responses (i.e., engage in an ongoing\nconversation). 
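Conceptually, a chat session keeps the accumulated turn history and resends it with every new message. The sketch below is illustrative only (plain Python, not the SDK's internals); `model_fn` is a hypothetical stand-in for the backend call.

```python
# Conceptual sketch only -- not the SDK's internal implementation.
# A chat session records each user/model turn and replays the full
# history with every new message, which is what lets the model
# "remember" earlier turns.
class ChatSketch:
    def __init__(self):
        self.history = []

    def send_message(self, text, model_fn):
        # model_fn stands in for a call to the backend model; it
        # receives the entire history, not just the latest message.
        self.history.append({'role': 'user', 'text': text})
        reply = model_fn(self.history)
        self.history.append({'role': 'model', 'text': reply})
        return reply

chat = ChatSketch()
echo = lambda history: f'I have seen {len(history)} message(s) so far.'
print(chat.send_message('tell me a story', echo))
print(chat.send_message('summarize the story', echo))
```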
See the 'Create a client' section above to initialize a client.\n\n### Send Message (Synchronous Non-Streaming)\n\n```python\nchat = client.chats.create(model='gemini-2.5-flash')\nresponse = chat.send_message('tell me a story')\nprint(response.text)\nresponse = chat.send_message('summarize the story you told me in 1 sentence')\nprint(response.text)\n```\n\n### Send Message (Synchronous Streaming)\n\n```python\nchat = client.chats.create(model='gemini-2.5-flash')\nfor chunk in chat.send_message_stream('tell me a story'):\n    print(chunk.text)\n```\n\n### Send Message (Asynchronous Non-Streaming)\n\n```python\nchat = client.aio.chats.create(model='gemini-2.5-flash')\nresponse = await chat.send_message('tell me a story')\nprint(response.text)\n```\n\n### Send Message (Asynchronous Streaming)\n\n```python\nchat = client.aio.chats.create(model='gemini-2.5-flash')\nasync for chunk in await chat.send_message_stream('tell me a story'):\n    print(chunk.text)\n```\n\n## Files\n\nFiles are only supported in the Gemini Developer API. See the 'Create a client'\nsection above to initialize a client.\n\n```sh\n!gcloud storage cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .\n!gcloud storage cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .\n```\n\n### Upload\n\n```python\nfile1 = client.files.upload(file='2312.11805v3.pdf')\nfile2 = client.files.upload(file='2403.05530.pdf')\n\nprint(file1)\nprint(file2)\n```\n\n### Get\n\n```python\nfile1 = client.files.upload(file='2312.11805v3.pdf')\nfile_info = client.files.get(name=file1.name)\n```\n\n### Delete\n\n```python\nfile3 = client.files.upload(file='2312.11805v3.pdf')\n\nclient.files.delete(name=file3.name)\n```\n\n## Caches\n\n`client.caches` contains the control plane APIs for cached content. 
See the\n'Create a client' section above to initialize a client.\n\n### Create\n\n```python\nfrom google.genai import types\n\nif client.vertexai:\n    file_uris = [\n        'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',\n        'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',\n    ]\nelse:\n    file_uris = [file1.uri, file2.uri]\n\ncached_content = client.caches.create(\n    model='gemini-2.5-flash',\n    config=types.CreateCachedContentConfig(\n        contents=[\n            types.Content(\n                role='user',\n                parts=[\n                    types.Part.from_uri(\n                        file_uri=file_uris[0], mime_type='application/pdf'\n                    ),\n                    types.Part.from_uri(\n                        file_uri=file_uris[1],\n                        mime_type='application/pdf',\n                    ),\n                ],\n            )\n        ],\n        system_instruction='What is the sum of the two pdfs?',\n        display_name='test cache',\n        ttl='3600s',\n    ),\n)\n```\n\n### Get\n\n```python\ncached_content = client.caches.get(name=cached_content.name)\n```\n\n### Generate Content with Caches\n\n```python\nfrom google.genai import types\n\nresponse = client.models.generate_content(\n    model='gemini-2.5-flash',\n    contents='Summarize the pdfs',\n    config=types.GenerateContentConfig(\n        cached_content=cached_content.name,\n    ),\n)\nprint(response.text)\n```\n\n## Interactions (Preview)\n\n> **Warning:** The Interactions API is in **Beta**. This is a preview of an experimental feature. Features and schemas are subject to **breaking changes**.\n\nThe Interactions API is a unified interface for interacting with Gemini models and agents. 
It simplifies state management, tool orchestration, and long-running tasks.\n\nSee the [documentation site](https://ai.google.dev/gemini-api/docs/interactions) for more details.\n\n### Basic Interaction\n\n```python\ninteraction = client.interactions.create(\n    model='gemini-2.5-flash',\n    input='Tell me a short joke about programming.'\n)\nprint(interaction.outputs[-1].text)\n\n```\n\n### Stateful Conversation\n\nThe Interactions API supports server-side state management. You can continue a conversation by referencing the `previous_interaction_id`.\n\n```python\n# 1. First turn\ninteraction1 = client.interactions.create(\n    model='gemini-2.5-flash',\n    input='Hi, my name is Amir.'\n)\nprint(f\"Model: {interaction1.outputs[-1].text}\")\n\n# 2. Second turn (passing previous_interaction_id)\ninteraction2 = client.interactions.create(\n    model='gemini-2.5-flash',\n    input='What is my name?',\n    previous_interaction_id=interaction1.id\n)\nprint(f\"Model: {interaction2.outputs[-1].text}\")\n\n```\n\n### Agents (Deep Research)\n\nYou can use specialized agents like `deep-research-pro-preview-12-2025` for complex tasks.\n\n```python\nimport time\n\n# 1. Start the Deep Research Agent\ninitial_interaction = client.interactions.create(\n    input='Research the history of the Google TPUs with a focus on 2025 and 2026.',\n    agent='deep-research-pro-preview-12-2025',\n    background=True\n)\nprint(f\"Research started. Interaction ID: {initial_interaction.id}\")\n\n# 2. 
Poll for results\nwhile True:\n    interaction = client.interactions.get(id=initial_interaction.id)\n    print(f\"Status: {interaction.status}\")\n\n    if interaction.status == \"completed\":\n        print(\"\\nFinal Report:\\n\", interaction.outputs[-1].text)\n        break\n    elif interaction.status in [\"failed\", \"cancelled\"]:\n        print(f\"Failed with status: {interaction.status}\")\n        break\n\n    time.sleep(10)\n\n```\n\n### Multimodal Input\n\nYou can provide multimodal data (text, images, audio, etc.) in the input list.\n\n```python\nimport base64\n\n# Assuming you have an image encoded as a base64 string\n# base64_image = ...\n\ninteraction = client.interactions.create(\n    model='gemini-2.5-flash',\n    input=[\n        {'type': 'text', 'text': 'Describe the image.'},\n        {'type': 'image', 'data': base64_image, 'mime_type': 'image/png'}\n    ]\n)\nprint(interaction.outputs[-1].text)\n\n```\n\n### Function Calling\n\nYou can define custom functions for the model to use. The Interactions API handles the tool selection, and you provide the execution result back to the model.\n\n```python\n# 1. Define the tool\ndef get_weather(location: str):\n    \"\"\"Gets the weather for a given location.\"\"\"\n    return f\"The weather in {location} is sunny.\"\n\nweather_tool = {\n    'type': 'function',\n    'name': 'get_weather',\n    'description': 'Gets the weather for a given location.',\n    'parameters': {\n        'type': 'object',\n        'properties': {\n            'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}\n        },\n        'required': ['location']\n    }\n}\n\n# 2. Send the request with tools\ninteraction = client.interactions.create(\n    model='gemini-2.5-flash',\n    input='What is the weather in Mountain View, CA?',\n    tools=[weather_tool]\n)\n\n# 3. 
Handle the tool call\nfor output in interaction.outputs:\n    if output.type == 'function_call':\n        print(f\"Tool Call: {output.name}({output.arguments})\")\n\n        # Execute your actual function here\n        result = get_weather(**output.arguments)\n\n        # Send result back to the model\n        interaction = client.interactions.create(\n            model='gemini-2.5-flash',\n            previous_interaction_id=interaction.id,\n            input=[{\n                'type': 'function_result',\n                'name': output.name,\n                'call_id': output.id,\n                'result': result\n            }]\n        )\n        print(f\"Response: {interaction.outputs[-1].text}\")\n\n```\n\n### Built-in Tools\nYou can also use Google's built-in tools, such as **Google Search** or **Code Execution**.\n\n#### Grounding with Google Search\n\n```python\ninteraction = client.interactions.create(\n    model='gemini-2.5-flash',\n    input='Who won the last Super Bowl?',\n    tools=[{'type': 'google_search'}]\n)\n\n# Find the text output (not the GoogleSearchResultContent)\ntext_output = next((o for o in interaction.outputs if o.type == 'text'), None)\nif text_output:\n    print(text_output.text)\n\n```\n\n#### Code Execution\n\n```python\ninteraction = client.interactions.create(\n    model='gemini-2.5-flash',\n    input='Calculate the 50th Fibonacci number.',\n    tools=[{'type': 'code_execution'}]\n)\nprint(interaction.outputs[-1].text)\n\n```\n\n### Multimodal Output\n\nThe Interactions API can generate multimodal outputs, such as images. 
You must specify the `response_modalities`.\n\n```python\nimport base64\n\ninteraction = client.interactions.create(\n    model='gemini-3-pro-image-preview',\n    input='Generate an image of a futuristic city.',\n    response_modalities=['IMAGE']\n)\n\nfor output in interaction.outputs:\n    if output.type == 'image':\n        print(f\"Generated image with mime_type: {output.mime_type}\")\n        # Save the image\n        with open(\"generated_city.png\", \"wb\") as f:\n            f.write(base64.b64decode(output.data))\n\n```\n\n## Tunings\n\n`client.tunings` contains tuning job APIs and supports supervised fine\ntuning through `tune`. Only supported in Vertex AI. See the 'Create a client'\nsection above to initialize a client.\n\n### Tune\n\n-   Vertex AI supports tuning from GCS source or from a [Vertex AI Multimodal Dataset](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/multimodal/datasets)\n\n```python\nfrom google.genai import types\n\nmodel = 'gemini-2.5-flash'\ntraining_dataset = types.TuningDataset(\n    # or gcs_uri=my_vertex_multimodal_dataset\n    gcs_uri='gs://your-gcs-bucket/your-tuning-data.jsonl',\n)\n```\n\n```python\nfrom google.genai import types\n\ntuning_job = client.tunings.tune(\n    base_model=model,\n    training_dataset=training_dataset,\n    config=types.CreateTuningJobConfig(\n        epoch_count=1, tuned_model_display_name='test_dataset_examples model'\n    ),\n)\nprint(tuning_job)\n```\n\n### Get Tuning Job\n\n```python\ntuning_job = client.tunings.get(name=tuning_job.name)\nprint(tuning_job)\n```\n\n```python\nimport time\n\ncompleted_states = set(\n    [\n        'JOB_STATE_SUCCEEDED',\n        'JOB_STATE_FAILED',\n        'JOB_STATE_CANCELLED',\n    ]\n)\n\nwhile tuning_job.state not in completed_states:\n    print(tuning_job.state)\n    tuning_job = client.tunings.get(name=tuning_job.name)\n    time.sleep(10)\n```\n\n#### Use Tuned Model\n\n```python\nresponse = client.models.generate_content(\n    
model=tuning_job.tuned_model.endpoint,\n    contents='why is the sky blue?',\n)\n\nprint(response.text)\n```\n\n### Get Tuned Model\n\n```python\ntuned_model = client.models.get(model=tuning_job.tuned_model.model)\nprint(tuned_model)\n```\n\n### List Tuned Models\n\nTo retrieve base models, see [list base models](#list-base-models).\n\n```python\nfor model in client.models.list(config={'page_size': 10, 'query_base': False}):\n    print(model)\n```\n\n```python\npager = client.models.list(config={'page_size': 10, 'query_base': False})\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### Async\n\n```python\nasync for job in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):\n    print(job)\n```\n\n```python\nasync_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n### Update Tuned Model\n\n```python\nfrom google.genai import types\n\nmodel = pager[0]\n\nmodel = client.models.update(\n    model=model.name,\n    config=types.UpdateModelConfig(\n        display_name='my tuned model', description='my tuned model description'\n    ),\n)\n\nprint(model)\n```\n\n\n### List Tuning Jobs\n\n```python\nfor job in client.tunings.list(config={'page_size': 10}):\n    print(job)\n```\n\n```python\npager = client.tunings.list(config={'page_size': 10})\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### Async\n\n```python\nasync for job in await client.aio.tunings.list(config={'page_size': 10}):\n    print(job)\n```\n\n```python\nasync_pager = await client.aio.tunings.list(config={'page_size': 10})\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n## Batch Prediction\n\nOnly supported in Vertex AI. 
See the 'Create a client' section above to\ninitialize a client.\n\n### Create\n\nVertex AI:\n\n```python\n# Specify model and source file only; destination and job display name will be auto-populated\njob = client.batches.create(\n    model='gemini-2.5-flash',\n    src='bq://my-project.my-dataset.my-table',  # or \"gs://path/to/input/data\"\n)\n\nprint(job)\n```\n\nGemini Developer API:\n\n```python\n# Create a batch job with inlined requests\nbatch_job = client.batches.create(\n    model=\"gemini-2.5-flash\",\n    src=[{\n        \"contents\": [{\n            \"parts\": [{\n                \"text\": \"Hello!\",\n            }],\n            \"role\": \"user\",\n        }],\n        \"config\": {\"response_modalities\": [\"text\"]},\n    }],\n)\n\nbatch_job\n```\n\nTo create a batch job from a file, you first need to upload a JSON file.\nFor example, `myrequests.json`:\n\n```json\n{\"key\":\"request_1\", \"request\": {\"contents\": [{\"parts\": [{\"text\":\n \"Explain how AI works in a few words\"}]}], \"generation_config\": {\"response_modalities\": [\"TEXT\"]}}}\n{\"key\":\"request_2\", \"request\": {\"contents\": [{\"parts\": [{\"text\": \"Explain how Crypto works in a few words\"}]}]}}\n```\n\nThen upload the file.\n\n```python\nfrom google.genai import types\n\n# Upload the file\nfile = client.files.upload(\n    file='myrequests.json',\n    config=types.UploadFileConfig(display_name='test-json')\n)\n\n# Create a batch job with the uploaded file's resource name\nbatch_job = client.batches.create(\n    model=\"gemini-2.5-flash\",\n    src=file.name,\n)\n```\n\n```python\n# Get a job by name\njob = client.batches.get(name=job.name)\n\njob.state\n```\n\n```python\nimport time\n\ncompleted_states = set(\n    [\n        'JOB_STATE_SUCCEEDED',\n        'JOB_STATE_FAILED',\n        'JOB_STATE_CANCELLED',\n        'JOB_STATE_PAUSED',\n    ]\n)\n\nwhile job.state not in completed_states:\n    print(job.state)\n    job = client.batches.get(name=job.name)\n    time.sleep(30)\n\njob\n```\n\n### List\n\n```python\nfor job in 
client.batches.list(config=types.ListBatchJobsConfig(page_size=10)):\n    print(job)\n```\n\n```python\npager = client.batches.list(config=types.ListBatchJobsConfig(page_size=10))\nprint(pager.page_size)\nprint(pager[0])\npager.next_page()\nprint(pager[0])\n```\n\n#### Async\n\n```python\nasync for job in await client.aio.batches.list(\n    config=types.ListBatchJobsConfig(page_size=10)\n):\n    print(job)\n```\n\n```python\nasync_pager = await client.aio.batches.list(\n    config=types.ListBatchJobsConfig(page_size=10)\n)\nprint(async_pager.page_size)\nprint(async_pager[0])\nawait async_pager.next_page()\nprint(async_pager[0])\n```\n\n### Delete\n\n```python\n# Delete the job resource\ndelete_job = client.batches.delete(name=job.name)\n\ndelete_job\n```\n\n## Error Handling\n\nTo handle errors raised by the model service, the SDK provides this [`APIError`](https://github.com/googleapis/python-genai/blob/main/google/genai/errors.py) class.\n\n```python\nfrom google.genai import errors\n\ntry:\n    client.models.generate_content(\n        model=\"invalid-model-name\",\n        contents=\"What is your name?\",\n    )\nexcept errors.APIError as e:\n    print(e.code) # 404\n    print(e.message)\n```\n\n## Extra Request Body\n\nThe `extra_body` field in `HttpOptions` accepts a dictionary of additional JSON\nproperties to include in the request body. This can be used to access new or\nexperimental backend features that are not yet formally supported in the SDK.\nThe structure of the dictionary must match the backend API's request structure.\n\n- Vertex AI backend API docs: https://cloud.google.com/vertex-ai/docs/reference/rest\n- Gemini API backend API docs: https://ai.google.dev/api/rest\n\n```python\nresponse = client.models.generate_content(\n    model=\"gemini-2.5-pro\",\n    contents=\"What is the weather in Boston? 
and how about Sunnyvale?\",\n    config=types.GenerateContentConfig(\n        tools=[get_current_weather],\n        http_options=types.HttpOptions(extra_body={'tool_config': {'function_calling_config': {'mode': 'COMPOSITIONAL'}}}),\n    ),\n)\n```\n
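Since `extra_body` keys must mirror the backend request structure, it helps to picture them as overlaying the JSON body the SDK builds. The deep merge below is an exposition aid under that assumption, not the SDK's actual serialization code:

```python
# Illustrative only: a recursive merge showing how extra_body entries
# conceptually overlay the request body the SDK builds. This is an
# exposition aid, not the SDK's actual request-serialization logic.
def deep_merge(base: dict, extra: dict) -> dict:
    merged = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Nested dicts merge key-by-key instead of replacing wholesale.
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

body = {'contents': [{'role': 'user', 'parts': [{'text': 'hi'}]}]}
extra = {'tool_config': {'function_calling_config': {'mode': 'COMPOSITIONAL'}}}
print(deep_merge(body, extra))
```

Because the merge is structural, a misspelled or misplaced key in `extra_body` silently lands in the request body, so double-check it against the backend REST docs linked above.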