{"id":13472909,"url":"https://github.com/google-gemini/generative-ai-python","last_synced_at":"2025-03-21T15:08:50.966Z","repository":{"id":161665536,"uuid":"635974055","full_name":"google-gemini/generative-ai-python","owner":"google-gemini","description":"The official Python library for the Google Gemini API","archived":false,"fork":false,"pushed_at":"2024-10-24T18:05:03.000Z","size":45195,"stargazers_count":1532,"open_issues_count":98,"forks_count":306,"subscribers_count":32,"default_branch":"main","last_synced_at":"2024-10-29T15:36:26.434Z","etag":null,"topics":["gemini","gemini-api","google","python"],"latest_commit_sha":null,"homepage":"https://pypi.org/project/google-generativeai/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-gemini.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-03T21:27:14.000Z","updated_at":"2024-10-29T14:49:47.000Z","dependencies_parsed_at":"2024-02-05T05:29:51.227Z","dependency_job_id":"77ec7f3b-203b-42d1-99fa-d3b7ddeba44b","html_url":"https://github.com/google-gemini/generative-ai-python","commit_stats":null,"previous_names":["google-gemini/generative-ai-python","google/generative-ai-python"],"tags_count":27,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-gemini%2Fgenerative-ai-python","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-gemini%2Fgenerative-ai-python/tags","releases_url":"https://repos.ecosyste.ms/api/v1
/hosts/GitHub/repositories/google-gemini%2Fgenerative-ai-python/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-gemini%2Fgenerative-ai-python/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-gemini","download_url":"https://codeload.github.com/google-gemini/generative-ai-python/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244811540,"owners_count":20514316,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["gemini","gemini-api","google","python"],"created_at":"2024-07-31T16:00:58.954Z","updated_at":"2025-03-21T15:08:50.920Z","avatar_url":"https://github.com/google-gemini.png","language":"Python","readme":"# Google AI Python SDK for the Gemini API\n\n[![PyPI version](https://badge.fury.io/py/google-generativeai.svg)](https://badge.fury.io/py/google-generativeai)\n![Python support](https://img.shields.io/pypi/pyversions/google-generativeai)\n![PyPI - Downloads](https://img.shields.io/pypi/dd/google-generativeai)\n\n\u003e [!IMPORTANT]\n\u003e From Gemini 2.0 onwards this SDK will no longer be\ndeveloping new features. Any new code should be written using the new SDK, `google-genai` ([github](https://github.com/googleapis/python-genai),\n[pypi](https://pypi.org/project/google-genai/)). 
See the migration guide below to upgrade to the new SDK.\n\n# Upgrade to the Google GenAI SDK for Python\n\nWith Gemini 2 we are offering a [new SDK](https://github.com/googleapis/python-genai)\n(\u003ccode\u003e[google-genai](https://pypi.org/project/google-genai/)\u003c/code\u003e,\n\u003ccode\u003ev1.0\u003c/code\u003e). The updated SDK is fully compatible with all Gemini API\nmodels and features, including recent additions like the\n[live API](https://aistudio.google.com/live) (audio + video streaming),\nimproved tool usage (\n[code execution](https://ai.google.dev/gemini-api/docs/code-execution?lang=python),\n[function calling](https://ai.google.dev/gemini-api/docs/function-calling/tutorial?lang=python), and integrated\n[Google search grounding](https://ai.google.dev/gemini-api/docs/grounding?lang=python)),\nand media generation ([Imagen](https://ai.google.dev/gemini-api/docs/imagen)).\nThis SDK allows you to connect to the Gemini API through either\n[Google AI Studio](https://aistudio.google.com/prompts/new_chat?model=gemini-2.0-flash-exp) or\n[Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/gemini-v2).\n\nThe \u003ccode\u003e[google-generativeai](https://pypi.org/project/google-generativeai)\u003c/code\u003e\npackage will continue to support the original Gemini models.\nIt \u003cem\u003ecan\u003c/em\u003e also be used with Gemini 2 models, but only with a limited feature\nset.\n
All new features will be developed in the new Google GenAI SDK.\n\n\u003c!-- \n[START update]\n# With Gemini-2 we're launching a new SDK, see this doc for details.\n# https://ai.google.dev/gemini-api/docs/migrate\n[END update]\n --\u003e\n\n\u003ctable align=\"left\"\u003e\n  \u003ctd\u003e\n    \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/cookbook/blob/main/quickstarts/Get_started.ipynb\"\u003e\n        \u003cimg src=\"https://ai.google.dev/site-assets/images/docs/colab_logo_32px.png\" /\u003e\n        Try the new SDK in Google Colab\n    \u003c/a\u003e\n  \u003c/td\u003e\n\u003c/table\u003e\n\u003cbr\u003e\u003cbr\u003e\n\n## Install the SDK\n\n**Before**\n\n```\npip install -U -q \"google-generativeai\"\n```\n\n**After**\n\n```\npip install -U -q \"google-genai\"\n```\n\n## Authenticate\n\nAuthenticate with an API key. You can\n[create](https://aistudio.google.com/app/apikey)\nyour API key using Google AI Studio.\n\n\nThe old SDK implicitly handled the API client object behind the scenes. In the\nnew SDK, you create the API client and use it to call the API.\n\nRemember, in either case the SDK will pick\nup your API key from the `GOOGLE_API_KEY` environment variable if you don't pass\none to `configure`/`Client`.\n\n\u003cpre class=\"devsite-terminal\"\u003e\u003ccode\u003eexport GOOGLE_API_KEY=...\u003c/code\u003e\u003c/pre\u003e\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\ngenai.configure(api_key=...)\n```\n\n**After**\n\n```python\nfrom google import genai\n\nclient = genai.Client(api_key=...)\n```\n\n## Generate content\n\nThe new SDK provides access to all the API methods through the `Client` object.\nExcept for a few stateful special cases (`chat`, live-api `session`s), these are all\nstateless functions.\n
For utility and uniformity, returned objects are `pydantic`\nclasses.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = model.generate_content(\n    'Tell me a story in 300 words'\n)\nprint(response.text)\n```\n\n**After**\n\n```python\nfrom google import genai\nclient = genai.Client()\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash',\n    contents='Tell me a story in 300 words.'\n)\nprint(response.text)\n\nprint(response.model_dump_json(\n    exclude_none=True, indent=4))\n```\n\n\nMany of the same convenience features exist in the new SDK. For example,\n`PIL.Image` objects are automatically converted:\n\n**Before**\n\n```python\nfrom PIL import Image\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = model.generate_content([\n    'Tell me a story based on this image',\n    Image.open(image_path)\n])\nprint(response.text)\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom PIL import Image\n\nclient = genai.Client()\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash',\n    contents=[\n        'Tell me a story based on this image',\n        Image.open(image_path)\n    ]\n)\nprint(response.text)\n```\n\n\n### Streaming\n\nStreaming methods are separate functions named with a `_stream` suffix.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = model.generate_content(\n    \"Write a cute story about cats.\",\n    stream=True)\nfor chunk in response:\n    print(chunk.text)\n```\n\n**After**\n\n```python\nfrom google import genai\nclient = genai.Client()\n\nfor chunk in client.models.generate_content_stream(\n  model='gemini-2.0-flash',\n  contents='Write a cute story about cats.'\n):\n    print(chunk.text)\n```\n\n\n## Optional arguments\n\nFor all methods in the new SDK, the required arguments are provided as keyword\narguments.\n
All optional inputs are provided in the `config` argument.\n\nThe `config` can always be passed as a dictionary or, for better autocomplete and\nstricter typing, each method has a `Config` class in the `google.genai.types`\nmodule. For utility and uniformity, everything in the `types` module is defined\nas a `pydantic` class.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel(\n    'gemini-1.5-flash',\n    system_instruction='you are a storyteller for kids under 5 years old',\n    generation_config=genai.GenerationConfig(\n        max_output_tokens=400,\n        top_k=2,\n        top_p=0.5,\n        temperature=0.5,\n        response_mime_type='application/json',\n        stop_sequences=['\\n'],\n    )\n)\nresponse = model.generate_content('tell me a story in 100 words')\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash',\n    contents='Tell me a story in 100 words.',\n    config=types.GenerateContentConfig(\n        system_instruction='you are a storyteller for kids under 5 years old',\n        max_output_tokens=400,\n        top_k=2,\n        top_p=0.5,\n        temperature=0.5,\n        response_mime_type='application/json',\n        stop_sequences=['\\n'],\n        seed=42,\n    ),\n)\n```\n\n\n### Example: Safety settings\n\nGenerate a response with safety settings:\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = model.generate_content(\n    'say something bad',\n    safety_settings={\n        'HATE': 'BLOCK_ONLY_HIGH',\n        'HARASSMENT': 'BLOCK_ONLY_HIGH',\n    }\n)\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\nresponse = client.models.generate_content(\n  model='gemini-2.0-flash',\n  contents='say something bad',\n  
config=types.GenerateContentConfig(\n      safety_settings=[\n          types.SafetySetting(\n              category='HARM_CATEGORY_HATE_SPEECH',\n              threshold='BLOCK_ONLY_HIGH'\n          ),\n      ]\n  ),\n)\n```\n\n\n## Async\n\nTo use the new SDK with `asyncio`, there is a separate `async` implementation of\nevery method under `client.aio`.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = await model.generate_content_async(\n    'tell me a story in 100 words'\n)\n```\n\n**After**\n\n```python\nfrom google import genai\nclient = genai.Client()\n\nresponse = await client.aio.models.generate_content(\n    model='gemini-2.0-flash',\n    contents='Tell me a story in 100 words.'\n)\n```\n\n## Chat\n\nStart a chat and send a message to the model:\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nchat = model.start_chat()\n\nresponse = chat.send_message(\n    \"Tell me a story in 100 words\")\nresponse = chat.send_message(\n    \"What happened after that?\")\n```\n\n**After**\n\n```python\nfrom google import genai\nclient = genai.Client()\n\nchat = client.chats.create(model='gemini-2.0-flash')\n\nresponse = chat.send_message(\n    message='Tell me a story in 100 words')\nresponse = chat.send_message(\n    message='What happened after that?')\n```\n\n\n## Function calling\n\nIn the new SDK, automatic function calling is the default. Here we disable it.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\ndef get_current_weather(location: str) -\u003e str:\n    \"\"\"Get the current weather in a given location.\n\n    Args:\n        location: required, The city and state, e.g. San Francisco, CA\n    \"\"\"\n    print(f'Called with: {location=}')\n    return \"23C\"\n\nmodel = genai.GenerativeModel(\n    model_name=\"gemini-1.5-flash\",\n    tools=[get_current_weather]\n)\n\nresponse = model.generate_content(\"What is the weather in San Francisco?\")\nfunction_call = response.candidates[0].content.parts[0].function_call\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\ndef get_current_weather(location: str) -\u003e str:\n    \"\"\"Get the current weather in a given location.\n\n    Args:\n        location: required, The city and state, e.g. San Francisco, CA\n    \"\"\"\n    print(f'Called with: {location=}')\n    return \"23C\"\n\nresponse = client.models.generate_content(\n   model='gemini-2.0-flash',\n   contents=\"What is the weather in San Francisco?\",\n   config=types.GenerateContentConfig(\n       tools=[get_current_weather],\n       automatic_function_calling={'disable': True},\n   ),\n)\n\nfunction_call = response.candidates[0].content.parts[0].function_call\n```\n\n### Automatic function calling\n\nThe old SDK only supports automatic function calling in chat.\n
In the new SDK,\nthis is the default behavior in `generate_content`.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\ndef get_current_weather(city: str) -\u003e str:\n    return \"23C\"\n\nmodel = genai.GenerativeModel(\n    model_name=\"gemini-1.5-flash\",\n    tools=[get_current_weather]\n)\n\nchat = model.start_chat(\n    enable_automatic_function_calling=True)\nresult = chat.send_message(\"What is the weather in San Francisco?\")\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\ndef get_current_weather(city: str) -\u003e str:\n    return \"23C\"\n\nresponse = client.models.generate_content(\n   model='gemini-2.0-flash',\n   contents=\"What is the weather in San Francisco?\",\n   config=types.GenerateContentConfig(\n       tools=[get_current_weather]\n   ),\n)\n```\n\n## Code execution\n\nCode execution is a tool that allows the model to generate Python code, run it,\nand return the result.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel(\n    model_name=\"gemini-1.5-flash\",\n    tools=\"code_execution\"\n)\n\nresult = model.generate_content(\n  \"What is the sum of the first 50 prime numbers? Generate and run code for \"\n  \"the calculation, and make sure you get all 50.\")\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash',\n    contents='What is the sum of the first 50 prime numbers? 
Generate and run '\n             'code for the calculation, and make sure you get all 50.',\n    config=types.GenerateContentConfig(\n        tools=[types.Tool(code_execution=types.CodeExecution())],\n    ),\n)\n```\n\n## Search grounding\n\n`GoogleSearch` (Gemini \u003e= 2.0) and `GoogleSearchRetrieval` (Gemini \u003c 2.0) are tools\nthat allow the model to retrieve public web data for grounding, powered by Google.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = model.generate_content(\n    contents=\"what is the Google stock price?\",\n    tools='google_search_retrieval'\n)\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash',\n    contents='What is the Google stock price?',\n    config=types.GenerateContentConfig(\n        tools=[\n            types.Tool(\n                google_search=types.GoogleSearch()\n            )\n        ]\n    )\n)\n```\n\n## JSON response\n\nGenerate answers in JSON format.\n\nBy specifying a `response_schema` and setting\n`response_mime_type=\"application/json\"`, users can constrain the model to produce a\nJSON response following a given structure. The new SDK uses `pydantic` classes\nto provide the schema (although you can pass a `genai.types.Schema`, or an equivalent\n`dict`). When possible, the SDK will parse the returned JSON and return the\nresult in `response.parsed`.\n
If you provided a `pydantic` class as the schema, the\nSDK will convert the JSON to an instance of the class.\n\n**Before**\n\n```python\nimport google.generativeai as genai\nimport typing_extensions as typing\n\nclass CountryInfo(typing.TypedDict):\n    name: str\n    population: int\n    capital: str\n    continent: str\n    major_cities: list[str]\n    gdp: int\n    official_language: str\n    total_area_sq_mi: int\n\nmodel = genai.GenerativeModel(model_name=\"gemini-1.5-flash\")\nresult = model.generate_content(\n    \"Give me information about the United States\",\n    generation_config=genai.GenerationConfig(\n        response_mime_type=\"application/json\",\n        response_schema=CountryInfo\n    ),\n)\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom pydantic import BaseModel\nclient = genai.Client()\n\nclass CountryInfo(BaseModel):\n    name: str\n    population: int\n    capital: str\n    continent: str\n    major_cities: list[str]\n    gdp: int\n    official_language: str\n    total_area_sq_mi: int\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash',\n    contents='Give me information about the United States.',\n    config={\n        'response_mime_type': 'application/json',\n        'response_schema': CountryInfo,\n    },\n)\n\nresponse.parsed\n```\n\n## Files\n\n### Upload\n\nUpload a file:\n\n**Before**\n\n```python\nimport requests\nimport pathlib\nimport google.generativeai as genai\n\n# Download file\nresponse = requests.get(\n    'https://storage.googleapis.com/generativeai-downloads/data/a11.txt')\npathlib.Path('a11.txt').write_text(response.text)\n\nmy_file = genai.upload_file(path='a11.txt')\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = model.generate_content([\n    'Can you summarize this file:',\n    my_file\n])\nprint(response.text)\n```\n\n**After**\n\n```python\nimport requests\nimport pathlib\nfrom google import genai\nclient = genai.Client()\n\n# Download file\nresponse = 
requests.get(\n    'https://storage.googleapis.com/generativeai-downloads/data/a11.txt')\npathlib.Path('a11.txt').write_text(response.text)\n\nmy_file = client.files.upload(file='a11.txt')\n\nresponse = client.models.generate_content(\n    model='gemini-2.0-flash',\n    contents=[\n        'Can you summarize this file:',\n        my_file\n    ]\n)\nprint(response.text)\n```\n\n### List and get\n\nList uploaded files and get an uploaded file by name:\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nfor file in genai.list_files():\n    print(file.name)\n\nfile = genai.get_file(name=file.name)\n```\n\n**After**\n\n```python\nfrom google import genai\nclient = genai.Client()\n\nfor file in client.files.list():\n    print(file.name)\n\nfile = client.files.get(name=file.name)\n```\n\n\n### Delete\n\nDelete a file:\n\n**Before**\n\n```python\nimport pathlib\nimport google.generativeai as genai\n\npathlib.Path('dummy.txt').write_text('Dummy file contents.')\ndummy_file = genai.upload_file(path='dummy.txt')\n\nfile = genai.delete_file(name=dummy_file.name)\n```\n\n**After**\n\n```python\nimport pathlib\nfrom google import genai\nclient = genai.Client()\n\npathlib.Path('dummy.txt').write_text('Dummy file contents.')\ndummy_file = client.files.upload(file='dummy.txt')\n\nresponse = client.files.delete(name=dummy_file.name)\n```\n\n\n## Context caching\n\nContext caching allows the user to pass the content to the model once, cache the\ninput tokens, and then refer to the cached tokens in subsequent calls to lower the\ncost.\n\n**Before**\n\n```python\nimport requests\nimport pathlib\nimport google.generativeai as genai\nfrom google.generativeai import caching\n\n# Download file\nresponse = requests.get(\n    'https://storage.googleapis.com/generativeai-downloads/data/a11.txt')\npathlib.Path('a11.txt').write_text(response.text)\n\n\n# Upload file\ndocument = genai.upload_file(path=\"a11.txt\")\n\n# Create cache\napollo_cache = caching.CachedContent.create(\n    
model=\"gemini-1.5-flash-001\",\n    system_instruction=\"You are an expert at analyzing transcripts.\",\n    contents=[document],\n)\n\n# Generate response\napollo_model = genai.GenerativeModel.from_cached_content(\n    cached_content=apollo_cache\n)\nresponse = apollo_model.generate_content(\"Find a lighthearted moment from this transcript\")\n```\n\n**After**\n\n```python\nimport requests\nimport pathlib\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\n# Check which models support caching.\nfor m in client.models.list():\n  for action in m.supported_actions:\n    if action == \"createCachedContent\":\n      print(m.name) \n      break\n\n# Download file\nresponse = requests.get(\n    'https://storage.googleapis.com/generativeai-downloads/data/a11.txt')\npathlib.Path('a11.txt').write_text(response.text)\n\n\n# Upload file\ndocument = client.files.upload(file='a11.txt')\n\n# Create cache\nmodel='gemini-1.5-flash-001'\napollo_cache = client.caches.create(\n      model=model,\n      config={\n          'contents': [document],\n          'system_instruction': 'You are an expert at analyzing transcripts.',\n      },\n  )\n\n# Generate response\nresponse = client.models.generate_content(\n    model=model,\n    contents='Find a lighthearted moment from this transcript',\n    config=types.GenerateContentConfig(\n        cached_content=apollo_cache.name,\n    )\n)\n```\n\n## Count tokens\n\nCount the number of tokens in a request.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nmodel = genai.GenerativeModel('gemini-1.5-flash')\nresponse = model.count_tokens(\n    'The quick brown fox jumps over the lazy dog.')\n\n```\n\n**After**\n\n```python\nfrom google import genai\nclient = genai.Client()\n\nresponse = client.models.count_tokens(\n    model='gemini-2.0-flash',\n    contents='The quick brown fox jumps over the lazy dog.',\n)\n```\n\n## Generate images\n\nGenerate images:\n\n**Before**\n\n```python\n#pip install 
git+https://github.com/google-gemini/generative-ai-python@imagen\nimport google.generativeai as genai\n\nimagen = genai.ImageGenerationModel(\n    \"imagen-3.0-generate-001\")\ngen_images = imagen.generate_images(\n    prompt=\"Robot holding a red skateboard\",\n    number_of_images=1,\n    safety_filter_level=\"block_only_high\",\n    person_generation=\"allow_adult\",\n    aspect_ratio=\"3:4\",\n    negative_prompt=\"Outside\",\n)\n```\n\n**After**\n\n```python\nimport pathlib\nfrom google import genai\nfrom google.genai import types\nclient = genai.Client()\n\ngen_images = client.models.generate_image(\n    model='imagen-3.0-generate-001',\n    prompt='Robot holding a red skateboard',\n    config=types.GenerateImageConfig(\n        number_of_images=1,\n        safety_filter_level=\"BLOCK_ONLY_HIGH\",\n        person_generation=\"ALLOW_ADULT\",\n        aspect_ratio=\"3:4\",\n        negative_prompt=\"Outside\",\n    )\n)\n\nfor n, image in enumerate(gen_images.generated_images):\n    pathlib.Path(f'{n}.png').write_bytes(\n        image.image.image_bytes)\n```\n\n\n## Embed content\n\nGenerate content embeddings.\n\n**Before**\n\n```python\nimport google.generativeai as genai\n\nresponse = genai.embed_content(\n    model='models/text-embedding-004',\n    content='Hello world'\n)\n```\n\n**After**\n\n```python\nfrom google import genai\nclient = genai.Client()\n\nresponse = client.models.embed_content(\n    model='text-embedding-004',\n    contents='Hello world',\n)\n```\n\n## Tune a Model\n\nCreate and use a tuned model.\n\nThe new SDK simplifies tuning with `client.tunings.tune`, which launches the\ntuning job and polls until the job is complete.\n\n**Before**\n\n```python\nimport google.generativeai as genai\nimport random\n\n# create tuning model\ntrain_data = {}\nfor i in range(1, 6):\n    key = f'input {i}'\n    value = f'output {i}'\n    train_data[key] = value\n\nname = f'generate-num-{random.randint(0,10000)}'\noperation = genai.create_tuned_model(\n    
source_model='models/gemini-1.5-flash-001-tuning',\n    training_data=train_data,\n    id=name,\n    epoch_count=5,\n    batch_size=4,\n    learning_rate=0.001,\n)\n# wait for tuning to complete\ntuning_progress = operation.result()\n\n# generate content with the tuned model\nmodel = genai.GenerativeModel(model_name=f'tunedModels/{name}')\nresponse = model.generate_content('55')\n```\n\n**After**\n\n```python\nfrom google import genai\nfrom google.genai import types\n\nclient = genai.Client()\n\n# Check which models are available for tuning.\nfor m in client.models.list():\n    for action in m.supported_actions:\n        if action == \"createTunedModel\":\n            print(m.name)\n            break\n\n# create tuning model\ntraining_dataset = types.TuningDataset(\n    examples=[\n        types.TuningExample(\n            text_input=f'input {i}',\n            output=f'output {i}',\n        )\n        for i in range(1, 6)\n    ],\n)\ntuning_job = client.tunings.tune(\n    base_model='models/gemini-1.5-flash-001-tuning',\n    training_dataset=training_dataset,\n    config=types.CreateTuningJobConfig(\n        epoch_count=5,\n        batch_size=4,\n        learning_rate=0.001,\n        tuned_model_display_name=\"test tuned model\"\n    )\n)\n\n# generate content with the tuned model\nresponse = client.models.generate_content(\n    model=tuning_job.tuned_model.model,\n    contents='55',\n)\n```\n","funding_links":[],"categories":["Python","LLM Providers"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-gemini%2Fgenerative-ai-python","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoogle-gemini%2Fgenerative-ai-python","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-gemini%2Fgenerative-ai-python/lists"}