{"id":19945796,"url":"https://github.com/google-gemini-php/client","last_synced_at":"2025-12-29T19:29:05.431Z","repository":{"id":221963100,"uuid":"755889931","full_name":"google-gemini-php/client","owner":"google-gemini-php","description":"⚡️ Gemini PHP is a community-maintained PHP API client that allows you to interact with the Gemini AI API.","archived":false,"fork":false,"pushed_at":"2025-05-07T12:28:07.000Z","size":269,"stargazers_count":239,"open_issues_count":22,"forks_count":59,"subscribers_count":12,"default_branch":"main","last_synced_at":"2025-05-07T13:18:30.760Z","etag":null,"topics":["gemini","gemini-api","google-ai","google-ai-client-sdk","google-ai-studio"],"latest_commit_sha":null,"homepage":"","language":"PHP","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-gemini-php.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-02-11T11:53:44.000Z","updated_at":"2025-05-07T12:28:12.000Z","dependencies_parsed_at":"2024-04-23T19:47:06.621Z","dependency_job_id":"0f175da3-ab3d-4ab8-8124-88a6ef337950","html_url":"https://github.com/google-gemini-php/client","commit_stats":{"total_commits":52,"total_committers":6,"mean_commits":8.666666666666666,"dds":"0.11538461538461542","last_synced_commit":"11e7413b231dd2ab7ca6169f780d6cc4eeae41a1"},"previous_names":["google-gemini-php/client"],"tags_count":24,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-gemini-php%2Fclient","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repos
itories/google-gemini-php%2Fclient/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-gemini-php%2Fclient/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-gemini-php%2Fclient/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-gemini-php","download_url":"https://codeload.github.com/google-gemini-php/client/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254270646,"owners_count":22042859,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["gemini","gemini-api","google-ai","google-ai-client-sdk","google-ai-studio"],"created_at":"2024-11-13T00:26:56.690Z","updated_at":"2025-12-29T19:29:05.425Z","avatar_url":"https://github.com/google-gemini-php.png","language":"PHP","funding_links":[],"categories":["LLMs \u0026 AI APIs","LLM Clients \u0026 Adapters"],"sub_categories":["Recommended core stack"],"readme":"\u003cp align=\"center\"\u003e\n    \u003cimg src=\"https://raw.githubusercontent.com/google-gemini-php/client/main/art/example.png\" width=\"600\" alt=\"Google Gemini PHP\"\u003e\n    \u003cp align=\"center\"\u003e\n        \u003ca href=\"https://packagist.org/packages/google-gemini-php/client\"\u003e\u003cimg alt=\"Latest Version\" src=\"https://img.shields.io/packagist/v/google-gemini-php/client\"\u003e\u003c/a\u003e\n        \u003ca href=\"https://packagist.org/packages/google-gemini-php/client\"\u003e\u003cimg alt=\"License\" 
src=\"https://img.shields.io/github/license/google-gemini-php/client\"\u003e\u003c/a\u003e\n    \u003c/p\u003e\n\u003c/p\u003e\n\n------\n\n**Gemini PHP** is a community-maintained PHP API client that allows you to interact with the Gemini AI API.\n\n- Fatih AYDIN [github.com/aydinfatih](https://github.com/aydinfatih)\n- Vytautas Smilingis [github.com/Plytas](https://github.com/Plytas)\n\n## Table of Contents\n- [Prerequisites](#prerequisites)\n- [Setup](#setup)\n  - [Installation](#installation)\n  - [Setup your API key](#setup-your-api-key)\n  - [Upgrade to 2.0](#upgrade-to-20)\n- [Usage](#usage)\n    - [Chat Resource](#chat-resource)\n      - [Text-only Input](#text-only-input)\n      - [Text-and-image Input](#text-and-image-input)\n      - [Text-and-video Input](#text-and-video-input)\n      - [Image Generation](#image-generation)\n      - [Multi-turn Conversations (Chat)](#multi-turn-conversations-chat)\n      - [Chat with Streaming](#chat-with-streaming)\n      - [Stream Generate Content](#stream-generate-content)\n      - [Structured Output](#structured-output)\n      - [Function calling](#function-calling)\n      - [Code Execution](#code-execution)\n      - [Grounding with Google Search](#grounding-with-google-search)\n      - [Grounding with Google Maps](#grounding-with-google-maps)\n      - [Grounding with File Search](#grounding-with-file-search)\n      - [System Instructions](#system-instructions)\n      - [Speech generation](#speech-generation)\n      - [Thinking Mode](#thinking-mode)\n      - [Count tokens](#count-tokens)\n      - [Configuration](#configuration)\n    - [File Management](#file-management)\n      - [File Upload](#file-upload)\n      - [List Files](#list-files)\n      - [Get File Metadata](#get-file-metadata)\n      - [Delete File](#delete-file)\n    - [Cached Content](#cached-content)\n      - [Create Cached Content](#create-cached-content)\n      - [List Cached Content](#list-cached-content)\n      - [Get Cached 
Content](#get-cached-content)\n      - [Update Cached Content](#update-cached-content)\n      - [Delete Cached Content](#delete-cached-content)\n      - [Use Cached Content](#use-cached-content)\n    - [File Search Stores](#file-search-stores)\n      - [Create File Search Store](#create-file-search-store)\n      - [Get File Search Store](#get-file-search-store)\n      - [List File Search Stores](#list-file-search-stores)\n      - [Delete File Search Store](#delete-file-search-store)\n      - [Update File Search Store](#update-file-search-store)\n    - [File Search Documents](#file-search-documents)\n      - [Create File Search Document](#create-file-search-document)\n      - [Get File Search Document](#get-file-search-document)\n      - [List File Search Documents](#list-file-search-documents)\n      - [Delete File Search Document](#delete-file-search-document)\n    - [Embedding Resource](#embedding-resource)\n    - [Models](#models)\n        - [List Models](#list-models)\n        - [Get Model](#get-model)\n- [Troubleshooting](#troubleshooting)\n- [Testing](#testing)\n\n\n## Prerequisites\nTo complete this quickstart, make sure that your development environment meets the following requirements:\n\n- Requires [PHP 8.1+](https://php.net/releases/)\n\n\n## Setup\n\n### Installation\n\nFirst, install Gemini via the [Composer](https://getcomposer.org/) package manager:\n\n```bash\ncomposer require google-gemini-php/client\n```\n\nEnsure that the `php-http/discovery` composer plugin is allowed to run or install a client manually if your project does not already have a PSR-18 client integrated.\n\n```bash\ncomposer require guzzlehttp/guzzle\n```\n\n### Setup your API key\nTo use the Gemini API, you'll need an API key. 
If you don't already have one, create a key in Google AI Studio.\n\n[Get an API key](https://aistudio.google.com/app/apikey)\n\n### Upgrade to 2.0\n\nStarting with the 2.0 release, this package works only with the Gemini v1beta API ([see API versions](https://ai.google.dev/gemini-api/docs/api-versions)).\n\nTo update, run this command:\n\n```bash\ncomposer require google-gemini-php/client:^2.0\n```\n\nThis release introduces support for new features:\n* Structured output\n* System instructions\n* File uploads\n* Function calling\n* Code execution\n* Grounding with Google Search\n* Cached content\n* Thinking model configuration\n* Speech model configuration\n* URL context retrieval\n\nThe `\\Gemini\\Enums\\ModelType` enum has been deprecated and will be removed in the next major version. Along with it, the `$client-\u003egeminiPro()` and `$client-\u003egeminiFlash()` methods have been deprecated as well.\nWe suggest using the `$client-\u003egenerativeModel()` method and passing in the model string directly. All methods that previously accepted the `ModelType` enum now accept a `BackedEnum`. We recommend implementing your own enum for convenience.\n\nThere may be other breaking changes not listed here. If you encounter any issues, please submit an issue or a pull request.\n\n## Usage\n\nInteract with Gemini's API:\n\n```php\nuse Gemini\\Enums\\ModelVariation;\nuse Gemini\\GeminiHelper;\nuse Gemini;\n\n$yourApiKey = getenv('YOUR_API_KEY');\n$client = Gemini::client($yourApiKey);\n\n$result = $client-\u003egenerativeModel(model: 'gemini-2.0-flash')-\u003egenerateContent('Hello');\n$result-\u003etext(); // Hello! How can I assist you today?\n\n// Helper method usage\n$result = $client-\u003egenerativeModel(\n    model: GeminiHelper::generateGeminiModel(\n        variation: ModelVariation::FLASH,\n        generation: 2.5,\n        version: \"preview-04-17\"\n    ), // models/gemini-2.5-flash-preview-04-17\n)-\u003egenerateContent('Hello');\n$result-\u003etext(); // Hello! 
How can I assist you today?\n```\n\nIf necessary, you can configure and create a separate client.\n\n```php\nuse Gemini;\nuse Psr\\Http\\Message\\RequestInterface;\nuse Psr\\Http\\Message\\ResponseInterface;\n\n$yourApiKey = getenv('YOUR_API_KEY');\n\n$client = Gemini::factory()\n    -\u003ewithApiKey($yourApiKey)\n    -\u003ewithBaseUrl('https://generativelanguage.example.com/v1beta') // default: https://generativelanguage.googleapis.com/v1beta/\n    -\u003ewithHttpHeader('X-My-Header', 'foo')\n    -\u003ewithQueryParam('my-param', 'bar')\n    -\u003ewithHttpClient($guzzleClient = new \\GuzzleHttp\\Client(['timeout' =\u003e 30]))  // default: HTTP client found using PSR-18 HTTP Client Discovery\n    -\u003ewithStreamHandler(fn(RequestInterface $request): ResponseInterface =\u003e $guzzleClient-\u003esend($request, [\n        'stream' =\u003e true // Allows providing a custom stream handler for the HTTP client.\n    ]))\n    -\u003emake();\n```\n\n\n### Chat Resource\n\nFor a complete list of supported input formats and methods in Gemini API v1, see the [models documentation](https://ai.google.dev/gemini-api/docs/models).\n\n#### Text-only Input\nGenerate a response from the model given an input message.\n\n```php\nuse Gemini;\n\n$yourApiKey = getenv('YOUR_API_KEY');\n$client = Gemini::client($yourApiKey);\n\n$result = $client-\u003egenerativeModel(model: 'gemini-2.0-flash')-\u003egenerateContent('Hello');\n\n$result-\u003etext(); // Hello! 
How can I assist you today?\n```\n\n#### Text-and-image Input\nGenerate responses by providing both text prompts and images to the Gemini model.\n\n```php\nuse Gemini\\Data\\Blob;\nuse Gemini\\Enums\\MimeType;\n\n$result = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003egenerateContent([\n        'What is this picture?',\n        new Blob(\n            mimeType: MimeType::IMAGE_JPEG,\n            data: base64_encode(\n                file_get_contents('https://storage.googleapis.com/generativeai-downloads/images/scones.jpg')\n            )\n        )\n    ]);\n\n$result-\u003etext(); //  The picture shows a table with a white tablecloth. On the table are two cups of coffee, a bowl of blueberries, a silver spoon, and some flowers. There are also some blueberry scones on the table.\n```\n\n#### Text-and-video Input\nProcess video content and get AI-generated descriptions using the Gemini API with an uploaded video file.\n\n```php\nuse Gemini\\Data\\UploadedFile;\nuse Gemini\\Enums\\MimeType;\n\n$result = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003egenerateContent([\n        'What is this video?',\n        new UploadedFile(\n            fileUri: '123-456', // accepts just the name or the full URI\n            mimeType: MimeType::VIDEO_MP4\n        )\n    ]);\n\n$result-\u003etext(); //  The video shows...\n```\n\n#### Image Generation\nGenerate images from text prompts using the Imagen model.\n\n```php\nuse Gemini\\Data\\ImageConfig;\nuse Gemini\\Data\\GenerationConfig;\n\n$imageConfig = new ImageConfig(aspectRatio: '16:9');\n$generationConfig = new GenerationConfig(imageConfig: $imageConfig);\n\n$response = $client-\u003egenerativeModel(model: 'gemini-2.5-flash-image')\n    -\u003ewithGenerationConfig($generationConfig)\n    -\u003egenerateContent('Draw a futuristic city');\n\n// Save the image\nfile_put_contents('image.png', base64_decode($response-\u003eparts()[0]-\u003einlineData-\u003edata));\n```\n\n#### 
Multi-turn Conversations (Chat)\nUsing Gemini, you can build freeform conversations across multiple turns.\n\n```php\nuse Gemini\\Data\\Content;\nuse Gemini\\Enums\\Role;\n\n$chat = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003estartChat(history: [\n        Content::parse(part: 'The stories you write about what I have to say should be one line. Is that clear?'),\n        Content::parse(part: 'Yes, I understand. The stories I write about your input should be one line long.', role: Role::MODEL)\n    ]);\n\n$response = $chat-\u003esendMessage('Create a story set in a quiet village in 1600s France');\necho $response-\u003etext(); // Amidst rolling hills and winding cobblestone streets, the tranquil village of Beausoleil whispered tales of love, intrigue, and the magic of everyday life in 17th century France.\n\n$response = $chat-\u003esendMessage('Rewrite the same story in 1600s England');\necho $response-\u003etext(); // In the heart of England's lush countryside, amidst emerald fields and thatched-roof cottages, the village of Willowbrook unfolded a tapestry of love, mystery, and the enchantment of ordinary days in the 17th century.\n```\n\n#### Chat with Streaming\nYou can also stream the response in a chat session. The history is automatically updated with the full response after the stream completes.\n\n```php\n$chat = $client-\u003egenerativeModel(model: 'gemini-2.0-flash')-\u003estartChat();\n\n$stream = $chat-\u003estreamSendMessage('Hello');\n\nforeach ($stream as $response) {\n    echo $response-\u003etext();\n}\n```\n\n\n#### Stream Generate Content\nBy default, the model returns a response after completing the entire generation process. 
You can achieve faster interactions by not waiting for the entire result and instead using streaming to handle partial results.\n\n```php\n$stream = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003estreamGenerateContent('Write a long story about a magic backpack.');\n\nforeach ($stream as $response) {\n    echo $response-\u003etext();\n}\n```\n\n#### Structured Output\nGemini generates unstructured text by default, but some applications require structured text. For these use cases, you can constrain Gemini to respond with JSON, a structured data format suitable for automated processing. You can also constrain the model to respond with one of the options specified in an enum.\n\n```php\nuse Gemini\\Data\\GenerationConfig;\nuse Gemini\\Data\\Schema;\nuse Gemini\\Enums\\DataType;\nuse Gemini\\Enums\\ResponseMimeType;\n\n$result = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithGenerationConfig(\n        generationConfig: new GenerationConfig(\n            responseMimeType: ResponseMimeType::APPLICATION_JSON,\n            responseSchema: new Schema(\n                type: DataType::ARRAY,\n                items: new Schema(\n                    type: DataType::OBJECT,\n                    properties: [\n                        'recipe_name' =\u003e new Schema(type: DataType::STRING),\n                        'cooking_time_in_minutes' =\u003e new Schema(type: DataType::INTEGER)\n                    ],\n                    required: ['recipe_name', 'cooking_time_in_minutes'],\n                )\n            )\n        )\n    )\n    -\u003egenerateContent('List 5 popular cookie recipes with cooking time');\n\n$result-\u003ejson();\n\n//[\n//    {\n//      +\"cooking_time_in_minutes\": 10,\n//      +\"recipe_name\": \"Chocolate Chip Cookies\",\n//    },\n//    {\n//      +\"cooking_time_in_minutes\": 12,\n//      +\"recipe_name\": \"Oatmeal Raisin Cookies\",\n//    },\n//    {\n//      +\"cooking_time_in_minutes\": 
10,\n//      +\"recipe_name\": \"Peanut Butter Cookies\",\n//    },\n//    {\n//      +\"cooking_time_in_minutes\": 10,\n//      +\"recipe_name\": \"Snickerdoodles\",\n//    },\n//    {\n//      +\"cooking_time_in_minutes\": 12,\n//      +\"recipe_name\": \"Sugar Cookies\",\n//    },\n//  ]\n\n```\n\n#### Function calling\nGemini provides the ability to define and utilize custom functions that the model can call during conversations. This enables the model to perform specific actions or calculations through your defined functions.\n\n```php\n\u003c?php\n\nuse Gemini\\Data\\Content;\nuse Gemini\\Data\\FunctionCall;\nuse Gemini\\Data\\FunctionDeclaration;\nuse Gemini\\Data\\FunctionResponse;\nuse Gemini\\Data\\Part;\nuse Gemini\\Data\\Schema;\nuse Gemini\\Data\\Tool;\nuse Gemini\\Enums\\DataType;\nuse Gemini\\Enums\\Role;\n\nfunction handleFunctionCall(FunctionCall $functionCall): Content\n{\n    if ($functionCall-\u003ename === 'addition') {\n        return new Content(\n            parts: [\n                new Part(\n                    functionResponse: new FunctionResponse(\n                        name: 'addition',\n                        response: ['answer' =\u003e $functionCall-\u003eargs['number1'] + $functionCall-\u003eargs['number2']],\n                    ),\n                    thoughtSignature: 'some-signature' // Optional: Required for some models (e.g. 
Gemini 3 Pro)\n                )\n            ],\n            role: Role::USER\n        );\n    }\n\n    //Handle other function calls\n}\n\n$chat = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithTool(new Tool(\n        functionDeclarations: [\n            new FunctionDeclaration(\n                name: 'addition',\n                description: 'Performs addition',\n                parameters: new Schema(\n                    type: DataType::OBJECT,\n                    properties: [\n                        'number1' =\u003e new Schema(\n                            type: DataType::NUMBER,\n                            description: 'First number'\n                        ),\n                        'number2' =\u003e new Schema(\n                            type: DataType::NUMBER,\n                            description: 'Second number'\n                        ),\n                    ],\n                    required: ['number1', 'number2']\n                )\n            )\n        ]\n    ))\n    -\u003estartChat();\n\n$response = $chat-\u003esendMessage('What is 4 + 3?');\n\nif ($response-\u003eparts()[0]-\u003efunctionCall !== null) {\n    $thoughtSignature = $response-\u003eparts()[0]-\u003ethoughtSignature; // Access the thought signature\n    $functionResponse = handleFunctionCall($response-\u003eparts()[0]-\u003efunctionCall);\n\n    $response = $chat-\u003esendMessage($functionResponse);\n}\n\necho $response-\u003etext(); // 4 + 3 = 7\n```\n\n#### Code Execution\nGemini models can generate and execute code automatically, and return the result to you. 
This is useful for tasks that require computation, data manipulation, or other programmatic operations.\n\n```php\nuse Gemini\\Data\\CodeExecution;\nuse Gemini\\Data\\Tool;\n\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithTool(new Tool(codeExecution: CodeExecution::from()))\n    -\u003egenerateContent('What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50.');\n\n// Access the executed code and results\nforeach ($response-\u003eparts() as $part) {\n    if ($part-\u003eexecutableCode !== null) {\n        echo \"Language: \" . $part-\u003eexecutableCode-\u003elanguage-\u003evalue . \"\\n\";\n        echo \"Code: \" . $part-\u003eexecutableCode-\u003ecode . \"\\n\";\n    }\n    if ($part-\u003ecodeExecutionResult !== null) {\n        echo \"Outcome: \" . $part-\u003ecodeExecutionResult-\u003eoutcome-\u003evalue . \"\\n\";\n        echo \"Output: \" . $part-\u003ecodeExecutionResult-\u003eoutput . \"\\n\";\n    }\n}\n```\n\n#### Grounding with Google Search\nGrounding with Google Search connects the Gemini model to real-time web content and works with all available languages. 
This allows Gemini to provide more accurate answers and cite verifiable sources beyond its knowledge cutoff.\n\n**For Gemini 2.0 and later models (Recommended):**\n\nUse the simple `GoogleSearch` tool which automatically handles search queries:\n\n```php\nuse Gemini\\Data\\GoogleSearch;\nuse Gemini\\Data\\Tool;\n\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithTool(new Tool(googleSearch: GoogleSearch::from()))\n    -\u003egenerateContent('Who won the Euro 2024?');\n\necho $response-\u003etext();\n// Spain won Euro 2024, defeating England 2-1 in the final.\n\n// Access grounding metadata to see sources\n$groundingMetadata = $response-\u003ecandidates[0]-\u003egroundingMetadata;\nif ($groundingMetadata !== null) {\n    // Get the search queries that were executed\n    foreach ($groundingMetadata-\u003ewebSearchQueries ?? [] as $query) {\n        echo \"Search query: {$query}\\n\";\n    }\n    \n    // Get the web sources\n    foreach ($groundingMetadata-\u003egroundingChunks ?? [] as $chunk) {\n        if ($chunk-\u003eweb !== null) {\n            echo \"Source: {$chunk-\u003eweb-\u003etitle} - {$chunk-\u003eweb-\u003euri}\\n\";\n        }\n    }\n    \n    // Get grounding supports (links text segments to sources)\n    foreach ($groundingMetadata-\u003egroundingSupports ?? [] as $support) {\n        if ($support-\u003esegment !== null) {\n            echo \"Text segment: {$support-\u003esegment-\u003etext}\\n\";\n            echo \"Supported by chunks: \" . implode(', ', $support-\u003egroundingChunkIndices ?? []) . \"\\n\";\n        }\n    }\n}\n```\n\n#### Grounding with Google Maps\nGrounding with Google Maps allows the model to utilize real-world geographical data. 
This enables more precise location-based responses, such as finding nearby points of interest.\n\n```php\nuse Gemini\\Data\\GoogleMaps;\nuse Gemini\\Data\\RetrievalConfig;\nuse Gemini\\Data\\Tool;\nuse Gemini\\Data\\ToolConfig;\n\n$tool = new Tool(\n    googleMaps: new GoogleMaps(enableWidget: true)\n);\n\n$toolConfig = new ToolConfig(\n    retrievalConfig: new RetrievalConfig(\n        latitude: 40.758896,\n        longitude: -73.985130\n    )\n);\n\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithTool($tool)\n    -\u003ewithToolConfig($toolConfig)\n    -\u003egenerateContent('Find coffee shops near me');\n\necho $response-\u003etext();\n// (Model output referencing coffee shops)\n```\n\n#### Grounding with File Search\nGrounding with File Search enables the model to retrieve and utilize information from your indexed files. This is useful for answering questions based on private or extensive document collections.\n\n```php\nuse Gemini\\Data\\FileSearch;\nuse Gemini\\Data\\Tool;\n\n$tool = new Tool(\n    fileSearch: new FileSearch(\n        fileSearchStoreNames: ['files/my-document-store'],\n        metadataFilter: 'author = \"Robert Graves\"'\n    )\n);\n\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithTool($tool)\n    -\u003egenerateContent('Summarize the document about Greek myths by Robert Graves');\n\necho $response-\u003etext();\n// (Model output summarizing the document)\n```\n\n#### System Instructions\nSystem instructions let you steer the behavior of the model based on your specific needs and use cases. 
You can set the role and personality of the model, define the format of responses, and provide goals and guardrails for model behavior.\n\n```php\nuse Gemini\\Data\\Content;\n\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithSystemInstruction(\n        Content::parse('You are a helpful assistant that always responds in the style of a pirate. Use nautical terms and pirate slang in all your responses.')\n    )\n    -\u003egenerateContent('Tell me about PHP programming');\n\necho $response-\u003etext();\n// Ahoy there, matey! Let me tell ye about this fine treasure called PHP programming...\n```\n\nYou can also combine system instructions with other features:\n\n```php\nuse Gemini\\Data\\Content;\nuse Gemini\\Data\\GenerationConfig;\nuse Gemini\\Enums\\ResponseMimeType;\n\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithSystemInstruction(\n        Content::parse('You are a JSON API. Always respond with valid JSON objects. Be concise.')\n    )\n    -\u003ewithGenerationConfig(\n        new GenerationConfig(responseMimeType: ResponseMimeType::APPLICATION_JSON)\n    )\n    -\u003egenerateContent('Give me information about the Eiffel Tower');\n\nprint_r($response-\u003ejson());\n```\n\n#### Speech generation\nGemini allows generating [speech from a text](https://ai.google.dev/gemini-api/docs/speech-generation). To use that, make sure to use a model that supports this functionality. 
The model will output base64 encoded audio string.\n\n##### Single speaker\n\n```php\nuse Gemini\\Data\\GenerationConfig;\nuse Gemini\\Data\\SpeechConfig;\nuse Gemini\\Data\\VoiceConfig;\nuse Gemini\\Data\\PrebuiltVoiceConfig;\nuse Gemini\\Enums\\ResponseModality;\n\n$response = $client-\u003egenerativeModel('gemini-2.5-flash-preview-tts')-\u003ewithGenerationConfig(\n    generationConfig: new GenerationConfig(\n        responseModalities: [ResponseModality::AUDIO],\n        speechConfig: new SpeechConfig(\n            voiceConfig: new VoiceConfig(\n                new PrebuiltVoiceConfig(voiceName: 'Kore')\n            ),\n        )\n    )\n)-\u003egenerateContent(\"Say: Hello world\");\n\n// The response contains base64 encoded audio\n$audioData = $response-\u003eparts()[0]-\u003einlineData-\u003edata;\n```\n\n##### Multi speaker\n\n```php\nuse Gemini\\Data\\GenerationConfig;\nuse Gemini\\Data\\SpeechConfig;\nuse Gemini\\Data\\MultiSpeakerVoiceConfig;\nuse Gemini\\Data\\PrebuiltVoiceConfig;\nuse Gemini\\Data\\SpeakerVoiceConfig;\nuse Gemini\\Data\\VoiceConfig;\nuse Gemini\\Enums\\ResponseModality;\n\n$response = $client-\u003egenerativeModel('gemini-2.5-flash-preview-tts')-\u003ewithGenerationConfig(\n    generationConfig: new GenerationConfig(\n        responseModalities: [ResponseModality::AUDIO],\n        speechConfig: new SpeechConfig(\n            multiSpeakerVoiceConfig: new MultiSpeakerVoiceConfig([\n                new SpeakerVoiceConfig(\n                    speaker: 'Joe',\n                    voiceConfig: new VoiceConfig(\n                        new PrebuiltVoiceConfig('Kore'),\n                    )\n                ),\n                new SpeakerVoiceConfig(\n                    speaker: 'Jane',\n                    voiceConfig: new VoiceConfig(\n                        new PrebuiltVoiceConfig('Puck'),\n                    )\n                )\n            ]),\n            languageCode: 'en-GB'\n        )\n    )\n)-\u003egenerateContent(\"TTS the 
following conversation between Joe and Jane:\\nJoe: How's it going today Jane?\\nJane: Not too bad, how about you?\");\n\n// The response contains base64 encoded audio\n$audioData = $response-\u003eparts()[0]-\u003einlineData-\u003edata;\n```\n\n#### Thinking Mode\nFor models that support thinking mode (like Gemini 2.0), you can configure the model to show its reasoning process. This is useful for complex problem-solving and understanding how the model arrives at its answers.\n\n```php\nuse Gemini\\Data\\GenerationConfig;\nuse Gemini\\Data\\ThinkingConfig;\n\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash-thinking-exp')\n    -\u003ewithGenerationConfig(\n        new GenerationConfig(\n            thinkingConfig: new ThinkingConfig(\n                includeThoughts: true,\n                thinkingBudget: 1024\n            )\n        )\n    )\n    -\u003egenerateContent('Solve this logic puzzle: If all Bloops are Razzies and all Razzies are Lazzies, are all Bloops definitely Lazzies?');\n\n// Access the model's thoughts and final answer\nforeach ($response-\u003ecandidates[0]-\u003econtent-\u003eparts as $part) {\n    if ($part-\u003ethought === true) {\n        // This part contains the model's thinking process\n        echo \"Model's thinking: \" . $part-\u003etext . \"\\n\\n\";\n    } else if ($part-\u003etext !== null) {\n        // This is the final answer\n        echo \"Final answer: \" . $part-\u003etext . \"\\n\";\n    }\n}\n```\n\n#### Count tokens\nWhen using long prompts, it might be useful to count tokens before sending any content to the model.\n\n```php\n$response = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ecountTokens('Write a story about a magic backpack.');\n\necho $response-\u003etotalTokens; // 9\n```\n\n#### Configuration\nEvery prompt you send to the model includes parameter values that control how the model generates a response. 
The model can generate different results for different parameter values. Learn more about [model parameters](https://ai.google.dev/docs/concepts#model_parameters).\n\nAlso, you can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with medium and/or high probability of being unsafe content across all dimensions. Learn more about [safety settings](https://ai.google.dev/docs/concepts#safety_setting).\n\nWhen using tools like `GoogleMaps`, you may also provide additional configuration via `ToolConfig`, such as `RetrievalConfig` for geographical context.\n\n```php\nuse Gemini\\Data\\GenerationConfig;\nuse Gemini\\Enums\\HarmBlockThreshold;\nuse Gemini\\Data\\SafetySetting;\nuse Gemini\\Enums\\HarmCategory;\n\n$safetySettingDangerousContent = new SafetySetting(\n    category: HarmCategory::HARM_CATEGORY_DANGEROUS_CONTENT,\n    threshold: HarmBlockThreshold::BLOCK_ONLY_HIGH\n);\n\n$safetySettingHateSpeech = new SafetySetting(\n    category: HarmCategory::HARM_CATEGORY_HATE_SPEECH,\n    threshold: HarmBlockThreshold::BLOCK_ONLY_HIGH\n);\n\n$generationConfig = new GenerationConfig(\n    stopSequences: [\n        'Title',\n    ],\n    maxOutputTokens: 800,\n    temperature: 1,\n    topP: 0.8,\n    topK: 10\n);\n\n$generativeModel = $client\n    -\u003egenerativeModel(model: 'gemini-2.0-flash')\n    -\u003ewithSafetySetting($safetySettingDangerousContent)\n    -\u003ewithSafetySetting($safetySettingHateSpeech)\n    -\u003ewithGenerationConfig($generationConfig)\n    -\u003egenerateContent('Write a story about a magic backpack.');\n```\n\n### File Management\n\nThe File API lets you store up to 20GB of files per project, with a per-file maximum size of 2GB. 
Files are stored for 48 hours and can be accessed in API calls.\n\n#### File Upload\nTo reference larger files and videos with various prompts, upload them to Gemini storage.\n\n```php\nuse Gemini\\Enums\\FileState;\nuse Gemini\\Enums\\MimeType;\n\n$files = $client-\u003efiles();\necho \"Uploading\\n\";\n$meta = $files-\u003eupload(\n    filename: 'video.mp4',\n    mimeType: MimeType::VIDEO_MP4,\n    displayName: 'Video'\n);\necho \"Processing\";\ndo {\n    echo \".\";\n    sleep(2);\n    $meta = $files-\u003emetadataGet($meta-\u003euri);\n} while (!$meta-\u003estate-\u003ecomplete());\necho \"\\n\";\n\nif ($meta-\u003estate == FileState::Failed) {\n    die(\"Upload failed:\\n\" . json_encode($meta-\u003etoArray(), JSON_PRETTY_PRINT));\n}\n\necho \"Processing complete\\n\" . json_encode($meta-\u003etoArray(), JSON_PRETTY_PRINT);\necho \"\\n{$meta-\u003euri}\";\n```\n\n#### List Files\nList all uploaded files in your project.\n\n```php\n$response = $client-\u003efiles()-\u003elist(pageSize: 10);\n\nforeach ($response-\u003efiles as $file) {\n    echo \"Name: {$file-\u003ename}\\n\";\n    echo \"Display Name: {$file-\u003edisplayName}\\n\";\n    echo \"Size: {$file-\u003esizeBytes} bytes\\n\";\n    echo \"MIME Type: {$file-\u003emimeType}\\n\";\n    echo \"State: {$file-\u003estate-\u003evalue}\\n\";\n    echo \"---\\n\";\n}\n\n// Get next page if available\nif ($response-\u003enextPageToken) {\n    $nextPage = $client-\u003efiles()-\u003elist(pageSize: 10, nextPageToken: $response-\u003enextPageToken);\n}\n```\n\n#### Get File Metadata\nRetrieve metadata for a specific file.\n\n```php\n$meta = $client-\u003efiles()-\u003emetadataGet('abc123');\n// or use the full URI\n$meta = $client-\u003efiles()-\u003emetadataGet($file-\u003euri);\n\necho \"File: {$meta-\u003edisplayName}\\n\";\necho \"State: {$meta-\u003estate-\u003evalue}\\n\";\necho \"Size: {$meta-\u003esizeBytes} bytes\\n\";\n```\n\n#### Delete File\nDelete a file from Gemini 
storage.

```php
$client->files()->delete('files/abc123');
// or use the full URI
$client->files()->delete($file->uri);
```

### Cached Content

Context caching allows you to save and reuse precomputed input tokens for frequently used content. This reduces costs and latency for requests with large amounts of shared context.

#### Create Cached Content
Cache content that you'll reuse across multiple requests.

```php
use Gemini\Data\Content;

$cachedContent = $client->cachedContents()->create(
    model: 'gemini-2.0-flash',
    systemInstruction: Content::parse('You are an expert PHP developer.'),
    parts: [
        'This is a large codebase...',
        'File 1 contents...',
        'File 2 contents...'
    ],
    ttl: '3600s', // Cache for 1 hour
    displayName: 'PHP Codebase Cache'
);

echo "Cached content created: {$cachedContent->name}\n";
```

#### List Cached Content
List all cached content in your project.

```php
$response = $client->cachedContents()->list(pageSize: 10);

foreach ($response->cachedContents as $cached) {
    echo "Name: {$cached->name}\n";
    echo "Display Name: {$cached->displayName}\n";
    echo "Model: {$cached->model}\n";
    echo "Expires: {$cached->expireTime}\n";
    echo "---\n";
}
```

#### Get Cached Content
Retrieve a specific cached content by name.

```php
$cached = $client->cachedContents()->retrieve('cachedContents/abc123');

echo "Model: {$cached->model}\n";
echo "Created: {$cached->createTime}\n";
echo "Expires: {$cached->expireTime}\n";
```

#### Update Cached Content
Update the expiration time of cached content.

```php
// Set a new TTL (relative to now)
$updated = $client->cachedContents()->update(
    name: 'cachedContents/abc123',
    ttl: '7200s' // Expire 2 hours from now
);

// Or set an absolute expiration time
$updated =
$client->cachedContents()->update(
    name: 'cachedContents/abc123',
    expireTime: '2024-12-31T23:59:59Z'
);
```

#### Delete Cached Content
Delete cached content when it is no longer needed.

```php
$client->cachedContents()->delete('cachedContents/abc123');
```

#### Use Cached Content
Use cached content in your requests to save tokens and reduce latency.

```php
$response = $client
    ->generativeModel(model: 'gemini-2.0-flash')
    ->withCachedContent('cachedContents/abc123')
    ->generateContent('Explain the main function in this codebase');

echo $response->text();

// Check token usage
echo "Cached tokens used: {$response->usageMetadata->cachedContentTokenCount}\n";
echo "New tokens used: {$response->usageMetadata->promptTokenCount}\n";
```

### File Search Stores

File search allows you to search files that were uploaded through the File API.

#### Create File Search Store
Create a file search store.

```php
use Gemini\Enums\FileState;
use Gemini\Enums\MimeType;

$files = $client->files();
echo "Uploading\n";
$meta = $files->upload(
    filename: 'document.pdf',
    mimeType: MimeType::APPLICATION_PDF,
    displayName: 'Document for search'
);
echo "Processing";
do {
    echo ".";
    sleep(2);
    $meta = $files->metadataGet($meta->uri);
} while (!
$meta->state->complete());
echo "\n";

if ($meta->state === FileState::Failed) {
    die("Upload failed:\n" . json_encode($meta->toArray(), JSON_PRETTY_PRINT));
}

$fileSearchStore = $client->fileSearchStores()->create(
    displayName: 'My Search Store',
);

echo "File search store created: {$fileSearchStore->name}\n";
```

#### Get File Search Store
Get a specific file search store by name.

```php
$fileSearchStore = $client->fileSearchStores()->get('fileSearchStores/my-search-store');

echo "Name: {$fileSearchStore->name}\n";
echo "Display Name: {$fileSearchStore->displayName}\n";
```

#### List File Search Stores
List all file search stores.

```php
$response = $client->fileSearchStores()->list(pageSize: 10);

foreach ($response->fileSearchStores as $fileSearchStore) {
    echo "Name: {$fileSearchStore->name}\n";
    echo "Display Name: {$fileSearchStore->displayName}\n";
    echo "---\n";
}
```

#### Delete File Search Store
Delete a file search store by name.

```php
$client->fileSearchStores()->delete('fileSearchStores/my-search-store');
```

### File Search Documents

#### Upload File Search Document
Upload a local file directly to a file search store.

```php
use Gemini\Enums\MimeType;

$response = $client->fileSearchStores()->upload(
    storeName: 'fileSearchStores/my-search-store',
    filename: 'document2.pdf',
    mimeType: MimeType::APPLICATION_PDF,
    displayName: 'Another Search Document'
);

echo "File search document upload operation: {$response->name}\n";
```

#### Get File Search Document
Get a specific file search document by name.

```php
$fileSearchDocument = $client->fileSearchStores()->getDocument('fileSearchStores/my-search-store/fileSearchDocuments/my-document');

echo "Name: {$fileSearchDocument->name}\n";
echo "Display Name: {$fileSearchDocument->displayName}\n";
```

#### List File Search Documents
List all file search documents within a store.

```php
$response = $client->fileSearchStores()->listDocuments(storeName: 'fileSearchStores/my-search-store', pageSize: 10);

foreach ($response->documents as $fileSearchDocument) {
    echo "Name: {$fileSearchDocument->name}\n";
    echo "Display Name: {$fileSearchDocument->displayName}\n";
    echo "Create Time: {$fileSearchDocument->createTime}\n";
    echo "Update Time: {$fileSearchDocument->updateTime}\n";
    echo "---\n";
}
```

#### Delete File Search Document
Delete a file search document by name.

```php
$client->fileSearchStores()->deleteDocument('fileSearchStores/my-search-store/fileSearchDocuments/my-document');
```

### Embedding Resource
Embedding is a technique used to represent information as a list of floating-point numbers in an array. With Gemini, you can represent text (words, sentences, and blocks of text) in a vectorized form, making it easier to compare and contrast embeddings.
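To make the comparison concrete, here is a minimal sketch of cosine similarity in plain PHP. The `cosineSimilarity` helper is hypothetical and not part of this client; it operates on the raw `values` arrays returned by the embedding endpoints.

```php
// Hypothetical helper, not part of the Gemini client: computes the cosine
// similarity of two equal-length embedding vectors. Values near 1.0 mean
// the vectors point in the same direction (similar meaning).
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

// Usage with two embedding responses (illustrative):
// $score = cosineSimilarity($responseA->embedding->values, $responseB->embedding->values);
```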
For example, two texts that share a similar subject matter or sentiment should have similar embeddings, which can be identified through mathematical comparison techniques such as cosine similarity.

Use the `text-embedding-004` model with either `embedContent` or `batchEmbedContents`:

```php
$response = $client
    ->embeddingModel('text-embedding-004')
    ->embedContent("Write a story about a magic backpack.");

print_r($response->embedding->values);
//[
//    [0] => 0.008624583
//    [1] => -0.030451821
//    [2] => -0.042496547
//    [3] => -0.029230341
//    [4] => 0.05486475
//    [5] => 0.006694871
//    [6] => 0.004025645
//    [7] => -0.007294857
//    [8] => 0.0057651913
//    ...
//]
```

```php
$response = $client
    ->embeddingModel('text-embedding-004')
    ->batchEmbedContents("Bu bir testtir", "Deneme123");

print_r($response->embeddings);
// [
// [0] => Gemini\Data\ContentEmbedding Object
// (
//     [values] => Array
//         (
//         [0] => 0.035855837
//         [1] => -0.049537655
//         [2] => -0.06834927
//         [3] => -0.010445258
//         [4] => 0.044641383
//         [5] => 0.031156342
//         [6] => -0.007810312
//         [7] => -0.0106866965
//         ...
//         ),
// ),
// [1] => Gemini\Data\ContentEmbedding Object
// (
//     [values] => Array
//         (
//         [0] => 0.035855837
//         [1] => -0.049537655
//         [2] => -0.06834927
//         [3] => -0.010445258
//         [4] => 0.044641383
//         [5] => 0.031156342
//         [6] => -0.007810312
//         [7] => -0.0106866965
//         ...
//         ),
// ),
// ]
```

### Models

We recommend checking the [Google documentation](https://ai.google.dev/gemini-api/docs/models) for the latest
supported models.

#### List Models
Use list models to see the available Gemini models programmatically:

- **pageSize (optional)**:
    The maximum number of `Models` to return (per page). <br>
    If unspecified, 50 models will be returned per page. This method returns at most 1000 models per page, even if you pass a larger `pageSize`.

- **nextPageToken (optional)**:
    A page token, received from a previous `models.list` call. <br>
    Provide the `pageToken` returned by one request as an argument to the next request to retrieve the next page.
    When paginating, all other parameters provided to `models.list` must match the call that provided the page token.

```php
$response = $client->models()->list(pageSize: 3, nextPageToken: 'ChFtb2RlbHMvZ2VtaW5pLXBybw==');

$response->models;
//[
//    [0] => Gemini\Data\Model Object
//        (
//            [name] => models/gemini-2.0-flash
//            [version] => 2.0
//            [displayName] => Gemini 2.0 Flash
//            [description] => Gemini 2.0 Flash
//            ...
//        )
//    [1] => Gemini\Data\Model Object
//        (
//            [name] => models/gemini-2.5-pro-preview-05-06
//            [version] => 2.5-preview-05-06
//            [displayName] => Gemini 2.5 Pro Preview 05-06
//            [description] => Preview release (May 6th, 2025) of Gemini 2.5 Pro
//            ...
//        )
//    [2] => Gemini\Data\Model Object
//        (
//            [name] => models/text-embedding-004
//            [version] => 004
//            [displayName] => Text Embedding 004
//            [description] => Obtain a distributed representation of a text.
//            ...
//        )
//]
```

```php
$response->nextPageToken // Chltb2RlbHMvZ2VtaW5pLTEuMC1wcm8tMDAx
```

#### Get Model
Get information about a model, such as version, display name,
input token limit, etc.

```php
$response = $client->models()->retrieve('models/gemini-2.5-pro-preview-05-06');

$response->model;
//Gemini\Data\Model Object
//(
//    [name] => models/gemini-2.5-pro-preview-05-06
//    [version] => 2.5-preview-05-06
//    [displayName] => Gemini 2.5 Pro Preview 05-06
//    [description] => Preview release (May 6th, 2025) of Gemini 2.5 Pro
//    ...
//)
```

## Troubleshooting

### Timeout

You may run into a timeout when sending requests to the API. The default timeout depends on the HTTP client used.

You can increase the timeout by configuring the HTTP client and passing it into the factory.

This example illustrates how to increase the timeout using Guzzle.

```php
Gemini::factory()
    ->withApiKey($apiKey)
    ->withHttpClient(new \GuzzleHttp\Client(['timeout' => $timeout]))
    ->make();
```

## Testing

The package provides a fake implementation of the `Gemini\Client` class that allows you to fake the API responses.

To test your code, swap the `Gemini\Client` class with the `Gemini\Testing\ClientFake` class in your test case.

The fake responses are returned in the order they are provided when creating the fake client.

All responses have a `fake()` method that allows you to easily create a response object by providing only the parameters relevant to your test case.

```php
use Gemini\Testing\ClientFake;
use Gemini\Responses\GenerativeModel\GenerateContentResponse;

$client = new ClientFake([
    GenerateContentResponse::fake([
        'candidates' => [
            [
                'content' => [
                    'parts' => [
                        [
                            'text' => 'success',
                        ],
                    ],
                ],
            ],
        ],
    ]),
]);

$result = $client->generativeModel(model:
'gemini-2.0-flash')->generateContent('test');

expect($result->text())->toBe('success');
```

In case of a streamed response, you can optionally provide a resource holding the fake response data.

```php
use Gemini\Testing\ClientFake;
use Gemini\Responses\GenerativeModel\GenerateContentResponse;

$client = new ClientFake([
    GenerateContentResponse::fakeStream(),
]);

$result = $client->generativeModel(model: 'gemini-2.0-flash')->streamGenerateContent('Hello');

expect($result->getIterator()->current())
    ->text()->toBe('In the bustling city of Aethelwood, where the cobblestone streets whispered');
```

After the requests have been sent, there are various methods to ensure that the expected requests were sent:

```php
use Gemini\Resources\GenerativeModel;
use Gemini\Resources\Models;

// assert a list models request was sent
$client->models()->assertSent(callback: function ($method) {
    return $method === 'list';
});
// or
$client->assertSent(resource: Models::class, callback: function ($method) {
    return $method === 'list';
});

$client->generativeModel(model: 'gemini-2.0-flash')->assertSent(function (string $method, array $parameters) {
    return $method === 'generateContent' &&
        $parameters[0] === 'Hello';
});
// or
$client->assertSent(resource: GenerativeModel::class, model: 'gemini-2.0-flash', callback: function (string $method, array $parameters) {
    return $method === 'generateContent' &&
        $parameters[0] === 'Hello';
});

// assert 2 generative model requests were sent
$client->assertSent(resource: GenerativeModel::class, model: 'gemini-2.0-flash', callback: 2);
// or
$client->generativeModel(model: 'gemini-2.0-flash')->assertSent(2);

// assert no generative model requests were sent
$client->assertNotSent(resource: GenerativeModel::class, model: 'gemini-2.0-flash');
// or
$client->generativeModel(model: 'gemini-2.0-flash')->assertNotSent();

// assert no requests were sent
$client->assertNothingSent();
```

To write tests expecting an API request to fail, you can provide a `Throwable` object as the response.

```php
use Gemini\Testing\ClientFake;
use Gemini\Exceptions\ErrorException;

$client = new ClientFake([
    new ErrorException([
        'message' => 'The model `gemini-basic` does not exist',
        'status' => 'INVALID_ARGUMENT',
        'code' => 400,
    ]),
]);

// the `ErrorException` will be thrown
$client->generativeModel(model: 'gemini-2.0-flash')->generateContent('test');
```
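In production code (as opposed to tests), transient API failures are often worth retrying. The helper below is a generic plain-PHP sketch, not part of this client, showing one way to wrap a call with exponential backoff; the `retryWithBackoff` name and its parameters are illustrative assumptions.

```php
// Hypothetical generic helper, not part of the Gemini client: retries a
// callable with exponential backoff when it throws, rethrowing the last
// exception after the final attempt.
function retryWithBackoff(callable $operation, int $maxAttempts = 3, int $baseDelayMs = 200)
{
    for ($attempt = 1; ; $attempt++) {
        try {
            return $operation();
        } catch (\Throwable $e) {
            if ($attempt >= $maxAttempts) {
                throw $e;
            }
            // Sleep 200ms, 400ms, 800ms, ... between attempts.
            usleep($baseDelayMs * (2 ** ($attempt - 1)) * 1000);
        }
    }
}

// Illustrative usage:
// $text = retryWithBackoff(
//     fn () => $client->generativeModel(model: 'gemini-2.0-flash')
//         ->generateContent('Hello')
//         ->text()
// );
```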