{"id":21179651,"url":"https://github.com/php-llm/llm-chain","last_synced_at":"2025-04-04T21:05:33.659Z","repository":{"id":229697776,"uuid":"777432944","full_name":"php-llm/llm-chain","owner":"php-llm","description":"PHP library for building LLM-based features and applications.","archived":false,"fork":false,"pushed_at":"2025-03-20T21:53:59.000Z","size":1466,"stargazers_count":67,"open_issues_count":14,"forks_count":11,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-03-28T20:05:33.011Z","etag":null,"topics":["llm","php"],"latest_commit_sha":null,"homepage":"","language":"PHP","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/php-llm.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-25T20:57:23.000Z","updated_at":"2025-03-27T16:37:33.000Z","dependencies_parsed_at":"2024-08-03T21:42:39.146Z","dependency_job_id":"1a003b74-9e1b-4cbf-850e-6be6f12a2e95","html_url":"https://github.com/php-llm/llm-chain","commit_stats":null,"previous_names":["php-llm/llm-chain"],"tags_count":35,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/php-llm%2Fllm-chain","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/php-llm%2Fllm-chain/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/php-llm%2Fllm-chain/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/php-llm%2Fllm-chain/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/php-llm","download
_url":"https://codeload.github.com/php-llm/llm-chain/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247249524,"owners_count":20908212,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["llm","php"],"created_at":"2024-11-20T17:32:53.856Z","updated_at":"2025-04-04T21:05:33.652Z","avatar_url":"https://github.com/php-llm.png","language":"PHP","readme":"# LLM Chain\n\nPHP library for building LLM-based features and applications.\n\nThis library is not stable yet, but still rather experimental. Feel free to try it out, give feedback, ask questions, contribute, or share your use cases.\nAbstractions, concepts, and interfaces are not final and potentially subject to change.\n\n## Requirements\n\n* PHP 8.2 or higher\n\n## Installation\n\nThe recommended way to install LLM Chain is through [Composer](http://getcomposer.org/):\n\n```bash\ncomposer require php-llm/llm-chain\n```\n\nWhen using the Symfony Framework, check out the integration bundle [php-llm/llm-chain-bundle](https://github.com/php-llm/llm-chain-bundle).\n\n## Examples\n\nSee the [examples](examples) folder to run example implementations using this library.\nDepending on the example, you need to export different environment variables\nfor API keys or deployment configurations, or create a `.env.local` based on the `.env` file.\n\nTo run all examples, use `make run-examples` or `php example`.\n\nFor a more sophisticated demo, see the [Symfony Demo Application](https://github.com/php-llm/symfony-demo).\n\n## Basic Concepts \u0026 Usage\n\n### Models \u0026 Platforms\n\nLLM 
Chain categorizes two main types of models: **Language Models** and **Embeddings Models**.\n\nLanguage Models, like GPT, Claude, and Llama, are the essential centerpiece of LLM applications,\nwhile Embeddings Models are supporting models that provide vector representations of text.\n\nThose models are provided by different **platforms**, like OpenAI, Azure, Google, Replicate, and others.\n\n#### Example Instantiation\n\n```php\nuse PhpLlm\\LlmChain\\Bridge\\OpenAI\\Embeddings;\nuse PhpLlm\\LlmChain\\Bridge\\OpenAI\\GPT;\nuse PhpLlm\\LlmChain\\Bridge\\OpenAI\\PlatformFactory;\n\n// Platform: OpenAI\n$platform = PlatformFactory::create($_ENV['OPENAI_API_KEY']);\n\n// Language Model: GPT (OpenAI)\n$llm = new GPT(GPT::GPT_4O_MINI);\n\n// Embeddings Model: Embeddings (OpenAI)\n$embeddings = new Embeddings();\n```\n\n#### Supported Models \u0026 Platforms\n\n* Language Models\n  * [OpenAI's GPT](https://platform.openai.com/docs/models/overview) with [OpenAI](https://platform.openai.com/docs/overview) and [Azure](https://learn.microsoft.com/azure/ai-services/openai/concepts/models) as Platform\n  * [Anthropic's Claude](https://www.anthropic.com/claude) with [Anthropic](https://www.anthropic.com/) as Platform\n  * [Meta's Llama](https://www.llama.com/) with [Ollama](https://ollama.com/) and [Replicate](https://replicate.com/) as Platform\n  * [Google's Gemini](https://gemini.google.com/) with [Google](https://ai.google.dev/) as Platform\n  * [Google's Gemini](https://gemini.google.com/) with [OpenRouter](https://www.openrouter.com/) as Platform\n  * [DeepSeek's R1](https://www.deepseek.com/) with [OpenRouter](https://www.openrouter.com/) as Platform\n* Embeddings Models\n  * [OpenAI's Text Embeddings](https://platform.openai.com/docs/guides/embeddings/embedding-models) with [OpenAI](https://platform.openai.com/docs/overview) and [Azure](https://learn.microsoft.com/azure/ai-services/openai/concepts/models) as Platform\n  * [Voyage's 
Embeddings](https://docs.voyageai.com/docs/embeddings) with [Voyage](https://www.voyageai.com/) as Platform\n* Other Models\n  * [OpenAI's Dall·E](https://platform.openai.com/docs/guides/image-generation) with [OpenAI](https://platform.openai.com/docs/overview) as Platform\n  * [OpenAI's Whisper](https://platform.openai.com/docs/guides/speech-to-text) with [OpenAI](https://platform.openai.com/docs/overview) and [Azure](https://learn.microsoft.com/azure/ai-services/openai/concepts/models) as Platform\n\nSee [issue #28](https://github.com/php-llm/llm-chain/issues/28) for planned support of other models and platforms.\n\n### Chain \u0026 Messages\n\nThe core feature of LLM Chain is to interact with language models via messages. This interaction is done by sending\na **MessageBag** to a **Chain**, which takes care of LLM invocation and response handling.\n\nMessages can be of different types, most importantly `UserMessage`, `SystemMessage`, or `AssistantMessage`, and can also\nhave different content types, like `Text`, `Image` or `Audio`.\n\n#### Example Chain call with messages\n\n```php\nuse PhpLlm\\LlmChain\\Chain;\nuse PhpLlm\\LlmChain\\Model\\Message\\Message;\nuse PhpLlm\\LlmChain\\Model\\Message\\MessageBag;\n\n// Platform \u0026 LLM instantiation\n\n$chain = new Chain($platform, $llm);\n$messages = new MessageBag(\n    Message::forSystem('You are a helpful chatbot answering questions about LLM Chain.'),\n    Message::ofUser('Hello, how are you?'),\n);\n$response = $chain-\u003ecall($messages);\n\necho $response-\u003egetContent(); // \"I'm fine, thank you. How can I help you today?\"\n```\n\nThe `MessageInterface` and `Content` interfaces help to customize this process if needed, e.g. for additional state handling.\n\n#### Options\n\nThe second parameter of the `call` method is an array of options, which can be used to configure the behavior of the\nchain, like `stream`, `output_structure`, or `response_format`. 
This behavior is a combination of features provided by\nthe underlying model and platform, or additional features provided by processors registered to the chain.\n\nOptions designed for additional features provided by LLM Chain can be found in this documentation. For model- and\nplatform-specific options, please refer to the respective documentation.\n\n```php\n// Chain and MessageBag instantiation\n\n$response = $chain-\u003ecall($messages, [\n    'temperature' =\u003e 0.5, // example option controlling the randomness of the response, e.g. GPT and Claude\n    'n' =\u003e 3,             // example option controlling the number of responses generated, e.g. GPT\n]);\n```\n\n#### Code Examples\n\n1. **Anthropic's Claude**: [chat-claude-anthropic.php](examples/chat-claude-anthropic.php)\n1. **OpenAI's GPT with Azure**: [chat-gpt-azure.php](examples/chat-gpt-azure.php)\n1. **OpenAI's GPT**: [chat-gpt-openai.php](examples/chat-gpt-openai.php)\n1. **OpenAI's o1**: [chat-o1-openai.php](examples/chat-o1-openai.php)\n1. **Meta's Llama with Ollama**: [chat-llama-ollama.php](examples/chat-llama-ollama.php)\n1. **Meta's Llama with Replicate**: [chat-llama-replicate.php](examples/chat-llama-replicate.php)\n1. 
**Google's Gemini with OpenRouter**: [chat-gemini-openrouter.php](examples/chat-gemini-openrouter.php)\n\n### Tools\n\nTo integrate LLMs with your application, LLM Chain supports [tool calling](https://platform.openai.com/docs/guides/function-calling) out of the box.\nTools are services that can be called by the LLM to provide additional features or process data.\n\nTool calling can be enabled by registering the processors in the chain:\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\ChainProcessor;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Toolbox;\n\n// Platform \u0026 LLM instantiation\n\n$yourTool = new YourTool();\n\n$toolbox = Toolbox::create($yourTool);\n$toolProcessor = new ChainProcessor($toolbox);\n\n$chain = new Chain($platform, $llm, inputProcessor: [$toolProcessor], outputProcessor: [$toolProcessor]);\n```\n\nCustom tools can basically be any class, but must be configured with the `#[AsTool]` attribute.\n\n```php\nuse PhpLlm\\LlmChain\\Toolbox\\Attribute\\AsTool;\n\n#[AsTool('company_name', 'Provides the name of your company')]\nfinal class CompanyName\n{\n    public function __invoke(): string\n    {\n        return 'ACME Corp.';\n    }\n}\n```\n\n#### Tool Return Value\n\nIn the end, the tool's response needs to be a string, but LLM Chain converts arrays, as well as objects that implement the\n`JsonSerializable` interface, to JSON strings for you. 
So you can return arrays or objects directly from your tool.\n\n#### Tool Methods\n\nYou can configure the method to be called by the LLM with the `#[AsTool]` attribute and have multiple tools per class:\n\n```php\nuse PhpLlm\\LlmChain\\Toolbox\\Attribute\\AsTool;\n\n#[AsTool(\n    name: 'weather_current',\n    description: 'get current weather for a location',\n    method: 'current',\n)]\n#[AsTool(\n    name: 'weather_forecast',\n    description: 'get weather forecast for a location',\n    method: 'forecast',\n)]\nfinal readonly class OpenMeteo\n{\n    public function current(float $latitude, float $longitude): array\n    {\n        // ...\n    }\n\n    public function forecast(float $latitude, float $longitude): array\n    {\n        // ...\n    }\n}\n```\n\n#### Tool Parameters\n\nLLM Chain generates a JSON Schema representation for all tools in the `Toolbox` based on the `#[AsTool]` attribute,\nmethod arguments, and param comments in the doc block. Additionally, JSON Schema supports validation rules, which are\npartially supported by LLMs like GPT.\n\nTo leverage this, configure the `#[With]` attribute on the method arguments of your tool:\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\JsonSchema\\Attribute\\With;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Attribute\\AsTool;\n\n#[AsTool('my_tool', 'Example tool with parameter requirements.')]\nfinal class MyTool\n{\n    /**\n     * @param string $name   The name of an object\n     * @param int    $number The number of an object\n     */\n    public function __invoke(\n        #[With(pattern: '/([a-z0-1]){5}/')]\n        string $name,\n        #[With(minimum: 0, maximum: 10)]\n        int $number,\n    ): string {\n        // ...\n    }\n}\n```\n\nSee attribute class [With](src/Chain/JsonSchema/Attribute/With.php) for all available options.\n\n\u003e [!NOTE]\n\u003e Please be aware that this is only converted into a JSON Schema for the LLM to respect, but not validated by LLM Chain.\n\n#### Third-Party Tools\n\nIn some 
cases you might want to use third-party tools, which are not part of your application. Adding the `#[AsTool]`\nattribute to the class is not possible in those cases, but you can explicitly register the tool in the `MemoryFactory`:\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Toolbox;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\MetadataFactory\\MemoryFactory;\nuse Symfony\\Component\\Clock\\Clock;\n\n$metadataFactory = (new MemoryFactory())\n    -\u003eaddTool(Clock::class, 'clock', 'Get the current date and time', 'now');\n$toolbox = new Toolbox($metadataFactory, [new Clock()]);\n```\n\n\u003e [!NOTE]\n\u003e Please be aware that not all return types are supported by the toolbox, so a decorator might still be needed.\n\nThis can be combined with the `ChainFactory` which enables you to use explicitly registered tools and `#[AsTool]` tagged\ntools in the same chain - which even enables you to overwrite the pre-existing configuration of a tool:\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Toolbox;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\MetadataFactory\\ChainFactory;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\MetadataFactory\\MemoryFactory;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\MetadataFactory\\ReflectionFactory;\n\n$reflectionFactory = new ReflectionFactory(); // Register tools with #[AsTool] attribute\n$metadataFactory = (new MemoryFactory())      // Register or overwrite tools explicitly\n    -\u003eaddTool(...);\n$toolbox = new Toolbox(new ChainFactory($metadataFactory, $reflectionFactory), [...]);\n```\n\n\u003e [!NOTE]\n\u003e The order of the factories in the `ChainFactory` matters, as the first factory has the highest priority.\n\n#### Chain in Chain 🤯\n\nSimilar to third-party tools, you can also use a chain as a tool in another chain. 
This can be useful to encapsulate\ncomplex logic or to reuse a chain in multiple places or hide sub-chains from the LLM.\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\MetadataFactory\\MemoryFactory;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Toolbox;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Tool\\Chain;\n\n// Chain was initialized before\n\n$chainTool = new Chain($chain);\n$metadataFactory = (new MemoryFactory())\n    -\u003eaddTool($chainTool, 'research_agent', 'Meaningful description for sub-chain');\n$toolbox = new Toolbox($metadataFactory, [$chainTool]);\n```\n\n#### Fault Tolerance\n\nTo gracefully handle errors that occur during tool calling, e.g. wrong tool names or runtime errors, you can use the\n`FaultTolerantToolbox` as a decorator for the `Toolbox`. It will catch the exceptions and return readable error messages\nto the LLM.\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\ChainProcessor;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\FaultTolerantToolbox;\n\n// Platform, LLM \u0026 Toolbox instantiation\n\n$toolbox = new FaultTolerantToolbox($innerToolbox);\n$toolProcessor = new ChainProcessor($toolbox);\n\n$chain = new Chain($platform, $llm, inputProcessor: [$toolProcessor], outputProcessor: [$toolProcessor]);\n```\n\n#### Tool Filtering\n\nTo limit the tools provided to the LLM in a specific chain call to a subset of the configured tools, you can use the\n`tools` option with a list of tool names:\n\n```php\n$this-\u003echain-\u003ecall($messages, ['tools' =\u003e ['tavily_search']]);\n```\n\n#### Tool Result Interception\n\nTo react to the result of a tool, you can implement an EventListener or EventSubscriber, that listens to the\n`ToolCallsExecuted` event. 
This event is dispatched after the `Toolbox` has executed all current tool calls and enables\nyou to skip the next LLM call by setting a response yourself:\n\n```php\n$eventDispatcher-\u003eaddListener(ToolCallsExecuted::class, function (ToolCallsExecuted $event): void {\n    foreach ($event-\u003etoolCallResults as $toolCallResult) {\n        if (str_starts_with($toolCallResult-\u003etoolCall-\u003ename, 'weather_')) {\n            $event-\u003eresponse = new StructuredResponse($toolCallResult-\u003eresult);\n        }\n    }\n});\n```\n\n#### Code Examples (with built-in tools)\n\n1. **Clock Tool**: [toolbox-clock.php](examples/toolbox-clock.php)\n1. **SerpAPI Tool**: [toolbox-serpapi.php](examples/toolbox-serpapi.php)\n1. **Tavily Tool**: [toolbox-tavily.php](examples/toolbox-tavily.php)\n1. **Weather Tool with Event Listener**: [toolbox-weather-event.php](examples/toolbox-weather-event.php)\n1. **Wikipedia Tool**: [toolbox-wikipedia.php](examples/toolbox-wikipedia.php)\n1. **YouTube Transcriber Tool**: [toolbox-youtube.php](examples/toolbox-youtube.php) (with streaming)\n\n### Document Embedding, Vector Stores \u0026 Similarity Search (RAG)\n\nLLM Chain supports document embedding and similarity search using vector stores like ChromaDB, Azure AI Search, MongoDB\nAtlas Search, or Pinecone.\n\nFor populating a vector store, LLM Chain provides the service `Embedder`, which requires an instance of an\n`EmbeddingsModel` and a `StoreInterface` implementation, and works with a collection of `Document` objects as input:\n\n```php\nuse PhpLlm\\LlmChain\\Embedder;\nuse PhpLlm\\LlmChain\\Bridge\\OpenAI\\Embeddings;\nuse PhpLlm\\LlmChain\\Bridge\\OpenAI\\PlatformFactory;\nuse PhpLlm\\LlmChain\\Bridge\\Pinecone\\Store;\nuse Probots\\Pinecone\\Pinecone;\nuse Symfony\\Component\\HttpClient\\HttpClient;\n\n$embedder = new Embedder(\n    PlatformFactory::create($_ENV['OPENAI_API_KEY']),\n    new Embeddings(),\n    new Store(Pinecone::client($_ENV['PINECONE_API_KEY'], 
$_ENV['PINECONE_HOST'])),\n);\n$embedder-\u003eembed($documents);\n```\n\nThe collection of `Document` instances is usually created from text input of your domain entities:\n\n```php\nuse PhpLlm\\LlmChain\\Document\\Metadata;\nuse PhpLlm\\LlmChain\\Document\\TextDocument;\n\nforeach ($entities as $entity) {\n    $documents[] = new TextDocument(\n        id: $entity-\u003egetId(),                       // UUID instance\n        content: $entity-\u003etoString(),               // Text representation of relevant data for embedding\n        metadata: new Metadata($entity-\u003etoArray()), // Array representation of entity to be stored additionally\n    );\n}\n```\n\n\u003e [!NOTE]\n\u003e Not all data needs to be stored in the vector store, but you could also hydrate the original data entry based\n\u003e on the ID or metadata after retrieval from the store.\n\nIn the end, the chain is used in combination with a retrieval tool on top of the vector store, e.g. the built-in\n`SimilaritySearch` tool provided by the library:\n\n```php\nuse PhpLlm\\LlmChain\\Chain;\nuse PhpLlm\\LlmChain\\Model\\Message\\Message;\nuse PhpLlm\\LlmChain\\Model\\Message\\MessageBag;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\ChainProcessor;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Tool\\SimilaritySearch;\nuse PhpLlm\\LlmChain\\Chain\\Toolbox\\Toolbox;\n\n// Initialize Platform \u0026 Models\n\n$similaritySearch = new SimilaritySearch($embeddings, $store);\n$toolbox = Toolbox::create($similaritySearch);\n$processor = new ChainProcessor($toolbox);\n$chain = new Chain($platform, $llm, [$processor], [$processor]);\n\n$messages = new MessageBag(\n    Message::forSystem(\u003c\u003c\u003cPROMPT\n        Please answer all user questions only using the similarity_search tool. Do not add information and if you cannot\n        find an answer, say so.\n        PROMPT),\n    Message::ofUser('...') // The user's question.\n);\n$response = $chain-\u003ecall($messages);\n```\n\n#### Code Examples\n\n1. 
**MongoDB Store**: [store-mongodb-similarity-search.php](examples/store-mongodb-similarity-search.php)\n1. **Pinecone Store**: [store-pinecone-similarity-search.php](examples/store-pinecone-similarity-search.php)\n\n#### Supported Stores\n\n* [ChromaDB](https://trychroma.com) (requires `codewithkyrian/chromadb-php` as additional dependency)\n* [Azure AI Search](https://azure.microsoft.com/en-us/products/ai-services/ai-search)\n* [MongoDB Atlas Search](https://mongodb.com/products/platform/atlas-vector-search) (requires `mongodb/mongodb` as additional dependency)\n* [Pinecone](https://pinecone.io) (requires `probots-io/pinecone-php` as additional dependency)\n\nSee [issue #28](https://github.com/php-llm/llm-chain/issues/28) for planned support of other models and platforms.\n\n## Advanced Usage \u0026 Features\n\n### Structured Output\n\nA typical use case of LLMs is to classify and extract data from unstructured sources, which is supported by some models\nthrough features like **Structured Output** or providing a **Response Format**.\n\n#### PHP Classes as Output\n\nLLM Chain supports that use case by abstracting away the hassle of defining and providing schemas to the LLM and converting\nthe response back to PHP objects.\n\nTo achieve this, a specific chain processor needs to be registered:\n\n```php\nuse PhpLlm\\LlmChain\\Chain;\nuse PhpLlm\\LlmChain\\Model\\Message\\Message;\nuse PhpLlm\\LlmChain\\Model\\Message\\MessageBag;\nuse PhpLlm\\LlmChain\\Chain\\StructuredOutput\\ChainProcessor;\nuse PhpLlm\\LlmChain\\Chain\\StructuredOutput\\ResponseFormatFactory;\nuse PhpLlm\\LlmChain\\Tests\\Chain\\StructuredOutput\\Data\\MathReasoning;\nuse Symfony\\Component\\Serializer\\Encoder\\JsonEncoder;\nuse Symfony\\Component\\Serializer\\Normalizer\\ObjectNormalizer;\nuse Symfony\\Component\\Serializer\\Serializer;\n\n// Initialize Platform and LLM\n\n$serializer = new Serializer([new ObjectNormalizer()], [new JsonEncoder()]);\n$processor = new ChainProcessor(new 
ResponseFormatFactory(), $serializer);\n$chain = new Chain($platform, $llm, [$processor], [$processor]);\n\n$messages = new MessageBag(\n    Message::forSystem('You are a helpful math tutor. Guide the user through the solution step by step.'),\n    Message::ofUser('how can I solve 8x + 7 = -23'),\n);\n$response = $chain-\u003ecall($messages, ['output_structure' =\u003e MathReasoning::class]);\n\ndump($response-\u003egetContent()); // returns an instance of `MathReasoning` class\n```\n\n#### Array Structures as Output\n\nPHP array structures are also supported as `response_format`, which also requires the chain processor mentioned above:\n\n```php\nuse PhpLlm\\LlmChain\\Model\\Message\\Message;\nuse PhpLlm\\LlmChain\\Model\\Message\\MessageBag;\n\n// Initialize Platform, LLM and Chain with processors and Clock tool\n\n$messages = new MessageBag(Message::ofUser('What date and time is it?'));\n$response = $chain-\u003ecall($messages, ['response_format' =\u003e [\n    'type' =\u003e 'json_schema',\n    'json_schema' =\u003e [\n        'name' =\u003e 'clock',\n        'strict' =\u003e true,\n        'schema' =\u003e [\n            'type' =\u003e 'object',\n            'properties' =\u003e [\n                'date' =\u003e ['type' =\u003e 'string', 'description' =\u003e 'The current date in the format YYYY-MM-DD.'],\n                'time' =\u003e ['type' =\u003e 'string', 'description' =\u003e 'The current time in the format HH:MM:SS.'],\n            ],\n            'required' =\u003e ['date', 'time'],\n            'additionalProperties' =\u003e false,\n        ],\n    ],\n]]);\n\ndump($response-\u003egetContent()); // returns an array\n```\n\n#### Code Examples\n\n1. **Structured Output** (PHP class): [structured-output-math.php](examples/structured-output-math.php)\n1. 
**Structured Output** (array): [structured-output-clock.php](examples/structured-output-clock.php)\n\n### Response Streaming\n\nSince LLMs usually generate a response word by word, most of them also support streaming the response using Server-Sent\nEvents. LLM Chain supports that by abstracting the conversion and returning a Generator as content of the response.\n\n```php\nuse PhpLlm\\LlmChain\\Chain;\nuse PhpLlm\\LlmChain\\Model\\Message\\Message;\nuse PhpLlm\\LlmChain\\Model\\Message\\MessageBag;\n\n// Initialize Platform and LLM\n\n$chain = new Chain($platform, $llm);\n$messages = new MessageBag(\n    Message::forSystem('You are a thoughtful philosopher.'),\n    Message::ofUser('What is the purpose of an ant?'),\n);\n$response = $chain-\u003ecall($messages, [\n    'stream' =\u003e true, // enable streaming of response text\n]);\n\nforeach ($response-\u003egetContent() as $word) {\n    echo $word;\n}\n```\n\nIn a terminal application, this generator can be used directly, but in a web app an additional layer like [Mercure](https://mercure.rocks)\nneeds to be used.\n\n#### Code Examples\n\n1. **Streaming Claude**: [stream-claude-anthropic.php](examples/stream-claude-anthropic.php)\n1. 
**Streaming GPT**: [stream-gpt-openai.php](examples/stream-gpt-openai.php)\n\n### Image Processing\n\nSome LLMs also support images as input, which LLM Chain supports as a `Content` type within the `UserMessage`:\n\n```php\nuse PhpLlm\\LlmChain\\Model\\Message\\Content\\Image;\nuse PhpLlm\\LlmChain\\Model\\Message\\Message;\nuse PhpLlm\\LlmChain\\Model\\Message\\MessageBag;\n\n// Initialize Platform, LLM \u0026 Chain\n\n$messages = new MessageBag(\n    Message::forSystem('You are an image analyzer bot that helps identify the content of images.'),\n    Message::ofUser(\n        'Describe the image as a comedian would do it.',\n        new Image(dirname(__DIR__).'/tests/Fixture/image.jpg'), // Path to an image file\n        new Image('https://foo.com/bar.png'), // URL to an image\n        new Image('data:image/png;base64,...'), // Data URL of an image\n    ),\n);\n$response = $chain-\u003ecall($messages);\n```\n\n#### Code Examples\n\n1. **Image Description**: [image-describer-binary.php](examples/image-describer-binary.php) (with binary file)\n1. **Image Description**: [image-describer-url.php](examples/image-describer-url.php) (with URL)\n\n### Audio Processing\n\nSimilar to images, some LLMs also support audio as input, which is just another `Content` type within the `UserMessage`:\n\n```php\nuse PhpLlm\\LlmChain\\Model\\Message\\Content\\Audio;\nuse PhpLlm\\LlmChain\\Model\\Message\\Message;\nuse PhpLlm\\LlmChain\\Model\\Message\\MessageBag;\n\n// Initialize Platform, LLM \u0026 Chain\n\n$messages = new MessageBag(\n    Message::ofUser(\n        'What is this recording about?',\n        Audio::fromFile(dirname(__DIR__).'/tests/Fixture/audio.mp3'), // Path to an audio file\n    ),\n);\n$response = $chain-\u003ecall($messages);\n```\n\n#### Code Examples\n\n1. 
**Audio Description**: [audio-describer.php](examples/audio-describer.php)\n\n### Embeddings\n\nCreating embeddings of words, sentences, or paragraphs is a typical use case around the interaction with LLMs, and\ntherefore LLM Chain implements an `EmbeddingsModel` interface with various models, see above.\n\nThe standalone usage results in `Vector` instances:\n\n```php\nuse PhpLlm\\LlmChain\\Bridge\\OpenAI\\Embeddings;\n\n// Initialize Platform\n\n$embeddings = new Embeddings($platform, Embeddings::TEXT_3_SMALL);\n\n$vectors = $platform-\u003erequest($embeddings, $textInput)-\u003egetContent();\n\ndump($vectors[0]-\u003egetData()); // Array of float values\n```\n\n#### Code Examples\n\n1. **OpenAI's Embeddings**: [embeddings-openai.php](examples/embeddings-openai.php)\n1. **Voyage's Embeddings**: [embeddings-voyage.php](examples/embeddings-voyage.php)\n\n### Parallel Platform Calls\n\nPlatform supports multiple model calls in parallel, which can be useful to speed up processing:\n\n```php\n// Initialize Platform \u0026 Model\n\nforeach ($inputs as $input) {\n    $responses[] = $platform-\u003erequest($model, $input);\n}\n\nforeach ($responses as $response) {\n    echo $response-\u003egetContent().PHP_EOL;\n}\n```\n\n\u003e [!NOTE]\n\u003e This requires cURL and the `ext-curl` extension to be installed.\n\n#### Code Examples\n\n1. **Parallel GPT Calls**: [parallel-chat-gpt.php](examples/parallel-chat-gpt.php)\n1. **Parallel Embeddings Calls**: [parallel-embeddings.php](examples/parallel-embeddings.php)\n\n\u003e [!NOTE]\n\u003e Please be aware that some embeddings models also support batch processing out of the box.\n\n### Input \u0026 Output Processing\n\nThe behavior of the Chain is extendable with services that implement the `InputProcessor` and/or `OutputProcessor`\ninterfaces. 
They are provided while instantiating the Chain instance:\n\n```php\nuse PhpLlm\\LlmChain\\Chain;\n\n// Initialize Platform, LLM and processors\n\n$chain = new Chain($platform, $llm, $inputProcessors, $outputProcessors);\n```\n\n#### InputProcessor\n\n`InputProcessor` instances are called in the chain before the `MessageBag` and the `$options` array are handed over to the LLM, and are\nable to mutate both via the `Input` instance provided.\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\Input;\nuse PhpLlm\\LlmChain\\Chain\\InputProcessor;\nuse PhpLlm\\LlmChain\\Model\\Message\\AssistantMessage;\n\nfinal class MyProcessor implements InputProcessor\n{\n    public function processInput(Input $input): void\n    {\n        // mutate options\n        $options = $input-\u003egetOptions();\n        $options['foo'] = 'bar';\n        $input-\u003esetOptions($options);\n\n        // mutate MessageBag\n        $input-\u003emessages-\u003eappend(new AssistantMessage(sprintf('Please answer using the locale %s', $this-\u003elocale)));\n    }\n}\n```\n\n#### OutputProcessor\n\n`OutputProcessor` instances are called after the LLM provided a response and can - on top of options and messages -\nmutate or replace the given response:\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\Output;\nuse PhpLlm\\LlmChain\\Chain\\OutputProcessor;\nuse PhpLlm\\LlmChain\\Model\\Message\\AssistantMessage;\n\nfinal class MyProcessor implements OutputProcessor\n{\n    public function processOutput(Output $output): void\n    {\n        // mutate response\n        if (str_contains($output-\u003eresponse-\u003egetContent(), self::STOP_WORD)) {\n            $output-\u003eresponse = new TextResponse('Sorry, we were unable to find relevant information.');\n        }\n    }\n}\n```\n\n#### Chain Awareness\n\nBoth `Input` and `Output` instances provide access to the LLM used by the Chain, but the chain itself is only\nprovided if the processor implements the `ChainAwareProcessor` interface, which can be combined with 
using the\n`ChainAwareTrait`:\n\n```php\nuse PhpLlm\\LlmChain\\Chain\\ChainAwareProcessor;\nuse PhpLlm\\LlmChain\\Chain\\ChainAwareTrait;\nuse PhpLlm\\LlmChain\\Chain\\Output;\nuse PhpLlm\\LlmChain\\Chain\\OutputProcessor;\nuse PhpLlm\\LlmChain\\Model\\Message\\AssistantMessage;\n\nfinal class MyProcessor implements OutputProcessor, ChainAwareProcessor\n{\n    use ChainAwareTrait;\n\n    public function processOutput(Output $output): void\n    {\n        // additional chain interaction\n        $response = $this-\u003echain-\u003ecall(...);\n    }\n}\n```\n\n## Contributions\n\nContributions are always welcome, so feel free to join the development of this library. To get started, please read the\n[contribution guidelines](CONTRIBUTING.md).\n\n### Current Contributors\n\n[![LLM Chain Contributors](https://contrib.rocks/image?repo=php-llm/llm-chain 'LLM Chain Contributors')](https://github.com/php-llm/llm-chain/graphs/contributors)\n\nMade with [contrib.rocks](https://contrib.rocks).\n\n### Fixture Licenses\n\nFor testing multi-modal features, the repository contains binary media content, with the following owners and licenses:\n\n* `tests/Fixture/image.jpg`: Chris F., Creative Commons, see [pexels.com](https://www.pexels.com/photo/blauer-und-gruner-elefant-mit-licht-1680755/)\n* `tests/Fixture/audio.mp3`: davidbain, Creative Commons, see [freesound.org](https://freesound.org/people/davidbain/sounds/136777/)\n","funding_links":[],"categories":["PHP","LLM Frameworks"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fphp-llm%2Fllm-chain","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fphp-llm%2Fllm-chain","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fphp-llm%2Fllm-chain/lists"}