## About

llm.rb is a zero-dependency Ruby toolkit for Large Language Models with
support for OpenAI, Gemini, Anthropic, Ollama, and LlamaCpp. It’s fast, simple
and composable – with full support for chat, tool calling, audio,
images, files, and JSON Schema generation.

## Features

#### General
- ✅ A single unified interface for multiple providers
- 📦 Zero dependencies outside Ruby's standard library
- 🚀 Optimized for performance and low memory usage
- 🔌 Retrieve models dynamically for introspection and selection

#### Chat, Agents
- 🧠 Stateless and stateful chat via the completions and responses APIs
- 🤖 Tool calling and function execution
- 🗂️ JSON Schema support for structured, validated responses

#### Media
- 🗣️ Text-to-speech, transcription, and translation
- 🖼️ Image generation, editing, and variation support
- 📎 File uploads and prompt-aware file interaction
- 💡 Multimodal prompts (text, images, PDFs, URLs, files)

#### Embeddings
- 🧮 Text embeddings and vector support

## Demos
<details>
  <summary><b>1. Tools: "system" function</b></summary>
  <img src="share/llm-shell/examples/toolcalls.gif">
</details>

<details>
  <summary><b>2. Files: import at boot time</b></summary>
  <img src="share/llm-shell/examples/files-boottime.gif">
</details>

<details>
  <summary><b>3. Files: import at runtime</b></summary>
  <img src="share/llm-shell/examples/files-runtime.gif">
</details>

## Examples

### Providers

#### LLM::Provider

All providers inherit from [LLM::Provider](https://0x1eef.github.io/x/llm.rb/LLM/Provider.html) &ndash;
they share a common interface and set of functionality. Each provider can be instantiated
with an API key (if required) and an optional set of configuration options via
[the singleton methods of LLM](https://0x1eef.github.io/x/llm.rb/LLM.html). For example:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: "yourapikey")
llm = LLM.gemini(key: "yourapikey")
llm = LLM.anthropic(key: "yourapikey")
llm = LLM.ollama(key: nil)
llm = LLM.llamacpp(key: nil)
llm = LLM.voyageai(key: "yourapikey")
```

### Conversations

#### Completions

> This example uses the stateless chat completions API that all
> providers support. A similar example for OpenAI's stateful
> responses API is available in the [docs/](docs/OPENAI.md)
> directory.

The following example enables lazy mode for an
[LLM::Chat](https://0x1eef.github.io/x/llm.rb/LLM/Chat.html)
object by entering into a conversation where messages are buffered and
sent to the provider only when necessary.
Both lazy and non-lazy conversations
maintain a message thread that can be reused as context throughout a conversation.
The example captures the spirit of llm.rb by demonstrating how objects cooperate
through composition, and it uses the stateless chat completions API that
all LLM providers support:

```ruby
#!/usr/bin/env ruby
require "llm"

llm  = LLM.openai(key: ENV["KEY"])
bot  = LLM::Chat.new(llm).lazy
msgs = bot.chat do |prompt|
  prompt.system File.read("./share/llm/prompts/system.txt")
  prompt.user "Tell me the answer to 5 + 15"
  prompt.user "Tell me the answer to (5 + 15) * 2"
  prompt.user "Tell me the answer to ((5 + 15) * 2) / 10"
end

# At this point, we execute a single request
msgs.each { print "[#{_1.role}] ", _1.content, "\n" }

##
# [system] You are my math assistant.
#          I will provide you with (simple) equations.
#          You will provide answers in the format "The answer to <equation> is <answer>".
#          I will provide you a set of messages. Reply to all of them.
#          A message is considered unanswered if there is no corresponding assistant response.
#
# [user] Tell me the answer to 5 + 15
# [user] Tell me the answer to (5 + 15) * 2
# [user] Tell me the answer to ((5 + 15) * 2) / 10
#
# [assistant] The answer to 5 + 15 is 20.
#             The answer to (5 + 15) * 2 is 40.
#             The answer to ((5 + 15) * 2) / 10 is 4.
```

### Schema

#### Structured

All LLM providers except Anthropic allow a client to describe, with a JSON Schema,
the structure of the response an LLM emits. The schema describes the JSON object
(or value) the LLM should emit, and the LLM will abide by it.
See also: [JSON Schema website](https://json-schema.org/overview/what-is-jsonschema).
We will use the
[llmrb/json-schema](https://github.com/llmrb/json-schema)
library for the sake of the examples &ndash; the interface is designed so you
could drop in any other library in its place:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
schema = llm.schema.object({fruit: llm.schema.string.enum("Apple", "Orange", "Pineapple")})
bot = LLM::Chat.new(llm, schema:).lazy
bot.chat "Your favorite fruit is Pineapple", role: :system
bot.chat "What fruit is your favorite?", role: :user
bot.messages.find(&:assistant?).content! # => {fruit: "Pineapple"}

schema = llm.schema.object({answer: llm.schema.integer.required})
bot = LLM::Chat.new(llm, schema:).lazy
bot.chat "Tell me the answer to ((5 + 5) / 2)", role: :user
bot.messages.find(&:assistant?).content! # => {answer: 5}

schema = llm.schema.object({probability: llm.schema.number.required})
bot = LLM::Chat.new(llm, schema:).lazy
bot.chat "Does the earth orbit the sun?", role: :user
bot.messages.find(&:assistant?).content! # => {probability: 1}
```

### Tools

#### Functions

The OpenAI, Anthropic, Gemini, and Ollama providers support a feature known as
tool calling, and although it is a little complex to understand at first,
it is a powerful building block for agents. The following example demonstrates how we
can define a local function (which happens to be a tool), and OpenAI can
then detect when we should call the function.

The
[LLM::Chat#functions](https://0x1eef.github.io/x/llm.rb/LLM/Chat.html#functions-instance_method)
method returns an array of functions that can be called after sending a message, and
it will only be populated if the LLM detects a function should be called. Each function
corresponds to an element in the "tools" array.
The array is emptied after a function call,
and potentially repopulated on the next message.

The following example defines an agent that can run system commands based on natural language.
It is only intended as a fun demo of tool calling &ndash; it is not recommended to run
arbitrary commands from an LLM without sanitizing the input first :) Without further ado:

```ruby
#!/usr/bin/env ruby
require "llm"

llm  = LLM.openai(key: ENV["KEY"])
tool = LLM.function(:system) do |fn|
  fn.description "Run a shell command"
  fn.params do |schema|
    schema.object(command: schema.string.required)
  end
  fn.define do |params|
    ro, wo = IO.pipe
    re, we = IO.pipe
    Process.wait Process.spawn(params.command, out: wo, err: we)
    [wo, we].each(&:close)
    {stderr: re.read, stdout: ro.read}
  end
end

bot = LLM::Chat.new(llm, tools: [tool]).lazy
bot.chat "Your task is to run shell commands via a tool.", role: :system

bot.chat "What is the current date?", role: :user
bot.chat bot.functions.map(&:call) # report return value to the LLM

bot.chat "What operating system am I running? (short version please!)", role: :user
bot.chat bot.functions.map(&:call) # report return value to the LLM

##
# {stderr: "", stdout: "Thu May  1 10:01:02 UTC 2025"}
# {stderr: "", stdout: "FreeBSD"}
```

### Audio

#### Speech

Some, but not all, providers implement audio generation capabilities that
can create speech from text, transcribe audio to text, or translate
audio to text (usually English). The following example uses the OpenAI provider
to create an audio file from a text prompt.
The audio is then moved to
`${HOME}/hello.mp3` as the final step:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
res = llm.audio.create_speech(input: "Hello world")
IO.copy_stream res.audio, File.join(Dir.home, "hello.mp3")
```

#### Transcribe

The following example transcribes an audio file to text. The audio file
(`${HOME}/hello.mp3`) was created in the previous example,
and the result is printed to the console. The example uses the OpenAI
provider to transcribe the audio file:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
res = llm.audio.create_transcription(
  file: File.join(Dir.home, "hello.mp3")
)
print res.text, "\n" # => "Hello world."
```

#### Translate

The following example translates an audio file to text. In this example
the audio file (`${HOME}/bomdia.mp3`) is in Portuguese,
and it is translated to English. The example uses the OpenAI provider,
and at the time of writing, it can only translate to English:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
res = llm.audio.create_translation(
  file: File.join(Dir.home, "bomdia.mp3")
)
print res.text, "\n" # => "Good morning."
```

### Images

#### Create

Some, but not all, LLM providers implement image generation capabilities that
can create new images from a prompt, or edit an existing image with a
prompt. The following example uses the OpenAI provider to create an
image of a dog on a rocket to the moon.
The image is then moved to
`${HOME}/dogonrocket.png` as the final step:

```ruby
#!/usr/bin/env ruby
require "llm"
require "open-uri"
require "fileutils"

llm = LLM.openai(key: ENV["KEY"])
res = llm.images.create(prompt: "a dog on a rocket to the moon")
res.urls.each do |url|
  FileUtils.mv OpenURI.open_uri(url).path,
               File.join(Dir.home, "dogonrocket.png")
end
```

#### Edit

The following example is focused on editing a local image with the aid
of a prompt. The image (`/images/cat.png`) is returned to us with the cat
now wearing a hat. The image is then moved to `${HOME}/catwithhat.png` as
the final step:

```ruby
#!/usr/bin/env ruby
require "llm"
require "open-uri"
require "fileutils"

llm = LLM.openai(key: ENV["KEY"])
res = llm.images.edit(
  image: "/images/cat.png",
  prompt: "a cat with a hat",
)
res.urls.each do |url|
  FileUtils.mv OpenURI.open_uri(url).path,
               File.join(Dir.home, "catwithhat.png")
end
```

#### Variations

The following example is focused on creating variations of a local image.
The image (`/images/cat.png`) is returned to us with five different variations.
The images are then moved to `${HOME}/catvariation0.png`, `${HOME}/catvariation1.png`
and so on as the final step:

```ruby
#!/usr/bin/env ruby
require "llm"
require "open-uri"
require "fileutils"

llm = LLM.openai(key: ENV["KEY"])
res = llm.images.create_variation(
  image: "/images/cat.png",
  n: 5
)
res.urls.each.with_index do |url, index|
  FileUtils.mv OpenURI.open_uri(url).path,
               File.join(Dir.home, "catvariation#{index}.png")
end
```

### Files

#### Create

Most LLM providers provide a Files API where you can upload files
that can be referenced from a prompt, and llm.rb has first-class support
for this feature. The following example uses the OpenAI provider to describe
the contents of a PDF file after it has been uploaded.
The file (an instance
of [LLM::Response::File](https://0x1eef.github.io/x/llm.rb/LLM/Response/File.html))
is passed directly to the chat method, and generally any object a prompt supports
can be given to the chat method:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Chat.new(llm).lazy
file = llm.files.create(file: "/documents/openbsd_is_awesome.pdf")
bot.chat(file)
bot.chat("What is this file about?")
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }

##
# [assistant] This file is about OpenBSD, a free and open-source Unix-like operating system
#             based on the Berkeley Software Distribution (BSD). It is known for its
#             emphasis on security, code correctness, and code simplicity. The file
#             contains information about the features, installation, and usage of OpenBSD.
```

### Prompts

#### Multimodal

Generally all providers accept text prompts, but some providers can
also understand URLs and various file types (e.g. images, audio, video).
The llm.rb approach to multimodal prompts is to let you pass `URI`
objects to describe links, `LLM::File` / `LLM::Response::File` objects
to describe files, `String` objects to describe text blobs, or an array
of the aforementioned objects to describe multiple objects in a single
prompt.
Each object is a first-class citizen that can be passed directly
to a prompt:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Chat.new(llm).lazy

bot.chat [URI("https://example.com/path/to/image.png"), "Describe the image in the link"]
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }

file = llm.files.create(file: "/documents/openbsd_is_awesome.pdf")
bot.chat [file, "What is this file about?"]
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }

bot.chat [LLM.File("/images/puffy.png"), "What is this image about?"]
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }

bot.chat [LLM.File("/images/beastie.png"), "What is this image about?"]
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
```

### Embeddings

#### Text

The
[`LLM::Provider#embed`](https://0x1eef.github.io/x/llm.rb/LLM/Provider.html#embed-instance_method)
method generates a vector representation of one or more chunks
of text. Embeddings capture the semantic meaning of text &ndash;
a common use-case for them is to store chunks of text in a
vector database, and then to query the database for *semantically
similar* text.
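The *semantically similar* lookup is usually a cosine-similarity ranking of stored embedding vectors against the query's embedding. As a minimal sketch in plain Ruby &ndash; no vector database involved, and the short vectors here are illustrative stand-ins for real `llm.embed` output:

```ruby
#!/usr/bin/env ruby

##
# Cosine similarity between two embedding vectors:
# dot(a, b) / (|a| * |b|)
def cosine_similarity(a, b)
  dot  = a.zip(b).sum { |x, y| x * y }
  norm = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (norm.call(a) * norm.call(b))
end

##
# A toy "database". In practice each vector would come from llm.embed
# and be stored alongside its chunk of text.
db = {
  "programming is fun" => [0.9, 0.1, 0.0],
  "sushi is art"       => [0.0, 0.2, 0.9]
}
query = [0.8, 0.2, 0.1]

##
# Rank the stored chunks by similarity to the query embedding
best = db.max_by { |_text, vec| cosine_similarity(query, vec) }
print best.first, "\n" # => "programming is fun"
```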
These chunks of similar text can then support the
generation of a prompt that is used to query a large language model,
which will go on to generate a response:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
res = llm.embed(["programming is fun", "ruby is a programming language", "sushi is art"])
print res.class, "\n"
print res.embeddings.size, "\n"
print res.embeddings[0].size, "\n"

##
# LLM::Response::Embedding
# 3
# 1536
```

### Models

#### List

Almost all LLM providers expose a models endpoint that allows a client to
query the list of models that are available to use. The list is dynamic,
maintained by LLM providers, and it is independent of a specific llm.rb release.
[LLM::Model](https://0x1eef.github.io/x/llm.rb/LLM/Model.html)
objects can be used instead of a string that describes a model name (although
either works). Let's take a look at an example:

```ruby
#!/usr/bin/env ruby
require "llm"

##
# List all models
llm = LLM.openai(key: ENV["KEY"])
llm.models.all.each do |model|
  print "model: ", model.id, "\n"
end

##
# Select a model
model = llm.models.all.find { |m| m.id == "gpt-3.5-turbo" }
bot = LLM::Chat.new(llm, model:)
bot.chat "Hello #{model.id} :)"
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
```

## Documentation

### API

The README tries to provide a high-level overview of the library. For everything
else there's the API reference. It covers classes and methods that the README glances
over or doesn't cover at all. The API reference is available at
[0x1eef.github.io/x/llm.rb](https://0x1eef.github.io/x/llm.rb).

### Guides

The [docs/](docs/) directory contains some additional documentation that
didn't quite make it into the README.
It covers the design guidelines that
the library follows, some strategies for memory management, and other
provider-specific features.

## See also

**[llmrb/llm-shell](https://github.com/llmrb/llm-shell)**

An extensible, developer-oriented command-line utility that is powered by
llm.rb and serves as a demonstration of the library's capabilities. The
[demo](https://github.com/llmrb/llm-shell#demos) section has a number of GIF
previews that might be especially interesting.

## Install

llm.rb can be installed via rubygems.org:

    gem install llm.rb

## License

[BSD Zero Clause](https://choosealicense.com/licenses/0bsd/)
<br>
See [LICENSE](./LICENSE)