{"id":13593953,"url":"https://github.com/drnic/groq-ruby","last_synced_at":"2025-04-05T20:06:15.938Z","repository":{"id":234809353,"uuid":"789341984","full_name":"drnic/groq-ruby","owner":"drnic","description":"Groq Cloud runs LLM models fast and cheap. This is a convenience client library for Ruby.","archived":false,"fork":false,"pushed_at":"2024-07-30T21:42:07.000Z","size":269,"stargazers_count":117,"open_issues_count":3,"forks_count":7,"subscribers_count":2,"default_branch":"develop","last_synced_at":"2025-03-29T19:05:18.827Z","etag":null,"topics":["ai","groq","llm","rubygem"],"latest_commit_sha":null,"homepage":"","language":"Ruby","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/drnic.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-04-20T09:14:01.000Z","updated_at":"2025-03-15T00:31:21.000Z","dependencies_parsed_at":"2024-10-30T00:25:10.341Z","dependency_job_id":null,"html_url":"https://github.com/drnic/groq-ruby","commit_stats":null,"previous_names":["drnic/groq-ruby"],"tags_count":4,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/drnic%2Fgroq-ruby","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/drnic%2Fgroq-ruby/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/drnic%2Fgroq-ruby/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/drnic%2Fgroq-ruby/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/drnic","download_url":"https://codeload.github.com/drnic/groq-ruby/tar.gz/refs/heads/develop","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247393569,"owners_count":20931812,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","groq","llm","rubygem"],"created_at":"2024-08-01T16:01:26.831Z","updated_at":"2025-04-05T20:06:15.917Z","avatar_url":"https://github.com/drnic.png","language":"Ruby","readme":"# Groq\n\nGroq Cloud runs LLM models fast and cheap. Llama 3.1, Mixtrel, Gemma, and more at hundreds of tokens per second, at cents per million tokens.\n\n[![speed-pricing](docs/images/groq-speed-price-20240421.png)](https://wow.groq.com/)\n\nSpeed and pricing at 2024-04-21. Also see their [changelog](https://console.groq.com/docs/changelog) for new models and features.\n\n## Groq Cloud API\n\nYou can interact with their API using any Ruby HTTP library by following their documentation at \u003chttps://console.groq.com/docs/quickstart\u003e. 
Or you can use this convenience RubyGem with some nice helpers to get you started.

```ruby
@client = Groq::Client.new
@client.chat("Hello, world!")
=> {"role"=>"assistant", "content"=>"Hello there! It's great to meet you!"}

include Groq::Helpers
@client.chat([
  User("Hi"),
  Assistant("Hello back. Ask me anything. I'll reply with 'cat'"),
  User("Favourite food?")
])
# => {"role"=>"assistant", "content"=>"Um... CAT"}
# => {"role"=>"assistant", "content"=>"Not a cat! It's a pizza!"}
# => {"role"=>"assistant", "content"=>"Pizza"}
# => {"role"=>"assistant", "content"=>"Cat"}

@client.chat([
  System("I am an obedient AI"),
  U("Hi"),
  A("Hello back. Ask me anything. I'll reply with 'cat'"),
  U("Favourite food?")
])
# => {"role"=>"assistant", "content"=>"Cat"}
# => {"role"=>"assistant", "content"=>"cat"}
# => {"role"=>"assistant", "content"=>"Cat"}
```

JSON mode:

```ruby
response = @client.chat([
  S("Reply with JSON. Use {\"number\": 7} for the answer."),
  U("What's 3+4?")
], json: true)
# => {"role"=>"assistant", "content"=>"{\"number\": 7}"}

JSON.parse(response["content"])
# => {"number"=>7}
```

## Installation

Install the gem and add to the application's Gemfile by executing:

```plain
bundle add groq
```

If bundler is not being used to manage dependencies, install the gem by executing:

```plain
gem install groq
```

## Usage

- Get your API key from [console.groq.com/keys](https://console.groq.com/keys)
- Place it in the env var `GROQ_API_KEY`, or explicitly pass it into the configuration below.
- Use the `Groq::Client` to interact with Groq and your favourite model.

```ruby
client = Groq::Client.new # uses ENV["GROQ_API_KEY"] and "llama-3.1-8b-instant"
client = Groq::Client.new(api_key: "...", model_id: "llama-3.1-8b-instant")

Groq.configure do |config|
  config.api_key = "..."
  config.model_id = "llama-3.1-70b-versatile"
end
client = Groq::Client.new
```

In a Rails application, you can generate a `config/initializers/groq.rb` file with:

```plain
rails g groq:install
```

There is a simple chat function to send messages to a model:

```ruby
# either pass a single message and get a single response
client.chat("Hello, world!")
=> {"role"=>"assistant", "content"=>"Hello there! It's great to meet you!"}

# or pass in a messages array containing multiple messages between user and assistant
client.chat([
    {role: "user", content: "What's the next day after Wednesday?"},
    {role: "assistant", content: "The next day after Wednesday is Thursday."},
    {role: "user", content: "What's the next day after that?"}
])
# => {"role" => "assistant", "content" => "The next day after Thursday is Friday."}
```
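Because `chat()` returns a plain `{role:, content:}` hash, you can push the response straight back onto the messages array to continue the conversation. A small sketch (the replies shown are illustrative):

```ruby
messages = [{role: "user", content: "Pick a number between 1 and 10."}]
reply = client.chat(messages)
# => {"role"=>"assistant", "content"=>"7"}

# append the assistant's reply, then the follow-up user message
messages << reply << {role: "user", content: "Now double it."}
client.chat(messages)
# => {"role"=>"assistant", "content"=>"14"}
```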
### Interactive console (IRb)

```plain
bin/console
```

This repository has a `bin/console` script to start an interactive console to play with the Groq API. The `@client` variable is set up using the `$GROQ_API_KEY` environment variable, and the `U`, `A`, `T` helpers are already included.

```ruby
@client.chat("Hello, world!")
{"role"=>"assistant",
 "content"=>"Hello there! It's great to meet you! Is there something you'd like to talk about or ask? I'm here to listen and help if I can!"}
```

The remaining examples below will use the `@client` variable to allow you to copy+paste into `bin/console`.

### Message helpers

We also have some handy `U`, `A`, `S`, and `T` methods to produce the `{role:, content:}` hashes:

```ruby
include Groq::Helpers
@client.chat([
  S("I am an obedient AI"),
  U("Hi"),
  A("Hello back. Ask me anything. I'll reply with 'cat'"),
  U("Favourite food?")
])
# => {"role"=>"assistant", "content"=>"Cat"}
```

The `T()` helper provides function/tool responses:

```ruby
T("25 degrees celsius", tool_call_id: "call_b790", name: "get_weather_report")
# => {"role"=>"function", "tool_call_id"=>"call_b790", "name"=>"get_weather_report", "content"=>"25 degrees celsius"}
```

There are also aliases for each helper function:

- `U(content)` is also `User(content)`
- `A(content)` is also `Assistant(content)`
- `S(content)` is also `System(content)`
- `T(content, ...)` is also `Tool`, `ToolReply`, `Function`, `F`
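The aliases build identical hashes, so use whichever reads best. A quick check you can paste into `bin/console`:

```ruby
include Groq::Helpers
U("Hi") == User("Hi")             # => true
S("Be brief") == System("Be brief") # => true
```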
### Specifying an LLM model

At the time of writing, the Groq Cloud service supports a limited number of models. They've suggested they'll allow uploading custom models in the future.

To get the list of known model IDs:

```ruby
Groq::Model.load_models(client:)
=> {"object"=>"list", "data"=>
  [{"id"=>"gemma2-9b-it", "object"=>"model", "created"=>1693721698, "owned_by"=>"Google", "active"=>true, "context_window"=>8192, "public_apps"=>nil},
   {"id"=>"gemma-7b-it", "object"=>"model", "created"=>1693721698, "owned_by"=>"Google", "active"=>true, "context_window"=>8192, "public_apps"=>nil},
   {"id"=>"llama-3.1-70b-versatile", "object"=>"model", "created"=>1693721698, "owned_by"=>"Meta", "active"=>true, "context_window"=>131072, "public_apps"=>nil},
   {"id"=>"llama-3.1-8b-instant", "object"=>"model", "created"=>1693721698, "owned_by"=>"Meta", "active"=>true, "context_window"=>131072, "public_apps"=>nil},
   ...
```
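The result is a plain hash, so ordinary Ruby works for slicing it. For example, a hypothetical filter for large-context models, using the field names shown above:

```ruby
models = Groq::Model.load_models(client:)
models["data"]
  .select { |m| m["context_window"] >= 32_768 } # keep only large-context models
  .map { |m| m["id"] }
# => ["llama-3.1-70b-versatile", "llama-3.1-8b-instant", ...]
```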
As above, you can specify the default model to use for all `chat()` calls:

```ruby
client = Groq::Client.new(model_id: "llama-3.1-70b-versatile")
# or
Groq.configure do |config|
  config.model_id = "llama-3.1-70b-versatile"
end
```

You can also specify the model within the `chat()` call:

```ruby
@client.chat("Hello, world!", model_id: "llama-3.1-70b-versatile")
```

To see all known models reply:

```ruby
puts "User message: Hello, world!"
Groq::Model.model_ids.each do |model_id|
  puts "Assistant reply with model #{model_id}:"
  p @client.chat("Hello, world!", model_id: model_id)
end
```

The output might look similar to:

```plain
> User message: Hello, world!
Assistant reply with model llama-3.1-8b-instant:
Assistant reply with model llama-3.1-70b-versatile:
{"role"=>"assistant", "content"=>"The classic \"Hello, world!\" It's great to see you here! Is there something I can help you with, or would you like to just chat?"}
Assistant reply with model llama2-70b-4096:
{"role"=>"assistant", "content"=>"Hello, world!"}
Assistant reply with model mixtral-8x7b-32768:
{"role"=>"assistant", "content"=>"Hello! It's nice to meet you. Is there something specific you would like to know or talk about? I'm here to help answer any questions you have to the best of my ability. I can provide information on a wide variety of topics, so feel free to ask me anything. I'm here to assist you."}
Assistant reply with model gemma-7b-it:
{"role"=>"assistant", "content"=>"Hello to you too! 👋🌎 It's great to hear from you. What would you like to talk about today? 😊"}
```

### JSON mode

JSON mode is a beta feature that guarantees all chat completions are valid JSON.

To use JSON mode:

1. Pass `json: true` to the `chat()` call
2. Provide a system message that contains `JSON` in the content, e.g. `S("Reply with JSON")`

A good idea is to provide an example JSON schema in the system message that you'd prefer to receive.

Other suggestions at the [JSON mode (beta)](https://console.groq.com/docs/text-chat#json-mode-object-object) Groq docs page.

```ruby
response = @client.chat([
  S("Reply with JSON. Use {\n\"number\": 7\n} for the answer."),
  U("What's 3+4?")
], json: true)
# => {"role"=>"assistant", "content"=>"{\n\"number\": 7\n}"}

JSON.parse(response["content"])
# => {"number"=>7}
```

### Using dry-schema with JSON mode

As a bonus, the `S` or `System` helper can take a `json_schema:` argument and the system message will include the `JSON` keyword and the formatted schema in its content.

For example, if you're using [dry-schema](https://dry-rb.org/gems/dry-schema/1.13/extensions/json_schema/) with its `:json_schema` extension you can use Ruby to describe the JSON schema.

```ruby
require "dry-schema"
Dry::Schema.load_extensions(:json_schema)

person_schema_defn = Dry::Schema.JSON do
  required(:name).filled(:string)
  optional(:age).filled(:integer)
  optional(:email).filled(:string)
end
person_schema = person_schema_defn.json_schema

response = @client.chat([
  S("You're excellent at extracting personal information", json_schema: person_schema),
  U("I'm Dr Nic and I'm almost 50.")
], json: true)
JSON.parse(response["content"])
# => {"name"=>"Dr Nic", "age"=>49}
```

NOTE: `bin/console` already loads the `dry-schema` library and the `json_schema` extension because it's handy.

### Tools/Functions

LLMs increasingly support deferring to tools or functions to fetch data, perform calculations, or store structured data. Groq Cloud, in turn, supports tool use through its API.

See the [Using Tools](https://console.groq.com/docs/tool-use) documentation for the list of models that currently support tools. Others might support it sometimes and raise errors other times.

```ruby
@client = Groq::Client.new(model_id: "mixtral-8x7b-32768")
```

The Groq/OpenAI schema for defining a tool/function (which differs from the Anthropic/Claude3 schema) is:

```ruby
tools = [{
  type: "function",
  function: {
    name: "get_weather_report",
    description: "Get the weather report for a city",
    parameters: {
      type: "object",
      properties: {
        city: {
          type: "string",
          description: "The city or region to get the weather report for"
        }
      },
      required: ["city"]
    }
  }
}]
```

Pass the `tools` array into the `chat()` call:

```ruby
@client = Groq::Client.new(model_id: "mixtral-8x7b-32768")

include Groq::Helpers
messages = [U("What's the weather in Paris?")]
response = @client.chat(messages, tools: tools)
# => {"role"=>"assistant", "tool_calls"=>[{"id"=>"call_b790", "type"=>"function", "function"=>{"name"=>"get_weather_report", "arguments"=>"{\"city\":\"Paris\"}"}}]}
```

You'd then invoke the Ruby implementation of `get_weather_report` to return the weather report for Paris as the next message in the chat.

```ruby
messages << response

tool_call_id = response["tool_calls"].first["id"]
messages << T("25 degrees celsius", tool_call_id: tool_call_id, name: "get_weather_report")
@client.chat(messages)
# => {"role"=>"assistant", "content"=> "I'm glad you called the function!\n\nAs of your current location, the weather in Paris is indeed 25°C (77°F)..."}
```
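Parsing the arguments back out of a tool call is plain JSON work. A minimal dispatch sketch, where `get_weather_report` is a stand-in for your own implementation and `tools` and `messages` are defined as above:

```ruby
# Hypothetical implementation; swap in a real weather lookup
def get_weather_report(city:)
  "25 degrees celsius in #{city}"
end

response = @client.chat(messages, tools: tools)
if (tool_calls = response["tool_calls"])
  messages << response
  tool_calls.each do |tool_call|
    # arguments arrive as a JSON string, e.g. "{\"city\":\"Paris\"}"
    args = JSON.parse(tool_call.dig("function", "arguments"), symbolize_names: true)
    messages << T(get_weather_report(**args),
      tool_call_id: tool_call["id"],
      name: tool_call.dig("function", "name"))
  end
  response = @client.chat(messages)
end
puts response["content"]
```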
### Max Tokens & Temperature

Max tokens is the maximum number of tokens that the model can generate in a single response. This limit ensures computational efficiency and resource management.

The temperature setting for each API call controls the randomness of responses. A lower temperature leads to more predictable outputs, while a higher temperature results in more varied and sometimes more creative outputs. The range of values is 0 to 2.

Each API call accepts `max_tokens:` and `temperature:` values.

The defaults are:

```ruby
@client.max_tokens
=> 1024
@client.temperature
=> 1
```

You can override them in the `Groq.configure` block, or with each `chat()` call:

```ruby
Groq.configure do |config|
  config.max_tokens = 512
  config.temperature = 0.5
end
# or
@client.chat("Hello, world!", max_tokens: 512, temperature: 0.5)
```

### Debugging API calls

The underlying HTTP library is Faraday, and you can enable debugging, or configure other Faraday internals, by passing a block to the `Groq::Client.new` constructor.

```ruby
require 'logger'

# Create a logger instance
logger = Logger.new(STDOUT)
logger.level = Logger::DEBUG

@client = Groq::Client.new do |faraday|
  # Log request and response bodies
  faraday.response :logger, logger, bodies: true
end
```

If you pass `--debug` to `bin/console` you will have this logger set up for you.

```plain
bin/console --debug
```
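The same block can install other Faraday middleware. For example, assuming you have added the separate `faraday-retry` gem to your bundle, you could retry transient failures with backoff:

```ruby
# Gemfile: gem "faraday-retry"
require "faraday/retry"

@client = Groq::Client.new do |faraday|
  # Retry transient failures up to twice, with exponential backoff
  faraday.request :retry, max: 2, interval: 0.5, backoff_factor: 2
end
```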
### Streaming

If your AI assistant responses are being telecast live to a human, then that human might want some progressive responses. The Groq API supports streaming responses.

Pass a block to `chat()` with either one or two arguments.

1. The first argument is the string content chunk of the response.
2. The optional second argument is the full response object from the API containing extra metadata.

The final block call will be the last chunk of the response:

1. The first argument will be `nil`.
2. The optional second argument, the full response object, contains a summary of the Groq API usage, such as prompt tokens, prompt time, etc.

```ruby
puts "🍕 "
messages = [
  S("You are a pizza sales person."),
  U("What do you sell?")
]
@client.chat(messages) do |content|
  print content
end
puts
```

Each chunk of the response will be printed to the console as it is received. It will look pretty.

The default `llama-3.1-8b-instant` model is very fast and you might not see any streaming. Try a slower model like `llama-3.1-70b-versatile` or `mixtral-8x7b-32768`.

```ruby
@client = Groq::Client.new(model_id: "llama-3.1-70b-versatile")
@client.chat("Write a long poem about patience") do |content|
  print content
end
puts
```

You can pass in a second argument to get the full response JSON object:

```ruby
@client.chat("Write a long poem about patience") do |content, response|
  pp content
  pp response
end
```

Alternatively, you can pass a `Proc` or any object that responds to `call` via a `stream:` keyword argument:

```ruby
@client.chat("Write a long poem about patience", stream: ->(content) { print content })
```

You can also use a class with a `call` method that takes either one or two arguments, as in the `Proc` discussion above.

```ruby
class MessageBits
  def initialize(emoji)
    print "#{emoji} "
    @bits = []
  end

  def call(content)
    if content.nil?
      puts
    else
      print(content)
      @bits << content
    end
  end

  def to_s
    @bits.join("")
  end

  def to_assistant_message
    Assistant(to_s)
  end
end

bits = MessageBits.new("🍕")
@client.chat("Write a long poem about pizza", stream: bits)
```
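Because `MessageBits` accumulates the chunks, the streamed reply can go straight back into the conversation. A usage sketch:

```ruby
messages = [U("Write a long poem about pizza")]
bits = MessageBits.new("🍕")
@client.chat(messages, stream: bits)

# Keep the streamed reply so follow-up questions have context
messages << bits.to_assistant_message << U("Now summarize it in one line.")
@client.chat(messages)
```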
## Examples

Here are some example uses of Groq and the `groq` gem's syntax.

Also, see the [`examples/`](examples/) folder for more example apps.

### Pizzeria agent

Talking with a pizzeria.

Our pizzeria agent can be as simple as a function that combines a system message and the current messages array:

```ruby
@agent_message = <<~EOS
  You are an employee at a pizza store.

  You sell hawaiian, and pepperoni pizzas; in small and large sizes for $10, and $20 respectively.

  Pick up only. Ready in 10 mins. Cash on pickup.
EOS

def chat_pizza_agent(messages)
  @client.chat([
    System(@agent_message),
    *messages
  ])
end
```

Now for our first interaction:

```ruby
messages = [U("Is this the pizza shop? Do you sell hawaiian?")]

response = chat_pizza_agent(messages)
puts response["content"]
```

The output might be:

> Yeah! This is the place! Yes, we sell Hawaiian pizzas here! We've got both small and large sizes available for you. The small Hawaiian pizza is $10, and the large one is $20. Plus, because we're all about getting you your pizza fast, our pick-up time is only 10 minutes! So, what can I get for you today? Would you like to order a small or large Hawaiian pizza?

Continue with the user's reply. Note that we build the `messages` array with the previous user and assistant messages and the new user message:

```ruby
messages << response << U("Yep, give me a large.")
response = chat_pizza_agent(messages)
puts response["content"]
```

Response:

> I'll get that ready for you. So, to confirm, you'd like to order a large Hawaiian pizza for $20, and I'll have it ready for you in 10 minutes. When you come to pick it up, please have the cash ready as we're a cash-only transaction. See you in 10!

Making a change:

```ruby
messages << response << U("Actually, make it two smalls.")
response = chat_pizza_agent(messages)
puts response["content"]
```

Response:

> I've got it! Two small Hawaiian pizzas on the way! That'll be $20 for two small pizzas. Same deal, come back in 10 minutes to pick them up, and bring cash for the payment. See you soon!

### Pizza customer agent

Oh my. Let's also have an agent that represents the customer.

```ruby
@customer_message = <<~EOS
  You are a customer at a pizza store.

  You want to order a pizza. You can ask about the menu, prices, sizes, and pickup times.

  You'll agree with the price and terms of the pizza order.

  You'll make a choice of the available options.

  If you're first in the conversation, you'll say hello and ask about the menu.
EOS

def chat_pizza_customer(messages)
  @client.chat([
    System(@customer_message),
    *messages
  ])
end
```

First interaction starts with no user or assistant messages. We're generating the customer's first message:

```ruby
customer_messages = []
response = chat_pizza_customer(customer_messages)
puts response["content"]
```

Customer's first message:

> Hello! I'd like to order a pizza. Could you tell me more about the menu and prices? What kind of pizzas do you have available?

Now we need to pass this to the pizzeria agent:

```ruby
customer_message = response["content"]
pizzeria_messages = [U(customer_message)]
response = chat_pizza_agent(pizzeria_messages)
puts response["content"]
```

Pizzeria agent response:

> Hi there! Yeah, sure thing! We've got two delicious options to choose from: Hawaiian and Pepperoni. Both come in small and large sizes. The small pizzas are $10 and the large pizzas are $20.
>
> Our Hawaiian pizza features fresh ham and pineapple on a bed of melted mozzarella. And if you're in the mood for something classic, our Pepperoni pizza is loaded with plenty of sliced pepperoni and melted mozzarella cheese.

Now let's add this response to the customer agent's message array, and generate the customer's next response to the pizzeria:

```ruby
customer_messages << U(response["content"])
response = chat_pizza_customer(customer_messages)
puts response["content"]
```

Customer agent response:

> Wow, those both sound delicious! I'm intrigued by the Hawaiian combo, I never thought of putting ham and pineapple on a pizza before. How would you recommend I customize it? Can I add any extra toppings or keep it as is? And do you have any recommendations for the size? Small or large?

Add this to the pizzeria agent's message array, and generate the pizzeria's response:

```ruby
pizzeria_messages << U(response["content"])
response = chat_pizza_agent(pizzeria_messages)
puts response["content"]
```

Pizzeria agent response:

> The Hawaiian pizza is definitely a unique twist on traditional toppings! You can definitely customize it to your liking. We allow two extra toppings of your choice for an additional $1 each. If you want to add any other toppings beyond that, it's $2 per topping.
>
> As for recommends, I'd say the small size is a great starting point, especially if you're trying something new like the Hawaiian pizza. The small size is $10 and it's a great bite-sized portion. But if you're looking for a bigger pie, the large size is $20 and would be a great option if you're feeding a crowd or want leftovers.
>
> Keep in mind that our pizzas are cooked fresh in 10 minutes, so it's ready when it's ready! Would you like to place an order now?

Will the customer actually buy anything now?

> I think I'd like to go with the Hawaiian pizza in the small size, so the total would be $10. And I'll take advantage of the extra topping option. I think I'll add some mushrooms to it. So, that's an extra $1 for the mushroom topping. Would that be $11 total? And do you have a pickup time available soon?

OMG, the customer bought something.

Pizzeria agent response:

> That sounds like a great choice! Yeah, the total would be $11, the small Hawaiian pizza with mushrooms. And yes, we do have pickup available shortly. It'll be ready in about 10 minutes. Cash on pickup, okay? Would you like to pay when you pick up your pizza?

Maybe these two do not know how to stop talking. The Halting Problem exists in pizza shops too.
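The back-and-forth above is mechanical enough to automate. A sketch of the alternating loop, using the two agent functions defined earlier, with a fixed turn limit to keep the Halting Problem at bay:

```ruby
customer_messages = []
pizzeria_messages = []

3.times do
  # Each agent sees its own lines as assistant messages
  # and the other agent's lines as user messages
  customer_says = chat_pizza_customer(customer_messages)["content"]
  customer_messages << A(customer_says)
  pizzeria_messages << U(customer_says)

  pizzeria_says = chat_pizza_agent(pizzeria_messages)["content"]
  pizzeria_messages << A(pizzeria_says)
  customer_messages << U(pizzeria_says)

  puts "Customer: #{customer_says}\n", "Pizzeria: #{pizzeria_says}\n"
end
```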
## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake test` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/drnic/groq-ruby. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [code of conduct](https://github.com/drnic/groq-ruby/blob/develop/CODE_OF_CONDUCT.md).

## License

The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).

## Code of Conduct

Everyone interacting in the Groq project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/drnic/groq-ruby/blob/develop/CODE_OF_CONDUCT.md).