{"id":19212342,"url":"https://github.com/leafo/lua-openai","last_synced_at":"2026-01-18T11:17:22.018Z","repository":{"id":160135425,"uuid":"635075559","full_name":"leafo/lua-openai","owner":"leafo","description":"OpenAI API bindings for Lua","archived":false,"fork":false,"pushed_at":"2026-01-17T04:09:36.000Z","size":154,"stargazers_count":78,"open_issues_count":8,"forks_count":8,"subscribers_count":3,"default_branch":"main","last_synced_at":"2026-01-17T17:12:42.602Z","etag":null,"topics":["chatgpt","gpt","lapis","lua","moonscript","openai","openresty"],"latest_commit_sha":null,"homepage":"","language":"MoonScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/leafo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-05-01T23:05:57.000Z","updated_at":"2026-01-03T23:32:31.000Z","dependencies_parsed_at":null,"dependency_job_id":"f53d8a38-d6b8-4936-8514-e5403b198745","html_url":"https://github.com/leafo/lua-openai","commit_stats":null,"previous_names":[],"tags_count":9,"template":false,"template_full_name":null,"purl":"pkg:github/leafo/lua-openai","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Flua-openai","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Flua-openai/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Flua-openai/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Flua-openai/ma
nifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/leafo","download_url":"https://codeload.github.com/leafo/lua-openai/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Flua-openai/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28535161,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-18T10:13:46.436Z","status":"ssl_error","status_checked_at":"2026-01-18T10:13:11.045Z","response_time":98,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["chatgpt","gpt","lapis","lua","moonscript","openai","openresty"],"created_at":"2024-11-09T13:46:35.928Z","updated_at":"2026-01-18T11:17:22.008Z","avatar_url":"https://github.com/leafo.png","language":"MoonScript","readme":"# lua-openai\n\nBindings to the [OpenAI HTTP\nAPI](https://platform.openai.com/docs/api-reference) for Lua. Compatible with\nany HTTP library that supports LuaSocket's http request interface. 
Compatible\nwith OpenResty using\n[`lapis.nginx.http`](https://leafo.net/lapis/reference/utilities.html#making-http-requests).\nThis project implements both the classic Chat Completions API and the\nmodern Responses API.\n\n\u003cdetails\u003e\n\u003csummary\u003eAI Generated Disclaimer\u003c/summary\u003e\n\nThe large majority of this library was written using Generative AI models like\nChatGPT and Claude Sonnet. Human review and guidance are provided where needed.\n\n\u003c/details\u003e\n\n## Install\n\nInstall using LuaRocks:\n\n```bash\nluarocks install lua-openai\n```\n\n## Quick Usage\n\nUsing the Responses API:\n\n```lua\nlocal openai = require(\"openai\")\nlocal client = openai.new(os.getenv(\"OPENAI_API_KEY\"))\n\nlocal status, response = client:create_response({\n  {role = \"system\", content = \"You are a Lua programmer\"},\n  {role = \"user\", content = \"Write a 'Hello world' program in Lua\"}\n}, {\n  model = \"gpt-4.1\",\n  temperature = 0.5\n})\n\nif status == 200 then\n  -- the JSON response is automatically parsed into a Lua object\n  print(response.output[1].content[1].text)\nend\n```\n\nUsing the Chat Completions API:\n\n```lua\nlocal openai = require(\"openai\")\nlocal client = openai.new(os.getenv(\"OPENAI_API_KEY\"))\n\nlocal status, response = client:create_chat_completion({\n  {role = \"system\", content = \"You are a Lua programmer\"},\n  {role = \"user\", content = \"Write a 'Hello world' program in Lua\"}\n}, {\n  model = \"gpt-3.5-turbo\",\n  temperature = 0.5\n})\n\nif status == 200 then\n  -- the JSON response is automatically parsed into a Lua object\n  print(response.choices[1].message.content)\nend\n```\n\n## Chat Session Example\n\nA chat session instance can be created to simplify managing the state of a\nback-and-forth conversation with the ChatGPT Chat Completions API. 
Note that chat\nstate is stored locally in memory: each new message is appended to the list of\nmessages, and the output is automatically appended to the list for the next\nrequest.\n\n```lua\nlocal openai = require(\"openai\")\nlocal client = openai.new(os.getenv(\"OPENAI_API_KEY\"))\n\nlocal chat = client:new_chat_session({\n  -- provide an initial set of messages\n  messages = {\n    {role = \"system\", content = \"You are an artist who likes colors\"}\n  }\n})\n\n-- returns the string response\nprint(chat:send(\"List your top 5 favorite colors\"))\n\n-- the chat history is sent on subsequent requests to continue the conversation\nprint(chat:send(\"Excluding the colors you just listed, tell me your favorite color\"))\n\n-- the entire chat history is stored in the messages field\nfor idx, message in ipairs(chat.messages) do\n  print(message.role, message.content)\nend\n\n-- You can stream the output by providing a callback as the second argument\n-- the full response concatenated is also returned by the function\nlocal response = chat:send(\"What's the most boring color?\", function(chunk)\n  io.stdout:write(chunk.content)\n  io.stdout:flush()\nend)\n```\n\n\n## Streaming Response Example\n\nUnder normal circumstances the API will wait until the entire response is\navailable before returning it. Depending on the prompt, this may take\nsome time. 
The streaming API can be used to read the output one chunk at a\ntime, allowing you to display content in real time as it is generated.\n\nUsing the Responses API:\n\n```lua\nlocal openai = require(\"openai\")\nlocal client = openai.new(os.getenv(\"OPENAI_API_KEY\"))\n\nclient:create_response({\n  {role = \"system\", content = \"You work for Streak.Club, a website to track daily creative habits\"},\n  {role = \"user\", content = \"Who do you work for?\"}\n}, {\n  stream = true\n}, function(chunk)\n  -- Raw event object from API: check type and access delta directly\n  if chunk.type == \"response.output_text.delta\" then\n    io.stdout:write(chunk.delta)\n    io.stdout:flush()\n  end\nend)\n\nprint() -- print a newline\n```\n\nUsing the Chat Completions API:\n\n\n```lua\nlocal openai = require(\"openai\")\nlocal client = openai.new(os.getenv(\"OPENAI_API_KEY\"))\n\nclient:create_chat_completion({\n  {role = \"system\", content = \"You work for Streak.Club, a website to track daily creative habits\"},\n  {role = \"user\", content = \"Who do you work for?\"}\n}, {\n  stream = true\n}, function(chunk)\n  -- Raw event object from API: access content via choices[1].delta.content\n  local delta = chunk.choices and chunk.choices[1] and chunk.choices[1].delta\n  if delta and delta.content then\n    io.stdout:write(delta.content)\n    io.stdout:flush()\n  end\nend)\n\nprint() -- print a newline\n```\n\n## Documentation\n\nThe `openai` module returns a table with the following fields:\n\n- `OpenAI`: A client for sending requests to the OpenAI API.\n- `new`: An alias to `OpenAI` to create a new instance of the OpenAI client\n- `ChatSession`: A class for managing chat sessions and history with the OpenAI API.\n- `VERSION = \"1.5.0\"`: The current version of the library\n\n### Classes\n\n#### OpenAI\n\nThis class initializes a new OpenAI API client.\n\n##### `new(api_key, config)`\n\nConstructor for the OpenAI client.\n\n- `api_key`: Your OpenAI API key.\n- `config`: An optional 
table of configuration options, with the following shape:\n  - `http_provider`: A string specifying the HTTP module name used for requests, or `nil`. If not provided, the library will automatically use \"lapis.nginx.http\" in an ngx environment, or \"socket.http\" otherwise.\n\n```lua\nlocal openai = require(\"openai\")\nlocal api_key = \"your-api-key\"\nlocal client = openai.new(api_key)\n```\n\n##### `client:new_chat_session(...)`\n\nCreates a new [ChatSession](#chatsession) instance. A chat session is an\nabstraction over the chat completions API that stores the chat history. You can\nappend new messages to the history and request completions to be generated from\nit. By default, the completion is appended to the history.\n\n##### `client:new_responses_chat_session(...)`\n\nCreates a new ResponsesChatSession instance for the Responses API. Similar to\nChatSession but uses OpenAI's Responses API which handles conversation state\nserver-side via `previous_response_id`.\n\n- `opts`: Optional configuration table\n  - `model`: Model to use (defaults to client's default_model)\n  - `instructions`: System instructions for the conversation\n  - `tools`: Array of tool definitions\n  - `previous_response_id`: Resume from a previous response\n\n##### `client:create_chat_completion(messages, opts, chunk_callback)`\n\nSends a request to the `/chat/completions` endpoint.\n\n- `messages`: An array of message objects.\n- `opts`: Additional options for the chat, passed directly to the API (eg. model, temperature, etc.) https://platform.openai.com/docs/api-reference/chat\n- `chunk_callback`: A function to be called for each raw event object when `stream = true` is passed to `opts`. Each chunk is the parsed API response (eg. `{object = \"chat.completion.chunk\", choices = {{delta = {content = \"...\"}, index = 0}}}`).\n\nReturns HTTP status, response object, and output headers. 
The response object\nwill be decoded from JSON if possible, otherwise the raw string is returned.\n\n##### `client:chat(messages, opts, chunk_callback)`\n\nLegacy alias for `create_chat_completion` with filtered streaming chunks. When streaming, the callback receives parsed chunks in the format `{content = \"...\", index = ...}` instead of raw event objects.\n\n##### `client:completion(prompt, opts)`\n\nSends a request to the `/completions` endpoint.\n\n- `prompt`: The prompt for the completion.\n- `opts`: Additional options for the completion, passed directly to the API (eg. model, temperature, etc.) https://platform.openai.com/docs/api-reference/completions\n\nReturns HTTP status, response object, and output headers. The response object\nwill be decoded from JSON if possible, otherwise the raw string is returned.\n\n##### `client:embedding(input, opts)`\n\nSends a request to the `/embeddings` endpoint.\n\n- `input`: A single string or an array of strings\n- `opts`: Additional options for the embedding, passed directly to the API (eg. model) https://platform.openai.com/docs/api-reference/embeddings\n\nReturns HTTP status, response object, and output headers. The response object\nwill be decoded from JSON if possible, otherwise the raw string is returned.\n\n##### `client:create_response(input, opts, stream_callback)`\n\nSends a request to the `/responses` endpoint (Responses API).\n\n- `input`: A string or array of message objects (with `role` and `content` fields)\n- `opts`: Additional options passed directly to the API (eg. model, temperature, instructions, tools, previous_response_id, etc.) https://platform.openai.com/docs/api-reference/responses\n- `stream_callback`: Optional function called for each raw event object when `stream = true` is passed in opts (eg. `{type = \"response.output_text.delta\", delta = \"Hello\"}`)\n\nReturns HTTP status, response object, and output headers. 
The response object\nwill be decoded from JSON if possible, otherwise the raw string is returned.\n\n##### `client:response(response_id)`\n\nRetrieves a stored response by ID from the `/responses/{id}` endpoint.\n\n- `response_id`: The ID of the response to retrieve\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:delete_response(response_id)`\n\nDeletes a stored response.\n\n- `response_id`: The ID of the response to delete\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:cancel_response(response_id)`\n\nCancels an in-progress streaming response.\n\n- `response_id`: The ID of the response to cancel\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:moderation(input, opts)`\n\nSends a request to the `/moderations` endpoint to check content against OpenAI's content policy.\n\n- `input`: A string or array of strings to classify\n- `opts`: Additional options passed directly to the API\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:models()`\n\nLists available models from the `/models` endpoint.\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:files()`\n\nLists uploaded files from the `/files` endpoint.\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:file(file_id)`\n\nRetrieves information about a specific file.\n\n- `file_id`: The ID of the file to retrieve\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:delete_file(file_id)`\n\nDeletes a file.\n\n- `file_id`: The ID of the file to delete\n\nReturns HTTP status, response object, and output headers.\n\n##### `client:image_generation(params)`\n\nSends a request to the `/images/generations` endpoint to generate images.\n\n- `params`: Parameters for image generation (prompt, n, size, etc.) 
https://platform.openai.com/docs/api-reference/images/create\n\nReturns HTTP status, response object, and output headers.\n\n#### ResponsesChatSession\n\nThis class manages chat sessions using OpenAI's Responses API. Unlike\nChatSession, conversation state is maintained server-side via\n`previous_response_id`. Typically created with `new_responses_chat_session`.\n\nThe field `response_history` stores an array of response objects from past\ninteractions. The field `current_response_id` holds the ID of the most recent\nresponse, used to maintain conversation continuity.\n\n##### `new(client, opts)`\n\nConstructor for the ResponsesChatSession.\n\n- `client`: An instance of the OpenAI client.\n- `opts`: An optional table of options.\n  - `model`: Model to use (defaults to client's default_model)\n  - `instructions`: System instructions for the conversation\n  - `tools`: Array of tool definitions\n  - `previous_response_id`: Resume from a previous response\n\n##### `session:send(input, stream_callback)`\n\nSends input and returns the response, maintaining conversation state\nautomatically.\n\n- `input`: A string or array of message objects.\n- `stream_callback`: Optional function for streaming responses.\n\nReturns a response object on success (or accumulated text string when\nstreaming). On failure, returns `nil`, an error message, and the raw response.\n\nResponse objects have helper methods:\n- `response:get_output_text()`: Extract all text content as a string\n- `response:get_images()`: Extract generated images (when using image_generation tool)\n- `tostring(response)`: Converts to text string\n\nThe `stream_callback` receives two arguments: a parsed chunk object and the raw\nevent object. 
Each call provides an incremental piece of the response text.\n\nThe parsed chunk has a `content` field and supports `tostring()`:\n\n```lua\nsession:send(\"Hello\", function(chunk, raw_event)\n  io.write(tostring(chunk)) -- or chunk.content\n  io.flush()\nend)\n```\n\n##### `session:create_response(input, opts, stream_callback)`\n\nLower-level method to create a response with additional options.\n\n- `input`: A string or array of message objects.\n- `opts`: Additional options (model, temperature, tools, previous_response_id, etc.)\n- `stream_callback`: Optional function for streaming responses.\n\nReturns a response object on success. On failure, returns `nil`, an error\nmessage, and the raw response.\n\n#### ChatSession\n\nThis class manages chat sessions and history with the OpenAI API. Typically\ncreated with `new_chat_session`\n\nThe field `messages` stores an array of chat messages representing the chat\nhistory. Each message object must conform to the following structure:\n\n- `role`: A string representing the role of the message sender. It must be one of the following values: \"system\", \"user\", or \"assistant\".\n- `content`: A string containing the content of the message.\n- `name`: An optional string representing the name of the message sender. If not provided, it should be `nil`.\n\nFor example, a valid message object might look like this:\n\n```lua\n{\n  role = \"user\",\n  content = \"Tell me a joke\",\n  name = \"John Doe\"\n}\n```\n\n##### `new(client, opts)`\n\nConstructor for the ChatSession.\n\n- `client`: An instance of the OpenAI client.\n- `opts`: An optional table of options.\n  - `messages`: An initial array of chat messages\n  - `functions`: A list of function declarations\n  - `temperature`: temperature setting\n  - `model`: Which chat completion model to use, eg. 
`gpt-4`, `gpt-3.5-turbo`\n\n##### `chat:append_message(m, ...)`\n\nAppends a message to the chat history.\n\n- `m`: A message object.\n\n##### `chat:last_message()`\n\nReturns the last message in the chat history.\n\n##### `chat:send(message, stream_callback=nil)`\n\nAppends a message to the chat history, triggers a completion with\n`generate_response`, and returns the response as a string. On failure, returns\n`nil`, an error message, and the raw request response.\n\nIf the response includes a `function_call`, then the entire message object is\nreturned instead of a string of the content. You can return the result of the\nfunction by passing a `role = \"function\"` object to the `send` method.\n\n- `message`: A message object or a string.\n- `stream_callback`: (optional) A function to enable streaming output.\n\nBy providing a `stream_callback`, the request will run in streaming mode. The\ncallback receives two arguments: a parsed chunk object and the raw event object.\n\nThe parsed chunk has the following fields:\n\n- `content`: A string containing the text of the assistant's generated response.\n- `index`: The index of the choice (usually 0).\n\nThe chunk supports `tostring()` to easily print the content:\n\n```lua\nchat:send(\"Hello\", function(chunk, raw_event)\n  io.write(tostring(chunk)) -- or chunk.content\n  io.flush()\nend)\n```\n\n##### `chat:generate_response(append_response, stream_callback=nil)`\n\nCalls the OpenAI API to generate the next response for the stored chat history.\nReturns the response as a string. 
On failure, returns `nil`, an error message,\nand the raw request response.\n\n- `append_response`: Whether the response should be appended to the chat history (default: true).\n- `stream_callback`: (optional) A function to enable streaming output.\n\nSee `chat:send` for details on the `stream_callback`\n\n\n## Using with Google Gemini\n\nThis library includes a compatibility layer for Google's Gemini API through\ntheir [OpenAI-compatible\nendpoint](https://ai.google.dev/gemini-api/docs/openai). The `Gemini` client\nextends the `OpenAI` class and supports chat completions, chat sessions,\nembeddings, and structured output.\n\n```lua\nlocal Gemini = require(\"openai.compat.gemini\")\nlocal client = Gemini.new(os.getenv(\"GEMINI_API_KEY\"))\n\n-- Use chat completions\nlocal status, response = client:create_chat_completion({\n  {role = \"user\", content = \"Hello, how are you?\"}\n}, {\n  model = \"gemini-2.5-flash\" -- this is the default model\n})\n\nif status == 200 then\n  print(response.choices[1].message.content)\nend\n```\n\n### Chat Sessions with Gemini\n\n```lua\nlocal Gemini = require(\"openai.compat.gemini\")\nlocal client = Gemini.new(os.getenv(\"GEMINI_API_KEY\"))\n\nlocal chat = client:new_chat_session({\n  messages = {\n    {role = \"system\", content = \"You are a helpful assistant.\"}\n  }\n})\n\nprint(chat:send(\"What is the capital of France?\"))\nprint(chat:send(\"What is its population?\")) -- follow-up with context\n```\n\n### Embeddings with Gemini\n\n```lua\nlocal Gemini = require(\"openai.compat.gemini\")\nlocal client = Gemini.new(os.getenv(\"GEMINI_API_KEY\"))\n\nlocal status, response = client:embedding(\"Hello world\", {\n  model = \"gemini-embedding-001\"\n})\n\nif status == 200 then\n  print(\"Dimensions:\", #response.data[1].embedding)\nend\n```\n\nSee the `examples/gemini/` directory for more examples including structured\noutput with JSON schemas.\n\n## Appendix\n\n### Chat Session With Functions\n\n\u003e Note: Functions are the 
legacy format for what is now known as tools; this\n\u003e example is left here just as a reference.\n\nOpenAI allows [sending a list of function\ndeclarations](https://openai.com/blog/function-calling-and-other-api-updates)\nthat the LLM can decide to call based on the prompt. The function calling\ninterface must be used with chat completions and the `gpt-4-0613` or\n`gpt-3.5-turbo-0613` models or later.\n\n\u003e See \u003chttps://github.com/leafo/lua-openai/blob/main/examples/example5.lua\u003e for\n\u003e a full example that implements basic math functions to compute the standard\n\u003e deviation of a list of numbers\n\nHere's a quick example of how to use functions in a chat exchange. First you\nwill need to create a chat session with the `functions` option containing an\narray of available functions.\n\n\u003e The functions are stored on the `functions` field on the chat object. If the\n\u003e functions need to be adjusted for future messages, the field can be modified.\n\n```lua\nlocal chat = openai:new_chat_session({\n  model = \"gpt-3.5-turbo-0613\",\n  functions = {\n    {\n      name = \"add\",\n      description = \"Add two numbers together\",\n      parameters = {\n        type = \"object\",\n        properties = {\n          a = { type = \"number\" },\n          b = { type = \"number\" }\n        }\n      }\n    }\n  }\n})\n```\n\nAny prompt you send will be aware of all available functions, and may request\nany of them to be called. 
If the response contains a function call request,\nthen an object will be returned instead of the standard string return value.\n\n```lua\nlocal res = chat:send(\"Using the provided function, calculate the sum of 2923 + 20839\")\n\nif type(res) == \"table\" and res.function_call then\n  -- The function_call object has the following fields:\n  --   function_call.name --\u003e name of function to be called\n  --   function_call.arguments --\u003e A string in JSON format that should match the parameter specification\n  -- Note that res may also include a content field if the LLM produced a textual output as well\n\n  local cjson = require \"cjson\"\n  local name = res.function_call.name\n  local arguments = cjson.decode(res.function_call.arguments)\n  -- ... compute the result and send it back ...\nend\n```\n\nYou can evaluate the requested function \u0026 arguments and send the result back to\nthe LLM so it can resume operation with a `role=function` message object:\n\n\u003e Since the LLM can hallucinate every part of the function call, you'll want to\n\u003e do robust type validation to ensure that function name and arguments match\n\u003e what you expect. Assume every stage can fail, including receiving malformed\n\u003e JSON for the arguments.\n\n```lua\nlocal name, arguments = ... -- the name and arguments extracted from above\n\nif name == \"add\" then\n  local value = arguments.a + arguments.b\n\n  -- send the response back to the chat bot using a `role = function` message\n\n  local cjson = require \"cjson\"\n\n  local res = chat:send({\n    role = \"function\",\n    name = name,\n    content = cjson.encode(value)\n  })\n\n  print(res) -- Print the final output\nelse\n  error(\"Unknown function: \" .. 
name)\nend\n```\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleafo%2Flua-openai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fleafo%2Flua-openai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleafo%2Flua-openai/lists"}