{"id":13636505,"url":"https://github.com/ledgetech/lua-resty-qless","last_synced_at":"2025-04-19T08:32:23.602Z","repository":{"id":13619532,"uuid":"16312736","full_name":"ledgetech/lua-resty-qless","owner":"ledgetech","description":"Lua binding to Qless (Queue / Pipeline management) for OpenResty / Redis","archived":false,"fork":false,"pushed_at":"2022-07-08T13:32:48.000Z","size":181,"stargazers_count":95,"open_issues_count":3,"forks_count":12,"subscribers_count":16,"default_branch":"master","last_synced_at":"2024-08-17T14:02:36.553Z","etag":null,"topics":["job-queue","lua","luajit","nginx","openresty","queue","redis","redis-queue"],"latest_commit_sha":null,"homepage":"","language":"Lua","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ledgetech.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null},"funding":{"github":"pintsized"}},"created_at":"2014-01-28T13:37:04.000Z","updated_at":"2024-06-05T10:47:44.000Z","dependencies_parsed_at":"2022-09-11T16:31:59.473Z","dependency_job_id":null,"html_url":"https://github.com/ledgetech/lua-resty-qless","commit_stats":null,"previous_names":["pintsized/lua-resty-qless"],"tags_count":12,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Flua-resty-qless","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Flua-resty-qless/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Flua-resty-qless/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Flua-resty-qless/manifests","owner_url":"https://repos.ecosyste.m
s/api/v1/hosts/GitHub/owners/ledgetech","download_url":"https://codeload.github.com/ledgetech/lua-resty-qless/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":249650209,"owners_count":21305981,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["job-queue","lua","luajit","nginx","openresty","queue","redis","redis-queue"],"created_at":"2024-08-02T00:01:02.090Z","updated_at":"2025-04-19T08:32:23.355Z","avatar_url":"https://github.com/ledgetech.png","language":"Lua","readme":"lua-resty-qless\n===============\n\n**lua-resty-qless** is a binding to [qless-core](https://github.com/seomoz/qless-core) from [Moz](https://github.com/seomoz) - a powerful Redis based job queueing system inspired by\n[resque](https://github.com/defunkt/resque#readme), but instead implemented as a collection of Lua scripts for Redis.\n\nThis binding provides a full implementation of **Qless** via Lua script running in [OpenResty](http://openresty.org/) / [lua-nginx-module](https://github.com/openresty/lua-nginx-module), including workers which can be started during the `init_worker_by_lua` phase.\n\nEssentially, with this module and a modern Redis instance, you can turn your OpenResty server into a quite sophisticated yet lightweight job queuing system, which is also compatible with the reference Ruby implementation, [Qless](https://github.com/seomoz/qless).\n\n*Note: This module is not designed to work in a pure Lua environment.*\n\nStatus\n======\n\nThis module should be considered experimental.\n\n\nRequirements\n============\n\n* Redis 
\u003e= 2.8.x\n* OpenResty \u003e= 1.9.x\n* [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector) \u003e= 0.05\n\n\nPhilosophy and Nomenclature\n===========================\nA `job` is a unit of work identified by a job id or `jid`. A `queue` can contain\nseveral jobs that are scheduled to be run at a certain time, several jobs that are\nwaiting to run, and jobs that are currently running. A `worker` is a process on a\nhost, identified uniquely, that asks for jobs from the queue, performs some process\nassociated with that job, and then marks it as complete. When it's completed, it\ncan be put into another queue.\n\nJobs can only be in one queue at a time. That queue is whatever queue they were last\nput in. So if a worker is working on a job, and you move it, the worker's request to\ncomplete the job will be ignored.\n\nA job can be `canceled`, which means it disappears into the ether, and we'll never\npay it any mind ever again. A job can be `dropped`, which is when a worker fails\nto heartbeat or complete the job in a timely fashion, or a job can be `failed`,\nwhich is when a host recognizes some systematically problematic state about the\njob. A worker should only fail a job if the error is likely not a transient one;\notherwise, that worker should just drop it and let the system reclaim it.\n\nFeatures\n========\n\n1. __Jobs don't get dropped on the floor__ Sometimes workers drop jobs. Qless\n  automatically picks them back up and gives them to another worker\n1. __Tagging / Tracking__ Some jobs are more interesting than others. Track those\n  jobs to get updates on their progress.\n1. __Job Dependencies__ One job might need to wait for another job to complete\n1. __Stats__ Qless automatically keeps statistics about how long jobs wait\n  to be processed and how long they take to be processed. Currently, we keep\n  track of the count, mean, standard deviation, and a histogram of these times.\n1. 
__Job data is stored temporarily__ Job info sticks around for a configurable\n  amount of time so you can still look back on a job's history, data, etc.\n1. __Priority__ Jobs with the same priority get popped in the order they were\n  inserted; a higher priority means that it gets popped faster\n1. __Retry logic__ Every job has a number of retries associated with it, which are\n  renewed when it is put into a new queue or completed. If a job is repeatedly\n  dropped, then it is presumed to be problematic, and is automatically failed.\n1. __Web App__ [lua-resty-qless-web](https://github.com/hamishforbes/lua-resty-qless-web) gives you visibility and control over certain operational issues\n1. __Scheduled Work__ A job waits for its specified delay (defaults to 0)\n  before it can be popped by workers\n1. __Recurring Jobs__ Scheduling's all well and good, but we also support\n  jobs that need to recur periodically.\n1. __Notifications__ Tracked jobs emit events on pubsub channels as they get\n  completed, failed, put, popped, etc. Use these events to get notified of\n  progress on jobs you're interested in.\n\nConnecting\n=============\nFirst things first, require `resty.qless` and create a client, specifying your Redis connection details.\n\n```lua\nlocal qless = require(\"resty.qless\").new({\n    host = \"127.0.0.1\",\n    port = 6379,\n})\n```\n\nParameters passed to `new` are forwarded to [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector). 
Please review the documentation there for connection options, including how to use Redis Sentinel etc.\n\nAdditionally, if your application has a Redis connection that you wish to reuse, there are two ways you can integrate this:\n\n1) Using an already established connection directly\n\n```lua\nlocal qless = require(\"resty.qless\").new({\n    redis_client = my_redis,\n})\n```\n\n2) Providing callbacks for connecting and closing the connection\n\n```lua\nlocal qless = require(\"resty.qless\").new({\n    get_redis_client = my_connection_callback,\n    close_redis_client = my_close_callback,\n})\n```\n\nWhen finished with Qless, you should call `qless:set_keepalive()` which will attempt to put Redis back on the keepalive pool, either using settings you provide directly, or via parameters sent to `lua-resty-redis-connector`, or by calling your `close_redis_client` callback.\n\n\nEnqueuing Jobs\n=============\n\nJobs themselves are modules, which must be loadable via `require` and provide a `perform` function, which accepts a single `job` argument.\n\n\n```lua\n-- my/test/job.lua (the job's \"klass\" becomes \"my.test.job\")\n\nlocal _M = {}\n\nfunction _M.perform(job)\n    -- job is an instance of Qless_Job and provides access to\n    -- job.data (which is a Lua table), a means to cancel the\n    -- job (job:cancel()), and more.\n\n    -- return \"nil, err_type, err_msg\" to indicate an unexpected failure\n\n    if not job.data then\n        return nil, \"job-error\", \"data missing\"\n    end\n\n    -- Do work\nend\n\nreturn _M\n```\n\nNow you can access a queue, and add a job to that queue.\n\n```lua\n-- This references a new or existing queue 'testing'\nlocal queue = qless.queues['testing']\n\n-- Let's add a job, with some data. 
Returns Job ID\nlocal jid = queue:put(\"my.test.job\", { hello = \"howdy\" })\n-- = \"0c53b0404c56012f69fa482a1427ab7d\"\n\n-- Now we can ask for a job\nlocal job = queue:pop()\n\n-- And we can do the work associated with it!\njob:perform()\n```\n\nThe job data must be a table (which is serialised to JSON internally).\n\nThe value returned by `queue:put()` is the job ID, or jid. Every Qless\njob has a unique jid, and it provides a means to interact with an\nexisting job:\n\n```lua\n-- find an existing job by its jid\nlocal job = qless.jobs:get(jid)\n\n-- Query it to find out details about it:\njob.klass -- the class of the job\njob.queue -- the queue the job is in\njob.data  -- the data for the job\njob.history -- the history of what has happened to the job so far\njob.dependencies -- the jids of other jobs that must complete before this one\njob.dependents -- the jids of other jobs that depend on this one\njob.priority -- the priority of this job\njob.tags -- table of tags for this job\njob.original_retries -- the number of times the job is allowed to be retried\njob.retries_left -- the number of retries left\n\n-- You can also change the job in various ways:\njob:requeue(\"some_other_queue\") -- move it to a new queue\njob:cancel() -- cancel the job\njob:tag(\"foo\") -- add a tag\njob:untag(\"foo\") -- remove a tag\n```\n\nRunning Workers\n================\n\nTraditionally, Qless offered a forking Ruby worker script inspired by Resque.\n\nIn lua-resty-qless, we take advantage of the `init_worker_by_lua` phase\nand `ngx.timer.at` API in order to run workers in independent \"light threads\",\nscalable across your worker processes.\n\nYou can run many light threads concurrently per worker process, which Nginx\nwill schedule for you.\n\n```lua\ninit_worker_by_lua '\n    local resty_qless_worker = require \"resty.qless.worker\"\n    \n    local worker = resty_qless_worker.new(redis_params)\n    \n    worker:start({\n        interval = 1,\n        concurrency = 4,\n      
  reserver = \"ordered\",\n        queues = { \"my_queue\", \"my_other_queue\" },\n    })\n';\n```\n\nWorkers support three strategies (reservers) for what order to pop jobs off the queues: **ordered**, **round-robin** and **shuffled round-robin**.\n\nThe ordered reserver will keep popping jobs off the first queue until\nit is empty, before trying to pop jobs off the second queue. The\nround-robin reserver will pop a job off the first queue, then the second\nqueue, and so on. Shuffled round-robin simply makes the round-robin selection order unpredictable.\n\nYou could also easily implement your own. Follow the other reservers as a guide, and ensure yours\nis \"requireable\" with `require \"resty.qless.reserver.myreserver\"`.\n\nMiddleware\n=========\n\nWorkers also support middleware, which can be used to inject\nlogic around the processing of a single job. This can be useful, for example, when you need to re-establish a database connection.\n\nTo do this you set the worker's `middleware` to a function, and call `coroutine.yield` where you want\nthe job to be performed.\n\n```lua\nlocal worker = resty_qless_worker.new(redis_params)\n\nworker.middleware = function(job)\n    -- Do pre job work\n    coroutine.yield()\n    -- Do post job work\nend\n\nworker:start({ queues = \"my_queue\" })\n```\n\n\nJob Dependencies\n================\nLet's say you have one job that depends on another, but the task definitions are\nfundamentally different. You need to cook a turkey, and you need to make stuffing,\nbut you can't make the turkey until the stuffing is made:\n\n```lua\nlocal queue = qless.queues['cook']\nlocal stuffing_jid = queue:put(\"jobs.make.stuffing\", \n  { lots = \"of butter\" }\n)\nlocal turkey_jid  = queue:put(\"jobs.make.turkey\", \n  { with = \"stuffing\" }, \n  { depends = stuffing_jid }\n)\n```\n\nWhen the stuffing job completes, the turkey job is unlocked and free to be processed.\n\nPriority\n========\nSome jobs need to get popped sooner than others. 
Whether it's a trouble ticket, or\ndebugging, you can do this pretty easily when you put a job in a queue:\n\n```lua\nqueue:put(\"jobs.test\", { foo = \"bar\" }, { priority = 10 })\n```\n\nWhat happens when you want to adjust a job's priority while it's still waiting in\na queue?\n\n```lua\nlocal job = qless.jobs:get(\"0c53b0404c56012f69fa482a1427ab7d\")\njob.priority = 10\n-- Now this will get popped before any job of lower priority\n```\n\n*Note: Setting the priority field above is all you need to do, thanks to Lua metamethods which are invoked to update\nRedis. This may look a little \"auto-magic\", but the intention is to retain API design compatibility with the Ruby\nclient as much as possible.*\n\nScheduled Jobs\n==============\nIf you don't want a job to be run right away but at some time in the future, you can\nspecify a delay:\n\n```lua\n-- Run at least 10 minutes from now\nqueue:put(\"jobs.test\", { foo = \"bar\" }, { delay = 600 })\n```\n\nThis doesn't guarantee that the job will run exactly 10 minutes from now. You can accomplish\nthis by raising the job's priority so that once 10 minutes have elapsed, it's put before\nlower-priority jobs:\n\n```lua\n-- Run in 10 minutes\nqueue:put(\"jobs.test\", \n  { foo = \"bar\" }, \n  { delay = 600, priority = 100 }\n)\n```\n\nRecurring Jobs\n==============\nSometimes it's not enough simply to schedule one job, but you want to run jobs regularly.\nIn particular, maybe you have some batch operation that needs to get run once an hour and\nyou don't care what worker runs it. 
Recurring jobs are specified much like other jobs:\n\n```lua\n-- Run every hour\nlocal recurring_jid = queue:recur(\"jobs.test\", { widget = \"warble\" }, 3600)\n-- = 22ac75008a8011e182b24cf9ab3a8f3b\n```\n\nYou can even access them in much the same way as you would normal jobs:\n\n```lua\nlocal job = qless.jobs:get(\"22ac75008a8011e182b24cf9ab3a8f3b\")\n```\n\nChanging the interval at which it runs after the fact is trivial:\n\n```lua\n-- I think I only need it to run once every two hours\njob.interval = 7200\n```\n\nIf you want it to run every hour on the hour, but it's 2:37 right now, you can specify\nan offset which is how long it should wait before popping the first job:\n\n```lua\n-- 23 minutes of waiting until it should go\nqueue:recur(\"jobs.test\", \n  { howdy = \"hello\" }, \n  3600,\n  { offset = (23 * 60) }\n)\n```\n\nRecurring jobs also have priority, a configurable number of retries, and tags. These\nsettings don't apply to the recurring jobs, but rather the jobs that they spawn. In the\ncase where more than one interval passes before a worker tries to pop the job, __more than\none job is created__. The thinking is that while it's completely client-managed, the state\nshould not be dependent on how often workers are trying to pop jobs.\n\n```lua\n-- Recur every minute\nqueue:recur(\"jobs.test\", { lots = \"of jobs\" }, 60)\n \n-- Wait 5 minutes\n\nlocal jobs = queue:pop(10)\nngx.say(#jobs, \" jobs got popped\")\n\n-- = 5 jobs got popped\n```\n\nConfiguration Options\n=====================\nYou can get and set global (in the context of the same Redis instance) configuration\nto change the behaviour for heartbeating, and so forth. There aren't a tremendous number\nof configuration options, but an important one is how long job data is kept around. Job\ndata is expired after it has been completed for `jobs-history` seconds, but is limited to\nthe last `jobs-history-count` completed jobs. 
These default to 30 days and 50k jobs respectively, but\ndepending on volume, your needs may change. To only keep the last 500 jobs for up to 7 days:\n\n```lua\nqless:config_set(\"jobs-history\", 7 * 86400)\nqless:config_set(\"jobs-history-count\", 500)\n```\n\nTagging / Tracking\n==================\nIn qless, 'tracking' means flagging a job as important. Tracked jobs emit subscribable events as they make progress\n(more on that below).\n\n```lua\nlocal job = qless.jobs:get(\"b1882e009a3d11e192d0b174d751779d\")\njob:track()\n```\n\nJobs can be tagged with strings which are indexed for quick searches. For example, jobs\nmight be associated with customer accounts, or some other key that makes sense for your\nproject.\n\n```lua\nqueue:put(\"jobs.test\", {}, \n  { tags = { \"12345\", \"foo\", \"bar\" } }\n)\n```\n\nThis makes them searchable in the Ruby / Sinatra web interface, or from code:\n\n```lua\nlocal jids = qless.jobs:tagged(\"foo\")\n```\n\nYou can add or remove tags at will, too:\n\n```lua\nlocal job = qless.jobs:get('b1882e009a3d11e192d0b174d751779d')\njob:tag(\"howdy\", \"hello\")\njob:untag(\"foo\", \"bar\")\n```\n\nNotifications\n=============\n**Tracked** jobs emit events on specific pubsub channels as things happen to them, whether\nit's getting popped off a queue, completed by a worker, etc.\n\nThose familiar with Redis pub/sub will note that a Redis connection can only be used\nfor pubsub commands once listening. 
For this reason, the events module is passed Redis connection\nparameters independently.\n\n```lua\nlocal events = qless.events(redis_params)\n\nevents:listen({ \"canceled\", \"failed\" }, function(channel, jid)\n    ngx.log(ngx.INFO, jid, \": \", channel)\n    -- logs \"b1882e009a3d11e192d0b174d751779d: canceled\" etc.\nend)\n```\n\nYou can also listen to the \"log\" channel, which gives a JSON structure of all logged events.\n\n```lua\nlocal cjson = require \"cjson\"\n\nlocal events = qless.events(redis_params)\n\nevents:listen({ \"log\" }, function(channel, message)\n    local message = cjson.decode(message)\n    ngx.log(ngx.INFO, message.event, \" \", message.jid)\nend)\n```\n\nHeartbeating\n============\nWhen a worker is given a job, it is given an exclusive lock to that job. That means\nthat job won't be given to any other worker, so long as the worker checks in with\nprogress on the job. By default, a job has to either report back progress every 60\nseconds or be completed, but that's a configurable option. For longer jobs, this\nmay not make sense.\n\n```lua\n-- Hooray! We've got a piece of work!\nlocal job = queue:pop()\n\n-- How long until I have to check in?\njob:ttl()\n-- = 59\n\n-- Hey! I'm still working on it!\njob:heartbeat()\n-- = 1331326141.0\n\n-- Ok, I've got some more time. Oh! Now I'm done!\njob:complete()\n```\n\nIf you want to increase the heartbeat in all queues:\n\n```lua\n-- Now jobs get 10 minutes to check in\nqless:config_set(\"heartbeat\", 600)\n\n-- But the testing queue doesn't get as long.\nqless.queues[\"testing\"].heartbeat = 300\n```\n\nWhen choosing a heartbeat interval, note that this is the amount of time that\ncan pass before qless realizes that a job has been dropped. 
At the same time, you don't\nwant to burden qless with heartbeating every 10 seconds if your job is expected to\ntake several hours.\n\nHere's an idiom you're encouraged to use for long-running jobs that want to check in their\nprogress periodically:\n\n```lua\n-- Wait until we have 5 minutes left on the heartbeat, and if we find that\n-- we've lost our lock on a job, then honorably fall on our sword\nif job:ttl() \u003c 300 and not job:heartbeat() then\n  -- exit\nend\n```\n\nStats\n=====\nOne nice feature of Qless is that you can get statistics about usage. Stats are\naggregated by day, so when you want stats about a queue, you need to say what queue\nand what day you're talking about. By default, you just get the stats for today.\nThese stats include information about the mean job wait time, standard deviation,\nand histogram. This same data is also provided for job completion:\n\n```lua\n-- So, how're we doing today?\nlocal stats = queue:stats()\n-- = { 'run' = { 'mean' = ..., }, 'wait' = {'mean' = ..., } }\n```\n\nTime\n====\nIt's important to note that Redis doesn't allow access to the system time if you're\ngoing to be making any manipulations to data (which our scripts do). And yet, we\nhave heartbeating. This means that the clients actually send the current time when\nmaking most requests, which, for consistency's sake, means that your workers must be\nrelatively synchronized. 
This doesn't mean down to the tens of milliseconds, but if\nyou're experiencing appreciable clock drift, you should investigate NTP.\n\nEnsuring Job Uniqueness\n=======================\n\nAs mentioned above, jobs are uniquely identified by an id, their jid.\nQless will generate a UUID for each enqueued job, or you can specify\none manually:\n\n```lua\nqueue:put(\"jobs.test\", { hello = 'howdy' }, { jid = 'my-job-jid' })\n```\n\nThis can be useful when you want to ensure a job's uniqueness: simply\ncreate a jid that is a function of the job's class and data, and it's\nguaranteed that Qless won't have multiple jobs with the same class\nand data.\n\n\n\n## Author\n\nJames Hurst \u003cjames@pintsized.co.uk\u003e\n\nBased on the Ruby [Qless reference implementation](https://github.com/seomoz/qless). Documentation also adapted from the\noriginal project.\n\n## Licence\n\nThis module is licensed under the 2-clause BSD license.\n\nCopyright (c) James Hurst \u003cjames@pintsized.co.uk\u003e\n\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n","funding_links":["https://github.com/sponsors/pintsized"],"categories":["Libraries","Lua"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fledgetech%2Flua-resty-qless","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fledgetech%2Flua-resty-qless","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fledgetech%2Flua-resty-qless/lists"}