{"id":17132708,"url":"https://github.com/vduseev/raquel","last_synced_at":"2026-02-27T15:05:12.448Z","repository":{"id":257825112,"uuid":"867361703","full_name":"vduseev/raquel","owner":"vduseev","description":"Distributed task queue for Python with SQL","archived":false,"fork":false,"pushed_at":"2024-11-25T20:54:06.000Z","size":264,"stargazers_count":1,"open_issues_count":0,"forks_count":1,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-11-30T13:44:59.233Z","etag":null,"topics":["distributed","job-queue","job-scheduler","python","queue-workers","sql","task-queue","task-scheduler"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vduseev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-03T23:19:47.000Z","updated_at":"2024-11-25T20:54:09.000Z","dependencies_parsed_at":"2024-11-20T11:58:01.850Z","dependency_job_id":null,"html_url":"https://github.com/vduseev/raquel","commit_stats":null,"previous_names":["vduseev/raquel"],"tags_count":5,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vduseev%2Fraquel","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vduseev%2Fraquel/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vduseev%2Fraquel/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vduseev%2Fraquel/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vduseev","do
wnload_url":"https://codeload.github.com/vduseev/raquel/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":227350403,"owners_count":17768408,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["distributed","job-queue","job-scheduler","python","queue-workers","sql","task-queue","task-scheduler"],"created_at":"2024-10-14T19:27:55.559Z","updated_at":"2026-02-27T15:05:07.428Z","avatar_url":"https://github.com/vduseev.png","language":"Python","readme":"# raquel\n\n\u003cp\u003e\n  \u003ca href=\"https://pypi.org/pypi/raquel\"\u003e\u003cimg alt=\"Package version\" src=\"https://img.shields.io/pypi/v/raquel?logo=python\u0026logoColor=white\u0026color=blue\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://pypi.org/pypi/raquel\"\u003e\u003cimg alt=\"Supported python versions\" src=\"https://img.shields.io/pypi/pyversions/raquel?logo=python\u0026logoColor=white\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n*Simple and elegant Job Queues for Python using SQL.*\n\nTired of complex job queues for distributed computing or event-based systems?\nDo you want full visibility and complete reliability of your job queue?\nRaquel is a perfect solution for a distributed task queue and background workers.\n\n* **Simple**: Use **any** existing or standalone SQL database. Requires\n  a **single** table!\n* **Flexible**: Schedule whatever you want however you want. No frameworks,\n  no restrictions.\n* **Reliable**: Uses SQL transactions and handles exceptions, retries, and\n  \"at least once\" execution. 
SQL guarantees persistent jobs.\n* **Transparent**: Full visibility into which jobs are running, which failed\n  and why, which are pending, etc. Query anything using SQL.\n\nTable of contents\n\n* [Installation](#installation)\n* [Usage](#usage)\n  * [Schedule jobs](#schedule-jobs)\n    * [Using enqueue()](#using-enqueue)\n    * [Using SQL insert](#using-sql-insert)\n  * [Pick up jobs](#pick-up-jobs)\n    * [Using dequeue()](#using-dequeue)\n  * [Failed jobs](#failed-jobs)\n  * [Reschedule jobs](#reschedule-jobs)\n  * [Reject jobs](#reject-jobs)\n  * [Async support](#async-support)\n  * [Stats](#stats)\n* [How it works](#how-it-works)\n  * [Jobs table](#jobs-table)\n  * [Job status](#job-status)\n  * [One job per worker](#one-job-per-worker)\n  * [Database transactions](#database-transactions)\n  * [Sudden shutdown](#sudden-shutdown)\n  * [Retry delay](#retry-delay)\n* [Create jobs table](#create-jobs-table)\n  * [Using create_all()](#using-create_all)\n  * [Using SQL create table](#using-sql-create-table)\n  * [Using Alembic migrations](#using-alembic-migrations)\n* [Production ready](#production-ready)\n* [Fun facts](#fun-facts)\n* [Contribute](#contribute)\n\n## Installation\n\n```bash\npip install raquel\n```\n\nTo install with async support, specify the `asyncio` extra. This simply\nadds the `greenlet` package as a dependency.\n\n```bash\npip install raquel[asyncio]\n```\n\n## Usage\n\n### Schedule jobs\n\nIn order for the job to be scheduled it needs to be added to the `jobs` table\nin the database. As long as it has the right status and timestamp, it will be\npicked up by the workers.\n\nJobs can be scheduled using the library or by inserting a row into the `jobs`\ntable directly.\n\n#### Using enqueue()\n\nThe easiest way to schedule a job is using the `enqueue()` method. By default,\nthe job is scheduled for immediate execution.\n\n```python\nfrom raquel import Raquel\n\n# Raquel uses SQLAlchemy to connect to most SQL databases. 
You can pass\n# a connection string or a SQLAlchemy engine.\nrq = Raquel(\"postgresql+psycopg2://postgres:postgres@localhost/postgres\")\n\n# Enqueuing a job is as simple as this\nrq.enqueue(queue=\"messages\", payload=\"Hello, World!\")\nrq.enqueue(queue=\"tasks\", payload={\"data\": [1, 2]})\n```\n\nPayload can be any JSON-serializable object or simply a string. It can even\nbe empty. In the database, the payload is stored as UTF-8 encoded text for\nmaximum compatibility with all SQL databases, so anything that can be\nserialized to text can be used as a payload.\n\nBy default, jobs end up in the `\"default\"` queue. Use the `queue` parameter\nto place jobs into different queues.\n\n#### Using SQL insert\n\nWe can also schedule jobs using plain SQL by simply inserting a row into the\n`jobs` table. For example, in PostgreSQL:\n\n```sql\n-- Schedule 3 jobs in the \"my-jobs\" queue for immediate processing\nINSERT INTO jobs \n    (id, queue, status, payload)\nVALUES\n    (uuid_generate_v4(), 'my-jobs', 'queued', '{\"my\": \"payload\"}'),\n    (uuid_generate_v4(), 'my-jobs', 'queued', '101'),\n    (uuid_generate_v4(), 'my-jobs', 'queued', 'Is this the real life?');\n```\n\n### Pick up jobs\n\nWhile you can manually claim, process, and update the job, you'd also need to\nhandle exceptions, retries and other edge cases. The library provides\nconvenient ways to do this.\n\n#### Using dequeue()\n\nThe `dequeue()` method is a context manager that yields a `Job` object for you\nto work with. If there is no job to process, it will yield `None` instead.\n\n```python\nimport time\n\nwhile True:\n    with rq.dequeue(\"tasks\") as job:\n        if job:\n            do_work(job.payload)\n        else:\n            time.sleep(1)\n```\n\nThe `dequeue()` method will find the next job and claim it. It will also handle\nthe job status, exceptions, retries and everything else automatically.\n\n### Failed jobs\n\nJobs are retried when they fail. 
When an exception is caught by the\n`dequeue()` context manager, the job is rescheduled with an exponential\nbackoff delay.\n\nBy default, the job will be retried indefinitely. You can set the\n`max_retry_count` or `max_age` fields to limit the number of retries or the\nmaximum age of the job.\n\n```python\nwith rq.dequeue(\"my-queue\") as job:\n    # Let the context manager handle the exception for you.\n    # The exception will be caught and the job will be retried.\n    # Under the hood, the context manager will call `job.fail()` for you.\n    raise Exception(\"Oh no\")\n    do_work(job.payload)\n```\n\nYou can always handle the exception manually:\n\n```python\nwith rq.dequeue(\"my-queue\") as job:\n    # Catch an exception manually\n    try:\n        do_work(job.payload)\n    except Exception as e:\n        # If you mark the job as failed, it will be retried.\n        job.fail(str(e))\n```\n\nWhenever a job fails, the error and the traceback are stored in the `error` and\n`error_trace` columns. The job status is set to `failed` and the job will\nbe retried. The attempt number is incremented.\n\n### Reschedule jobs\n\nThe `reschedule()` method is used to reprocess the job at a later time.\nThe job will remain in the queue with a new scheduled execution time, and the\ncurrent attempt won't count towards the maximum number of retries.\n\nThis method should only be called inside the `dequeue()` context manager.\n\n```python\nwith rq.dequeue(\"my-queue\") as job:\n    # Check if we have everything ready to process the job, and if not,\n    # reschedule the job to run 10 minutes from now\n    if not is_everything_ready_to_process(job.payload):\n        job.reschedule(delay=timedelta(minutes=10))\n    else:\n        # Otherwise, process the job\n        do_work(job.payload)\n```\n\nWhen you reschedule a job, its `scheduled_at` field is either updated with\nthe new `at` and `delay` values or left unchanged, and the `finished_at` field\nis cleared. 
If the `Job` object had any `error` or `error_trace` values, they\nare saved to the database. The `attempts` field is incremented.\n\nHere are some fancy ways to reschedule a job using `reschedule()`:\n\n```python\nfrom datetime import datetime, timedelta\n\n# Run when the next day starts\nwith rq.dequeue(\"my-queue\") as job:\n    job.reschedule(\n        at=datetime.now().replace(\n            hour=0, minute=0, second=0, microsecond=0,\n        ) + timedelta(days=1)\n    )\n\n# Same but using the `delay` parameter\nwith rq.dequeue(\"my-queue\") as job:\n    job.reschedule(\n        at=datetime.now().replace(hour=0, minute=0, second=0, microsecond=0),\n        delay=timedelta(days=1),\n    )\n\n# Run in 500 milliseconds\nwith rq.dequeue(\"my-queue\") as job:\n    job.reschedule(delay=500)\n\n# Run in `min_retry_delay` milliseconds, as configured for this job\n# (default is 1 second)\nwith rq.dequeue(\"my-queue\") as job:\n    job.reschedule()\n```\n\n### Reject jobs\n\nIn case your worker can't process the job for some reason, you can reject it,\nallowing it to be immediately claimed by another worker.\n\nThis method should only be called inside the `dequeue()` context manager.\n\nIt is very similar to rescheduling the job to run immediately. When you reject\nthe job, the `scheduled_at` field is left unchanged, but the `claimed_at` and\n`claimed_by` fields are cleared. The job status is set to `queued`. 
And the\n`attempts` field is incremented.\n\n```python\nwith rq.dequeue(\"my-queue\") as job:\n    if job.payload.get(\"requires_admin\"):\n        # Reject the job if the worker can't process it.\n        job.reject()\n    else:\n        # Otherwise, process the job\n        do_work(job.payload)\n```\n\n### Async support\n\nEverything in Raquel is designed to work with both sync and async code.\nYou can use the `AsyncRaquel` class to enqueue and dequeue jobs in an async\nmanner.\n\n*Just don't forget the `asyncio` extra when installing the package:*\n`raquel[asyncio]`.\n\n```python\nimport asyncio\nfrom raquel import AsyncRaquel\n\nrq = AsyncRaquel(\"postgresql+asyncpg://postgres:postgres@localhost/postgres\")\n\nasync def main():\n    await rq.enqueue(\"tasks\", {'my': {'name_is': 'Slim Shady'}})\n\nasyncio.run(main())\n```\n\nIn async mode, the `dequeue()` context manager works the same way:\n\n```python\nasync def main():\n    async with rq.dequeue(\"tasks\") as job:\n        if job:\n            await do_work(job.payload)\n        else:\n            await asyncio.sleep(1)\n\nasyncio.run(main())\n```\n\n### Stats\n\n* List of queues\n\n  ```python\n  \u003e\u003e\u003e rq.queues()\n  ['default', 'tasks']\n  ```\n\n  ```sql\n  SELECT queue FROM jobs GROUP BY queue\n  ```\n\n* Number of jobs per queue\n\n  ```python\n  \u003e\u003e\u003e rq.count(\"default\")\n  10\n  ```\n\n  ```sql\n  SELECT queue, COUNT(*) FROM jobs WHERE queue = 'default' GROUP BY queue\n  ```\n\n* Number of jobs per status\n\n  ```python\n  \u003e\u003e\u003e rq.stats()\n  {'default': QueueStats(name='default', total=10, queued=10, claimed=0, success=0, failed=0, expired=0, exhausted=0, cancelled=0)}\n  ```\n\n  ```sql\n  SELECT queue, status, COUNT(*) FROM jobs GROUP BY queue, status\n  ```\n\n* Failed jobs\n\n  Note that the `failed` jobs are still going to be picked up and reprocessed\n  until they are marked as `success`, `exhausted`, `expired`, or `cancelled`.\n\n  ```python\n  
\u003e\u003e\u003e rq.count(\"default\", rq.FAILED)\n  5\n  ```\n\n  ```sql\n  SELECT * FROM jobs WHERE queue = 'default' AND status = 'failed'\n  ```\n\n* Pending jobs, ready to be picked up by a worker\n\n  ```python\n  \u003e\u003e\u003e rq.count(\"default\", [rq.QUEUED, rq.FAILED])\n  5\n  ```\n\n  ```sql\n  SELECT * FROM jobs WHERE queue = 'default' AND status IN ('queued', 'failed')\n  ```\n\n* Claimed jobs that are currently being processed by a worker\n\n  ```python\n  \u003e\u003e\u003e rq.count(\"default\", rq.CLAIMED)\n  5\n  ```\n\n  ```sql\n  SELECT * FROM jobs WHERE queue = 'default' AND status = 'claimed'\n  ```\n\n* Rescheduled jobs\n\n  You can find all rescheduled jobs using SQL by filtering for those that are\n  queued, but have attempts and were claimed before.\n\n  ```sql\n  SELECT * FROM jobs\n  WHERE status = 'queued' AND attempts \u003e 0 AND claimed_at IS NOT NULL\n  ```\n\n* Rejected jobs\n\n  ```sql\n  SELECT * FROM jobs\n  WHERE status = 'queued' AND attempts \u003e 0 AND claimed_at IS NULL\n  ```\n\n## How it works\n\n### Jobs table\n\nRaquel uses **a single database table** called `jobs`.\nThis is all it needs. Can you believe it?\n\nHere is the schema of the `jobs` table:\n\n| Column | Type | Description | Default | Nullable |\n|--------|------|-------------|-------------|--------|\n| id | UUID | Unique identifier of the job. | | No |\n| queue | TEXT | Name of the queue. | `\"default\"` | No |\n| payload | TEXT | Payload of the job. It can be anything. Just needs to be serializable to text. | Null | Yes |\n| status | TEXT | Status of the job. | `\"queued\"` | No |\n| max_age | INTEGER | Maximum age of the job in milliseconds. | Null | Yes |\n| max_retry_count | INTEGER | Maximum number of retries. | Null | Yes |\n| min_retry_delay | INTEGER | Minimum delay between retries in milliseconds. | `1000` | Yes |\n| max_retry_delay | INTEGER | Maximum delay between retries in milliseconds. 
| `12 * 3600 * 1000` | Yes |\n| backoff_base | INTEGER | Base in milliseconds for exponential retry backoff. | `1000` | Yes |\n| enqueued_at | BIGINT | Time when the job was enqueued in milliseconds since epoch (UTC). | `now` | No |\n| scheduled_at | BIGINT | Time when the job is scheduled to run in milliseconds since epoch (UTC). | `now` | No |\n| attempts | INTEGER | Number of attempts to execute the job. | `0` | No |\n| error | TEXT | Error message if the job failed. | Null | Yes |\n| error_trace | TEXT | Error traceback if the job failed. | Null | Yes |\n| claimed_by | TEXT | ID or name of the worker that claimed the job. | Null | Yes |\n| claimed_at | BIGINT | Time when the job was claimed in milliseconds since epoch (UTC). | Null | Yes |\n| finished_at | BIGINT | Time when the job was finished in milliseconds since epoch (UTC). | Null | Yes |\n\nCheck out all ways to create the `jobs` table in the\n[Create jobs table](#create-jobs-table) section.\n\n### Job status\n\n![Job status](https://raw.githubusercontent.com/vduseev/raquel/master/docs/job_status.png)\n\nJobs can have the following statuses:\n\n* `queued` - Job is waiting to be picked up by a worker.\n* `claimed` - Job is currently locked and is being processed by a worker\n  (in databases such as PostgreSQL, MySQL, etc., once the job is claimed,\n  its row is locked until the worker is done with it).\n* `success` - Job was successfully executed.\n* `failed` - Job failed to execute. This happens when an exception was\n  caught by the `dequeue()` context manager. The last error message and\n  traceback are stored in the `error` and `error_trace` columns. 
Job will be\n  retried again, meaning it will be rescheduled with an exponential\n  backoff delay.\n* `cancelled` - Job was manually cancelled.\n* `expired` - Job was not picked up by a worker in time (when `max_age`\n  is set).\n* `exhausted` - Job has reached the maximum number of retries (when\n  `max_retry_count` is set).\n\nThe job can be picked up by a worker in any of the following three states:\n`queued`, `failed`, or `claimed`.\n\nThe first two are most common. The job will be picked up if:\n\n1. Job status is `queued` (scheduled or rejected) or `failed`\n  (failed to be processed and is being retried);\n1. And its `scheduled_at` time is in the past;\n1. And its `max_age` is not set or its `scheduled_at + max_age` is in the future.\n\nA job in the `claimed` state is a special case that happens when some worker\nmarked the job as claimed but failed to process it. In this case the row\nrepresenting the job is not locked in the database. If more than a minute\nhas passed since the job was claimed (`claimed_at + 1 minute`) and the row is\nnot locked (meaning the worker that claimed it is dead), any worker can\nreclaim it for itself.\n\n### One job per worker\n\nHow do we guarantee that the same job is not picked up by multiple workers?\n\nShort answer: by locking the row and using the `claimed` status.\n\nIn PostgreSQL, the `SELECT FOR UPDATE SKIP LOCKED` statement is used when\nselecting the job. 
This statement locks the row for the duration of the\ntransaction and allows other workers to see that the row is locked.\nIn other databases that support this (such as Oracle, MySQL) a similar\napproach is used.\n\nIn extremely simple databases, such as SQLite, the fact that the whole\ndatabase is locked during a write operation guarantees that no other worker\nwill be able to set the job status to `claimed` at the same time.\n\n### Database transactions\n\nThe `dequeue()` context manager works by making three consecutive and\nindependent SQL transactions:\n\n* **Transaction 1: Expire old jobs**: Looks for jobs whose `max_age` is not\n  null and whose `scheduled_at + max_age` is in the past and updates their\n  status to `expired`.\n* **Transaction 2: Claim a job**: Selects the next job from the queue and\n  sets its status to `claimed`, all in one go. It either succeeds in claiming\n  the job or not.\n* **Transaction 3: Process a job**: Places a database lock on that \"claimed\"\n  row with the job details for the entire duration of the processing and then\n  updates the job with an appropriate status value:\n\n  * `success` if the job is processed successfully and we are done with it.\n  * `failed` if an exception was caught by the context manager or the job was\n    manually marked as failed. The job will be rescheduled for a retry.\n  * `queued` if the job was manually rejected or manually rescheduled for a\n    later time.\n  * `cancelled` if the job is manually cancelled.\n  * `exhausted` if the job has reached the maximum number of retries.\n\nAll of that happens inside the context manager itself.\n\n### Sudden shutdown\n\nIf a worker dies while attempting to claim a job, the transaction opened by\nthe worker is rolled back and the row is unlocked by the database. Another\nworker can claim it and process it.\n\nIf a worker dies while processing a job, the row is unlocked by the database\nbut remains in the `claimed` status. 
Another worker can pick it up and\nprocess it.\n\n### Retry delay\n\nThe next retry time after a failed job is calculated as follows:\n\n* Take the current `scheduled_at` time.\n* Add the time it took to process the job (its duration).\n* Add the retry delay.\n\nThe retry delay itself is calculated as follows:\n\n```python\nbackoff_base * 2 ** attempt\n```\n\nBut this is just a *planned* retry delay. The *actual* retry delay is capped\nbetween the `min_retry_delay` and `max_retry_delay` values. The `min_retry_delay`\ndefaults to 1 second and the `max_retry_delay` defaults to 12 hours. The\n`backoff_base` defaults to 1 second.\n\nIn other words, here is how your job will be retried (assuming there is\nalways a worker available and the job takes almost no time to process):\n\n| Retry   | Delay |\n|---------|-------------|\n| 1       | 1 second after the 1st attempt |\n| 2       | 2 seconds after the 2nd attempt |\n| 3       | 4 seconds after the 3rd attempt |\n| ...     | ... |\n| 6       | ~2 minutes after the 6th attempt |\n| ...     | ... |\n| 10      | ~30 minutes after the 10th attempt |\n| ...     | ... |\n| 14      | ~9 hours after the 14th attempt |\n| ...     | ... |\n\nand so on, up to the maximum delay of 12 hours, or up to the maximum value\nyou set for this job using the `max_retry_delay` setting.\n\nFor certain types of jobs, it makes sense to chill out for a bit before\nretrying. For example, an API might have a rate limit you've just hit or\nsome data might not be ready yet. In such cases, you can set the\n`min_retry_delay` to a higher value, such as 10 or 30 seconds.\n\nAll durations and timestamps are in milliseconds. So 10 seconds is\n`10 * 1000 = 10000` milliseconds. 
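The capped delay described above can be sketched in a few lines of Python (a simplified illustration of the rule, not Raquel's actual implementation; all values are in milliseconds, and `attempt` is assumed to start at 0):

```python
# Sketch of the retry delay rule: exponential backoff based on
# backoff_base, capped between min_retry_delay and max_retry_delay.
# Not Raquel's actual implementation; all values are in milliseconds.
def capped_retry_delay(
    attempt: int,
    backoff_base: int = 1000,                 # default: 1 second
    min_retry_delay: int = 1000,              # default: 1 second
    max_retry_delay: int = 12 * 3600 * 1000,  # default: 12 hours
) -> int:
    planned = backoff_base * 2 ** attempt     # planned exponential delay
    # The actual delay is clamped to the [min, max] range
    return max(min_retry_delay, min(planned, max_retry_delay))
```

With the defaults, `capped_retry_delay(3)` returns `8000` (8 seconds), and the result stops growing at 12 hours no matter how large `attempt` gets.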
Similarly, 1 minute is `60 * 1000 = 60000`\nmilliseconds.\n\n## Create jobs table\n\n### Using create_all()\n\nYou can create the table using the `create_all()` method, which will\nautomatically use the supported syntax for the database you are using (it is\nsafe to run it multiple times; it only creates the table once).\n\n```python\n# Works for all databases\nrq.create_all()\n```\n\n### Using SQL create table\n\nAlternatively, the `jobs` table can be created **manually** using SQL.\nFor **Postgres** you can use\n[this example](examples/create_jobs_table/create_table_postgres.sql).\n\n### Using Alembic migrations\n\n#### Autogenerate migration\n\nIf you are using Alembic, the only thing you need to do is to import\nRaquel's metadata object and add it as a target inside the\n`context.configure()` calls in Alembic's `env.py` file.\n\n```python\n# Alembic's env.py configuration file\n\n# Import Raquel's base metadata object\nfrom raquel.models.base_sql import BaseSQL as RaquelBaseSQL\n\n# This code already exists in the Alembic configuration file\ndef run_migrations_offline() -\u003e None:\n    # ...\n    context.configure(\n        # ...\n        target_metadata=[\n            target_metadata,\n            # Add Raquel's metadata\n            RaquelBaseSQL.metadata,\n        ],\n        # ...\n    )\n\n# Same for online migrations\ndef run_migrations_online() -\u003e None:\n    # ...\n    context.configure(\n        # ...\n        target_metadata=[\n            target_metadata,\n            # Add Raquel's metadata\n            RaquelBaseSQL.metadata,\n        ],\n        # ...\n    )\n```\n\nYou only need to do this once. 
The first time you auto-generate a migration\nafter that, Alembic will automatically create a proper migration for you.\n\n```shell\nalembic revision --autogenerate -m \"raquel\"\n```\n\nIf Raquel is ever updated to add new columns or indexes, you can always\nupgrade the Raquel package and generate a follow-up migration that will\nadd the new changes.\n\nCurrently, there are no plans to change the schema of the `jobs` table. Any\nnew changes are expected to be backward compatible.\n\n#### Manual migration\n\nIf you are writing Alembic migrations manually, you can use the\n[example](examples/create_jobs_table/alembic.py) of one written for the\ncurrent version of Raquel.\n\n## Production ready\n\nCan you trust Raquel with your production? Yes, you can! Here is why:\n\n* Raquel is dead simple.\n\n  You can rewrite the whole library in any programming language using plain SQL\n  in about a day. We keep the code simple and maintainable.\n\n* It's reliable.\n\n  The jobs are stored in a relational database, and exclusive row locks and\n  rollbacks are handled through ACID transactions.\n\n* Already used in production by several companies.\n\n  * [Dynatrace](https://www.dynatrace.com)\n\n* Licensed under the [Apache 2.0 license](LICENSE).\n\n  You can use it in any project or fork it and do whatever you want with it.\n\n* Actively maintained.\n\n  The library is actively maintained by\n  [Vagiz Duseev](https://github.com/vduseev), who is also the author of some\n  other popular Python packages such as\n  [opensearch-logger](https://github.com/vduseev/opensearch-logger).\n\n* Platform and database agnostic.\n\n  Save yourself the pain of migrating between database vendors or versions.\n  All timestamps are stored as milliseconds since epoch (UTC timezone).\n  Payloads are stored as text. Job IDs are random UUIDs to allow migration\n  between databases and HA setups.\n\n## Fun facts\n\n* Raquel is named after the famous actress Raquel Welch. 
Many years ago, I\n  used to attend a local gym, where there was a shrine dedicated to Arnold\n  Schwarzenegger. Posters, memorabilia, and a small statue of him. Apparently,\n  Welch was once considered as Schwarzenegger's co-star for the movie\n  \"Conan the Barbarian\". The gym owner liked this alternative casting so much\n  that he hung a poster from \"One Million Years B.C.\" with her right\n  beside the statue of Arnold. I didn't watch either of these movies and\n  only found out they were never in the same film together when I sat down\n  to write this library.\n* The name Raquel is also a play on the words \"queue\" and \"SQL\".\n* The library exists because solutions like Celery and Dramatiq are too\n  complex for small-scale projects, too opinionated, unpredictable, and\n  opaque.\n\n## Contribute\n\nContributions are welcome 🎉! See [CONTRIBUTING.md](docs/CONTRIBUTING.md) for\ndetails. We follow the [Code of Conduct](docs/CODE_OF_CONDUCT.md).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvduseev%2Fraquel","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fvduseev%2Fraquel","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvduseev%2Fraquel/lists"}