{"id":14371997,"url":"https://github.com/janbjorge/pgqueuer","last_synced_at":"2026-03-12T23:05:36.564Z","repository":{"id":236383269,"uuid":"788904329","full_name":"janbjorge/pgqueuer","owner":"janbjorge","description":"PgQueuer is a Python library leveraging PostgreSQL for efficient job queuing.","archived":false,"fork":false,"pushed_at":"2026-03-01T17:50:56.000Z","size":1664,"stargazers_count":1441,"open_issues_count":7,"forks_count":28,"subscribers_count":8,"default_branch":"main","last_synced_at":"2026-03-01T19:51:13.412Z","etag":null,"topics":["postgres","python","queue"],"latest_commit_sha":null,"homepage":"https://janbjorge.github.io/pgqueuer/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/janbjorge.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null},"funding":{"github":["janbjorge"],"patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"lfx_crowdfunding":null,"polar":null,"buy_me_a_coffee":"jeybee","thanks_dev":null,"custom":null}},"created_at":"2024-04-19T10:11:43.000Z","updated_at":"2026-03-01T17:51:44.000Z","dependencies_parsed_at":"2024-08-02T01:26:15.967Z","dependency_job_id":"824ddec9-782c-46ff-a655-564abd614b10","html_url":"https://github.com/janbjorge/pgqueuer","commit_stats":{"total_commits":195,"total_committers":3,"mean_commits":65.0,"dds":0.02051282051282055,"last_synced_commit":"a3802fd5
b43f7f4c0ae5ca8fc0672a5f43c1dc48"},"previous_names":["janbjorge/pgqueuer"],"tags_count":99,"template":false,"template_full_name":null,"purl":"pkg:github/janbjorge/pgqueuer","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/janbjorge%2Fpgqueuer","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/janbjorge%2Fpgqueuer/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/janbjorge%2Fpgqueuer/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/janbjorge%2Fpgqueuer/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/janbjorge","download_url":"https://codeload.github.com/janbjorge/pgqueuer/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/janbjorge%2Fpgqueuer/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30448617,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-12T21:31:01.033Z","status":"ssl_error","status_checked_at":"2026-03-12T21:30:43.161Z","response_time":114,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["postgres","python","queue"],"created_at":"2024-08-27T23:01:32.907Z","updated_at":"2026-03-12T23:05:36.558Z","avatar_url":"https://github.com/janbjorge.png","language":"Python","readme":"# 🚀 PGQueuer 
– PostgreSQL‑powered job queues for Python\n\n[![CI](https://github.com/janbjorge/pgqueuer/actions/workflows/ci.yml/badge.svg)](https://github.com/janbjorge/pgqueuer/actions/workflows/ci.yml?query=branch%3Amain) [![pypi](https://img.shields.io/pypi/v/pgqueuer.svg)](https://pypi.python.org/pypi/pgqueuer) [![downloads](https://static.pepy.tech/badge/pgqueuer/month)](https://pepy.tech/project/pgqueuer) [![versions](https://img.shields.io/pypi/pyversions/pgqueuer.svg)](https://github.com/janbjorge/pgqueuer)\n\n[💻 Source](https://github.com/janbjorge/pgqueuer/) · [💬 Discord](https://discord.gg/C7YMBzcRMQ)\n\n## Overview\n\nPGQueuer turns your PostgreSQL database into a fast, reliable background job processor. Jobs live in the same database as your application data, so you scale without adding new infrastructure. No separate message broker required.\n\nBuilt on PostgreSQL's advanced concurrency features, PGQueuer uses `LISTEN/NOTIFY` for instant job notifications and `FOR UPDATE SKIP LOCKED` for efficient worker coordination. 
Its clean architecture supports everything from simple background tasks to complex workflows with rate limiting, deferred execution, and job tracking – all backed by your existing database.\n\n## Key Features\n\n### Core Capabilities\n- 💡 **Minimal integration**: Single Python package – bring your existing PostgreSQL connection and start enqueueing jobs\n- ⚛️ **PostgreSQL-powered concurrency**: Workers coordinate using `FOR UPDATE SKIP LOCKED` without stepping on each other\n- 🚧 **Instant notifications**: `LISTEN/NOTIFY` wakes idle workers as soon as jobs arrive (with polling backup for robustness)\n- 📦 **Clean architecture**: Built on the ports and adapters pattern with support for multiple drivers (asyncpg, psycopg sync/async)\n\n### Performance \u0026 Control\n- 👨‍🎓 **Batch operations**: Enqueue or acknowledge thousands of jobs per round trip for maximum throughput\n- 🎛️ **Rate limiting**: Control requests per second per entrypoint to respect external API limits\n- 🔒 **Concurrency control**: Limit parallel execution and enable serialized dispatch for shared resources\n- ⏰ **Deferred execution**: Schedule jobs to run at specific times with `execute_after`\n\n### Production Ready\n- ⏳ **Built-in scheduling**: Cron-like recurring tasks with no separate scheduler process\n- 🛡️ **Graceful shutdown**: Clean worker termination with job completion guarantees\n- 📊 **Real-time tracking**: Wait for job completion using `CompletionWatcher` with live status updates\n- 🔧 **Observability**: Prometheus metrics, distributed tracing (Logfire, Sentry), and interactive dashboard\n\n## Why PGQueuer?\n\nPGQueuer is designed for teams who value simplicity and want to leverage PostgreSQL as their job queue infrastructure. 
If you're already running PostgreSQL, PGQueuer lets you add background job processing without introducing new services to deploy, monitor, or coordinate.\n\n**Zero additional infrastructure**: Your jobs live in the same database as your application data, backed by ACID guarantees and familiar PostgreSQL tooling. No separate message broker to provision, scale, or keep in sync with your database.\n\n**Real-time with PostgreSQL primitives**: `LISTEN/NOTIFY` delivers sub-second job latency without polling loops. Workers wake instantly when jobs arrive, and `FOR UPDATE SKIP LOCKED` coordinates parallel workers without contention.\n\n**Built for modern Python**: First-class async/await support with clean shutdown semantics. Rate limiting, concurrency control, and scheduling are built in – not bolted on. Write entrypoints as regular async functions and let PGQueuer handle the orchestration.\n\n**When PGQueuer shines**: Single database stack, microservices that share a database, applications where job data needs transactional consistency with business data, teams who prefer fewer moving parts over distributed systems complexity.\n\nFor a detailed comparison with Celery and other approaches, see [docs/celery-comparison.md](docs/celery-comparison.md).\n\n## Installation\n\nPGQueuer targets Python 3.11+ and PostgreSQL 12+. Install the package and initialize the database schema:\n\n```bash\npip install pgqueuer\npgq install        # create tables and functions in your database\n```\n\nThe CLI reads `PGHOST`, `PGUSER`, `PGDATABASE` and related environment variables. Use `pgq install --dry-run` to preview SQL, or `--prefix myapp_` to namespace tables. Run `pgq uninstall` to remove the schema when done.\n\n## Quick Start\n\nPGQueuer uses **consumers** (workers that process jobs) and **producers** (code that enqueues jobs). Here's how both sides work together.\n\n### 1. Create a consumer\n\nThe consumer declares entrypoints (job handlers) and scheduled tasks. 
Each entrypoint corresponds to a job type that can be enqueued:\n\n```python\nfrom datetime import datetime\n\nimport asyncpg\nfrom pgqueuer import PgQueuer\nfrom pgqueuer.db import AsyncpgDriver\nfrom pgqueuer.models import Job, Schedule\n\nasync def main() -\u003e PgQueuer:\n    conn = await asyncpg.connect()\n    driver = AsyncpgDriver(conn)\n    pgq = PgQueuer(driver)\n\n    @pgq.entrypoint(\"fetch\")\n    async def process(job: Job) -\u003e None:\n        print(f\"Processed: {job!r}\")\n\n    @pgq.schedule(\"every_minute\", \"* * * * *\")\n    async def every_minute(schedule: Schedule) -\u003e None:\n        print(f\"Ran at {datetime.now():%H:%M:%S}\")\n\n    return pgq\n```\n\nRun the consumer with the CLI – it will start listening for work:\n\n```bash\npgq run examples.consumer:main\n```\n\n### 2. Enqueue jobs\n\nIn a separate process (your web app, script, etc.), create a `Queries` object and enqueue jobs by entrypoint name:\n\n```python\nimport asyncpg\nfrom pgqueuer.db import AsyncpgDriver\nfrom pgqueuer.queries import Queries\n\nasync def main() -\u003e None:\n    conn = await asyncpg.connect()\n    driver = AsyncpgDriver(conn)\n    queries = Queries(driver)\n\n    # Enqueue a job for the \"fetch\" entrypoint\n    await queries.enqueue(\"fetch\", b\"hello world\")\n```\n\nThe job arrives instantly via `LISTEN/NOTIFY`, and your consumer's `process` function handles it.\n\n## In-Memory Adapter\n\nPGQueuer can run entirely without PostgreSQL. 
The in-memory adapter is a drop-in replacement that implements the same port protocols as the real backend, so your job handlers and business logic stay identical.\n\n```python\nimport asyncio\nfrom pgqueuer import PgQueuer\nfrom pgqueuer.domain.models import Job\nfrom pgqueuer.domain.types import QueueExecutionMode\n\nasync def main():\n    pq = PgQueuer.in_memory()\n\n    @pq.entrypoint(\"send_email\")\n    async def send_email(job: Job) -\u003e None:\n        print(f\"Sending: {job.payload}\")\n\n    await pq.qm.queries.enqueue(\n        [\"send_email\"] * 3,\n        [b\"alice\", b\"bob\", b\"charlie\"],\n        [0] * 3,\n    )\n    await pq.qm.run(mode=QueueExecutionMode.drain)\n\nasyncio.run(main())\n```\n\nNo database, no Docker, no environment variables. This is useful for:\n\n- **Unit and integration tests** – fast, deterministic, no CI infrastructure\n- **Local prototyping** – try out queue logic before setting up Postgres\n- **Short-lived batch jobs** – process a fixed set of jobs and exit\n\nThe in-memory adapter does not provide durability, multi-process coordination, or ACID transactions. For production workloads, use the PostgreSQL backend. 
See the [full in-memory reference](https://janbjorge.github.io/pgqueuer/reference/in-memory/) for details and limitations.\n\n## Common Patterns\n\n### Batch Operations\n\nEnqueue thousands of jobs in a single database round trip:\n\n```python\nfrom pgqueuer.queries import Queries\n\n# Enqueue 1000 jobs at once\nawait queries.enqueue(\n    [\"fetch\"] * 1000,\n    [f\"payload_{i}\".encode() for i in range(1000)],\n    [0] * 1000,  # priorities\n)\n```\n\n### Rate Limiting \u0026 Concurrency Control\n\nControl execution frequency and parallelism per entrypoint:\n\n```python\n# Limit to 10 requests per second (useful for external APIs)\n@pgq.entrypoint(\"api_calls\", requests_per_second=10)\nasync def call_external_api(job: Job) -\u003e None:\n    await http_client.post(\"https://api.example.com\", data=job.payload)\n\n# Limit to 5 concurrent executions\n@pgq.entrypoint(\"db_writes\", concurrency_limit=5)\nasync def write_to_db(job: Job) -\u003e None:\n    await db.execute(\"INSERT INTO data VALUES (%s)\", job.payload)\n\n# Process jobs one at a time (serialized)\n@pgq.entrypoint(\"ordered_processing\", serialized_dispatch=True)\nasync def process_in_order(job: Job) -\u003e None:\n    await process_sequentially(job.payload)\n```\n\n### Deferred Execution\n\nSchedule jobs to run at a specific time:\n\n```python\nfrom datetime import timedelta\n\n# Execute 1 hour from now\nawait queries.enqueue(\n    \"send_reminder\",\n    payload=b\"Meeting in 1 hour\",\n    execute_after=timedelta(hours=1),\n)\n\n# Execute at a specific timestamp\nfrom datetime import datetime, timezone\nexecute_time = datetime(2024, 12, 25, 9, 0, tzinfo=timezone.utc)\nawait queries.enqueue(\n    \"send_greeting\",\n    payload=b\"Merry Christmas!\",\n    execute_after=execute_time,\n)\n```\n\n### Job Completion Tracking\n\nWait for jobs to finish and get their final status:\n\n```python\nfrom pgqueuer.completion import CompletionWatcher\n\n# Enqueue a job\njob_ids = await 
queries.enqueue(\"process_video\", video_data)\n\n# Wait for completion\nasync with CompletionWatcher(driver) as watcher:\n    status = await watcher.wait_for(job_ids[0])\n    print(f\"Job finished with status: {status}\")  # \"successful\", \"exception\", etc.\n\n# Track multiple jobs concurrently\nfrom asyncio import gather\n\nimage_ids = await queries.enqueue([\"render_img\"] * 20, [b\"...\"] * 20, [0] * 20)\n\nasync with CompletionWatcher(driver) as watcher:\n    statuses = await gather(*[watcher.wait_for(jid) for jid in image_ids])\n```\n\n### Shared Resources\n\nInitialize heavyweight objects once and inject them into all jobs:\n\n```python\nimport asyncpg\nfrom pgqueuer import PgQueuer\nfrom pgqueuer.db import AsyncpgDriver\nfrom pgqueuer.models import Job\n\nasync def create_pgqueuer() -\u003e PgQueuer:\n    conn = await asyncpg.connect()\n    driver = AsyncpgDriver(conn)\n\n    # Initialize shared resources (DB pools, HTTP clients, ML models, etc.)\n    resources = {\n        \"db_pool\": await asyncpg.create_pool(),\n        \"http_client\": httpx.AsyncClient(),\n        \"feature_flags\": {\"beta_mode\": True},\n    }\n\n    pgq = PgQueuer(driver, resources=resources)\n\n    @pgq.entrypoint(\"process_user\")\n    async def process_user(job: Job) -\u003e None:\n        ctx = pgq.qm.get_context(job.id)\n\n        # Access shared resources\n        pool = ctx.resources[\"db_pool\"]\n        http = ctx.resources[\"http_client\"]\n        flags = ctx.resources[\"feature_flags\"]\n\n        # Use them without recreating\n        user_data = await http.get(f\"https://api.example.com/users/{job.payload}\")\n        await pool.execute(\"INSERT INTO users VALUES ($1)\", user_data)\n\n    return pgq\n```\n\n## Web Framework Integration\n\n### FastAPI\n\nIntegrate PGQueuer with FastAPI's lifespan context:\n\n```python\nfrom contextlib import asynccontextmanager\nimport asyncpg\nfrom fastapi import Depends, FastAPI, Request\nfrom pgqueuer.db import 
AsyncpgPoolDriver\nfrom pgqueuer.queries import Queries\n\ndef get_queries(request: Request) -\u003e Queries:\n    return request.app.extra[\"queries\"]\n\n@asynccontextmanager\nasync def lifespan(app: FastAPI):\n    async with asyncpg.create_pool() as pool:\n        app.extra[\"queries\"] = Queries(AsyncpgPoolDriver(pool))\n        yield\n\napp = FastAPI(lifespan=lifespan)\n\n@app.post(\"/enqueue\")\nasync def enqueue_job(\n    payload: str,\n    queries: Queries = Depends(get_queries),\n):\n    job_ids = await queries.enqueue(\"process_task\", payload.encode())\n    return {\"job_ids\": job_ids}\n```\n\nFull example: [examples/fastapi_usage.py](examples/fastapi_usage.py)\n\n### Flask (Synchronous)\n\nUse the synchronous driver for traditional WSGI apps:\n\n```python\nfrom flask import Flask, g, jsonify, request\nimport psycopg\nfrom pgqueuer.db import SyncPsycopgDriver\nfrom pgqueuer.queries import SyncQueries\n\napp = Flask(__name__)\n\ndef get_driver():\n    if \"driver\" not in g:\n        conn = psycopg.connect(autocommit=True)\n        g.driver = SyncPsycopgDriver(conn)\n    return g.driver\n\n@app.teardown_appcontext\ndef teardown_db(exception):\n    driver = g.pop(\"driver\", None)\n    if driver:\n        driver._connection.close()\n\n@app.route(\"/enqueue\", methods=[\"POST\"])\ndef enqueue():\n    queries = SyncQueries(get_driver())\n    data = request.get_json()\n\n    job_ids = queries.enqueue(\n        data[\"entrypoint\"],\n        data[\"payload\"].encode(),\n        data.get(\"priority\", 0),\n    )\n    return jsonify({\"job_ids\": job_ids})\n```\n\nFull example: [examples/flask_sync_usage.py](examples/flask_sync_usage.py)\n\n## Scheduling\n\nDefine recurring tasks with cron-style expressions:\n\n```python\nfrom pgqueuer.models import Schedule\n\n# Run daily at midnight\n@pgq.schedule(\"cleanup\", \"0 0 * * *\")\nasync def cleanup(schedule: Schedule) -\u003e None:\n    await perform_cleanup()\n    print(f\"Cleanup completed at 
{schedule.last_execution}\")\n\n# Run every 5 minutes\n@pgq.schedule(\"sync_data\", \"*/5 * * * *\")\nasync def sync_data(schedule: Schedule) -\u003e None:\n    await sync_with_external_service()\n\n# Run every weekday at 9 AM\n@pgq.schedule(\"morning_report\", \"0 9 * * 1-5\")\nasync def morning_report(schedule: Schedule) -\u003e None:\n    await generate_and_send_report()\n```\n\nSchedules are stored in PostgreSQL and survive restarts. For schedule-only workers (no job processing), use `SchedulerManager` directly – see [examples/scheduler.py](examples/scheduler.py).\n\n## Advanced Features\n\nPGQueuer includes many advanced capabilities for production use:\n\n- **Custom Executors**: Implement retry strategies with exponential backoff → [docs/pgqueuer.md#custom-executors](docs/pgqueuer.md#custom-job-executors)\n- **Distributed Tracing**: Integrate with Logfire, Sentry for request tracing → [docs/tracing.md](docs/tracing.md)\n- **Prometheus Metrics**: Export queue depth, latency, throughput metrics → [docs/prometheus-metrics-service.md](docs/prometheus-metrics-service.md)\n- **Job Cancellation**: Mark jobs for cancellation with PostgreSQL NOTIFY → [docs/pgqueuer.md#job-cancellation](docs/pgqueuer.md#job-cancellation)\n- **Heartbeat Monitoring**: Keep long-running jobs alive with periodic updates → [docs/pgqueuer.md#automatic-heartbeat](docs/pgqueuer.md#automatic-heartbeat)\n\n## Drivers\n\nPGQueuer works with multiple PostgreSQL drivers:\n\n**Async drivers** (for workers and enqueueing):\n- **AsyncpgDriver** – single `asyncpg` connection\n- **AsyncpgPoolDriver** – `asyncpg` connection pool (recommended for high throughput)\n- **PsycopgDriver** – psycopg 3 async interface\n\n**Sync driver** (enqueue-only):\n- **SyncPsycopgDriver** – blocking psycopg connection for traditional web apps\n\nExample with connection pool:\n\n```python\nimport asyncpg\nfrom pgqueuer import PgQueuer\nfrom pgqueuer.db import AsyncpgPoolDriver\n\npool = await asyncpg.create_pool()\ndriver = 
AsyncpgPoolDriver(pool)\npgq = PgQueuer(driver)\n```\n\nSee [docs/driver.md](docs/driver.md) for detailed driver documentation.\n\n## CLI Tools\n\nPGQueuer includes a command-line interface for common operations:\n\n```bash\n# Setup and migration\npgq install                         # Install schema\npgq install --prefix myapp_         # Install with table prefix\npgq install --dry-run               # Preview SQL without executing\npgq upgrade                         # Migrate schema to latest version\npgq uninstall                       # Remove schema\n\n# Running workers\npgq run examples.consumer:main      # Start worker from Python callable\n\n# Monitoring\npgq dashboard                       # Interactive terminal dashboard\npgq dashboard --interval 10         # Refresh every 10 seconds\npgq dashboard --tail 25             # Show 25 most recent jobs\n```\n\n## Monitor Your Queues\n\nLaunch the interactive dashboard to watch queue activity in real time:\n\n```bash\npgq dashboard --interval 10 --tail 25 --table-format grid\n```\n\nExample output:\n\n```text\n+---------------------------+-------+------------+--------------------------+------------+----------+\n|          Created          | Count | Entrypoint | Time in Queue (HH:MM:SS) |   Status   | Priority |\n+---------------------------+-------+------------+--------------------------+------------+----------+\n| 2024-05-05 16:44:26+00:00 |  49   |    sync    |         0:00:01          | successful |    0     |\n| 2024-05-05 16:44:27+00:00 |  12   |   fetch    |         0:00:03          | queued     |    0     |\n| 2024-05-05 16:44:28+00:00 |   3   |  api_call  |         0:00:00          | picked     |    5     |\n+---------------------------+-------+------------+--------------------------+------------+----------+\n```\n\nThe dashboard shows queue depth, processing times, job statuses, and priorities. 
See [docs/dashboard.md](docs/dashboard.md) for more options.\n\n## Documentation\n\n| Topic | Description |\n|-------|-------------|\n| [Architecture \u0026 Design](docs/architecture.md) | Clean architecture, ports and adapters, design decisions |\n| [Core Features Guide](docs/pgqueuer.md) | Shared resources, executors, cancellation, scheduling, tracking |\n| [Driver Selection](docs/driver.md) | Choosing and configuring asyncpg, psycopg, sync drivers |\n| [Celery Comparison](docs/celery-comparison.md) | Comparison with other job queue approaches |\n| [Distributed Tracing](docs/tracing.md) | Logfire and Sentry integration |\n| [Prometheus Metrics](docs/prometheus-metrics-service.md) | Exposing queue metrics for monitoring |\n| [Dashboard](docs/dashboard.md) | CLI dashboard options and usage |\n\n## Development and Testing\n\nPGQueuer uses [Testcontainers](https://testcontainers.com/?language=python) to launch an ephemeral PostgreSQL instance automatically for the integration test suite – no manual Docker Compose setup or pre‑provisioned database required. Just ensure Docker (or another supported container runtime) is running locally.\n\nTypical development workflow:\n\n1. Install dependencies (including extras): `uv sync --all-extras`\n2. Run lint \u0026 type checks: `uv run ruff check .` and `uv run mypy .`\n3. Run the test suite (will start/stop a disposable PostgreSQL container automatically): `uv run pytest`\n4. (Optional) Aggregate target (if you prefer the Makefile): `make check`\n\nThe containerized database lifecycle is fully automatic; tests handle creation, migrations, and teardown. This keeps your local environment clean and ensures consistent, isolated runs.\n\n## License\n\nPGQueuer is MIT licensed. See [LICENSE](LICENSE) for details.\n\n---\nReady to supercharge your workflows? 
Install PGQueuer today and start processing jobs with the database you already trust.\n","funding_links":["https://github.com/sponsors/janbjorge","https://buymeacoffee.com/jeybee"],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjanbjorge%2Fpgqueuer","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjanbjorge%2Fpgqueuer","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjanbjorge%2Fpgqueuer/lists"}