{"id":43694810,"url":"https://github.com/vibhorkum/pg_background","last_synced_at":"2026-04-25T19:01:12.993Z","repository":{"id":45783751,"uuid":"60564907","full_name":"vibhorkum/pg_background","owner":"vibhorkum","description":"Production-grade PostgreSQL extension to execute arbitrary SQL in background worker processes — with async execution, autonomous transactions, cookie-protected handles, cancellation, progress reporting, and observability.","archived":false,"fork":false,"pushed_at":"2026-03-30T14:23:05.000Z","size":1306,"stargazers_count":238,"open_issues_count":4,"forks_count":41,"subscribers_count":12,"default_branch":"master","last_synced_at":"2026-03-30T16:28:41.412Z","etag":null,"topics":["async","audit-logging","autonomous","autonomous-transactions","background-worker","c","database","etl","parallel-queries","pgextension","plpgsql","postgres","postgresql","postgresql-extension","sql","vacuum"],"latest_commit_sha":null,"homepage":"https://github.com/vibhorkum/pg_background","language":"PLpgSQL","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vibhorkum.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2016-06-06T22:25:42.000Z","updated_at":"2026-03-26T02:44:08.000Z","dependencies_parsed_at":"2024-10-27T20:14:27.178Z","dependency_job_id":"8f5ed65b-ce7c-4062-b34a-4434a4529357","html_url":"https://github.com/vibhorkum/pg_background","commit_stats":null,"previous_names":[],"tags_count":10,"template":fal
se,"template_full_name":null,"purl":"pkg:github/vibhorkum/pg_background","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vibhorkum%2Fpg_background","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vibhorkum%2Fpg_background/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vibhorkum%2Fpg_background/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vibhorkum%2Fpg_background/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vibhorkum","download_url":"https://codeload.github.com/vibhorkum/pg_background/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vibhorkum%2Fpg_background/sbom","scorecard":{"id":919942,"data":{"date":"2025-08-11","repo":{"name":"github.com/vibhorkum/pg_background","commit":"ea1bf29135fb1f6caa0d0917f46063cca344261d"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":4.5,"checks":[{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Maintained","score":10,"reason":"10 commit(s) and 2 issue activity found in the last 90 days -- score normalized to 10","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in 
the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Token-Permissions","score":-1,"reason":"No tokens found","details":null,"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Code-Review","score":1,"reason":"Found 2/15 approved changesets -- score normalized to 1","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Dangerous-Workflow","score":-1,"reason":"no workflows found","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Pinned-Dependencies","score":-1,"reason":"no dependencies found","details":null,"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Vulnerabilities","score":10,"reason":"0 existing vulnerabilities detected","details":null,"documentation":{"short":"Determines if the project has open, known unfixed 
vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: GNU General Public License v3.0: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":0,"reason":"Project has not signed or included provenance with any releases.","details":["Warn: release artifact v1.0 not signed: https://api.github.com/repos/vibhorkum/pg_background/releases/42671247","Warn: release artifact v1.0 does not have provenance: https://api.github.com/repos/vibhorkum/pg_background/releases/42671247"],"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":-1,"reason":"internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not 
accessible by integration","details":null,"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score normalized to 0","details":["Warn: 0 commits out of 27 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}}]},"last_synced_at":"2025-08-25T00:54:37.521Z","repository_id":45783751,"created_at":"2025-08-25T00:54:37.522Z","updated_at":"2025-08-25T00:54:37.522Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32273223,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-25T18:29:39.964Z","status":"ssl_error","status_checked_at":"2026-04-25T18:29:32.149Z","response_time":59,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["async","audit-logging","autonomous","autonomous-transactions","background-worker","c","database","etl","parallel-queries","pgextension","plpgsql","postgres","postgresql","postgresql-extension","sql","vacuum"],"created_at":"2026-02-05T04:08:58.349Z","updated_at":"2026-04-25T19:01:12.974Z","avatar_url":"https://github.com/vibhorkum.png","language":"PLpgSQL","funding_links":[],"categories":["PLpgSQL"],"sub_categories":[],"readme":"# pg_background: Production-Grade Background SQL for PostgreSQL\n\n[![PostgreSQL](https://img.shields.io/badge/PostgreSQL-14--18-blue.svg)](https://www.postgresql.org/)\n[![Version](https://img.shields.io/badge/version-1.9-brightgreen.svg)](https://github.com/vibhorkum/pg_background)\n[![License](https://img.shields.io/badge/license-PostgreSQL-green.svg)](LICENSE)\n[![CI](https://github.com/vibhorkum/pg_background/actions/workflows/ci.yml/badge.svg)](https://github.com/vibhorkum/pg_background/actions/workflows/ci.yml)\n\nExecute arbitrary SQL commands in **background worker processes** within PostgreSQL. 
Built for production workloads requiring asynchronous execution, autonomous transactions, and long-running operations without blocking client sessions.\n\n---\n\n## Table of Contents\n\n- [Overview](#overview)\n- [Key Features](#key-features)\n- [PostgreSQL Version Compatibility](#postgresql-version-compatibility)\n- [Installation](#installation)\n- [Quick Start](#quick-start)\n  - [V2 API (Recommended)](#v2-api-recommended)\n  - [V1 API (Legacy)](#v1-api-legacy)\n- [Complete API Reference](#complete-api-reference)\n  - [V2 Functions](#v2-functions)\n  - [V1 Functions (Deprecated)](#v1-functions-deprecated)\n- [Critical Semantic Distinctions](#critical-semantic-distinctions)\n  - [Cancel vs Detach](#cancel-vs-detach)\n  - [V1 vs V2 API](#v1-vs-v2-api)\n  - [PID Reuse Protection](#pid-reuse-protection)\n  - [NOTIFY and Autonomous Commits](#notify-and-autonomous-commits)\n- [Security Model](#security-model)\n- [Use Cases with Examples](#use-cases-with-examples)\n- [Operational Guidance](#operational-guidance)\n  - [Resource Management](#resource-management)\n  - [Performance Tuning](#performance-tuning)\n  - [Monitoring](#monitoring)\n- [Troubleshooting](#troubleshooting)\n- [Architecture \u0026 Design](#architecture--design)\n- [Known Limitations](#known-limitations)\n- [Best Practices](#best-practices)\n- [Migration Guide](#migration-guide)\n- [Testing](#testing)\n- [Contributing](#contributing)\n- [License](#license)\n- [Author](#author)\n\n---\n\n## Overview\n\n`pg_background` enables PostgreSQL to execute SQL commands asynchronously in dedicated background worker processes. 
Unlike `dblink` (which creates a separate connection) or client-side async patterns, `pg_background` workers run **inside** the database server with full access to local resources while operating in **independent transactions**.\n\n**Production-Critical Benefits:**\n- **Non-blocking operations**: Launch long-running queries without holding client connections\n- **Autonomous transactions**: Commit/rollback independently of the caller's transaction\n- **Resource isolation**: Workers have their own memory context and error handling\n- **Observable lifecycle**: Track, cancel, and wait for completion with explicit operations\n- **Security-hardened**: NOLOGIN role-based access, SECURITY DEFINER helpers, no PUBLIC grants\n\n**Typical Production Use Cases:**\n- Background maintenance (VACUUM, ANALYZE, REINDEX)\n- Asynchronous audit logging\n- Long-running ETL pipelines\n- Independent notification delivery\n- Parallel query pattern implementation\n\n---\n\n## Key Features\n\n### Core Capabilities\n- ✅ **Async SQL Execution**: Offload queries to background workers  \n- ✅ **Result Retrieval**: Stream results back via shared memory queues  \n- ✅ **Autonomous Transactions**: Commit independently of calling session  \n- ✅ **Explicit Lifecycle Control**: Launch, wait, cancel, detach, and list operations  \n- ✅ **Production-Hardened Security**: NOLOGIN role, privilege helpers, zero PUBLIC access  \n\n### V2 API Enhancements (v1.6+)\n- **Cookie-Based Identity**: `(pid, cookie)` tuples prevent PID reuse confusion\n- **Explicit Cancellation**: `cancel_v2()` distinct from `detach_v2()`\n- **Synchronous Wait**: `wait_v2()` blocks until completion or timeout\n- **Worker Observability**: `list_v2()` for real-time monitoring and cleanup\n- **Fire-and-Forget Submit**: `submit_v2()` for side-effect queries\n\n### V1.8 Enhancements\n- **Session Statistics**: `stats_v2()` provides worker counts, success/failure rates, and execution times\n- **Progress Reporting**: Workers can report progress 
via `pg_background_progress()`\n- **GUC Configuration**: `pg_background.max_workers`, `worker_timeout`, `default_queue_size`\n- **Resource Limits**: Built-in max workers enforcement per session\n- **Enhanced Robustness**: Overflow protection, UTF-8 aware truncation, race condition fixes\n- **Relocatable Extension**: Full support for `CREATE EXTENSION ... WITH SCHEMA`\n\n### V1.9 Enhancements (Current)\n- **Worker Labels**: Optional `label` parameter on `launch_v2()`/`submit_v2()` for operational clarity\n- **Structured Error Returns**: `pg_background_error_info_v2()` returns SQLSTATE, message, detail, hint, context\n- **Result Metadata**: `pg_background_result_info_v2()` returns row_count, command_tag, completed, has_error\n- **Batch Operations**: `pg_background_detach_all_v2()` and `pg_background_cancel_all_v2()` for session cleanup\n\n---\n\n## PostgreSQL Version Compatibility\n\n| PostgreSQL Version | Support Status | Notes |\n|--------------------|----------------|-------|\n| **18** | ✅ Fully Supported | TupleDescAttr compatibility layer |\n| **17** | ✅ Fully Tested | Recommended for new deployments |\n| **16** | ✅ Fully Tested | Production-ready |\n| **15** | ✅ Fully Tested | pg_analyze_and_rewrite_fixedparams |\n| **14** | ✅ Fully Tested | Minimum supported version |\n| **13** | ❌ Not Supported | Use pg_background 1.6 or earlier |\n| **\u003c 13** | ❌ Not Supported | Use pg_background 1.4 or earlier |\n\n**Note**: Each PostgreSQL major version requires extension rebuild against its headers.\n\n---\n\n## Installation\n\n### Prerequisites\n\n- PostgreSQL 14+ with development headers (`postgresql-server-dev-*` or `postgresql##-devel`)\n- `pg_config` in `$PATH`\n- Build essentials: `gcc`, `make`\n- Superuser privileges for `CREATE EXTENSION`\n\n### Build from Source\n\n```bash\n# Clone repository\ngit clone https://github.com/vibhorkum/pg_background.git\ncd pg_background\n\n# Build extension\nmake clean\nmake\n\n# Install (requires appropriate privileges)\nsudo 
make install\n```\n\n### Enable Extension\n\n```sql\n-- Connect as superuser\nCREATE EXTENSION pg_background;\n\n-- Verify installation\nSELECT extname, extversion FROM pg_extension WHERE extname = 'pg_background';\n-- Expected output:\n--    extname     | extversion\n-- ---------------+------------\n--  pg_background | 1.9\n```\n\n### Library Loading\n\n`pg_background` does **not** require `shared_preload_libraries`. Workers are\nregistered dynamically (`RegisterDynamicBackgroundWorker`) and each worker\nprocess loads the library dynamically when it starts.\n\nAdding `pg_background` to `shared_preload_libraries` is **optional** and only\nneeded if you want the extension's GUC parameters\n(`pg_background.max_workers`, `pg_background.default_queue_size`,\n`pg_background.worker_timeout`) available in `postgresql.conf` and visible in\nall sessions from the start. Without SPL, the GUCs are registered on first\nuse (`CREATE EXTENSION`, `LOAD`, or the first `launch_v2` call). A session\n`SET` before that point raises an `unrecognized configuration parameter`\nerror; for configuration file entries (for example, `postgresql.conf` or\n`ALTER SYSTEM`) that are read before the library is loaded, PostgreSQL\ninstead reports a warning.\n\n| | Without SPL | With SPL |\n|---|---|---|\n| Extension works? | Yes | Yes |\n| GUCs in `postgresql.conf` | Not until first load | Immediately |\n| After `make install` | Workers pick up new `.so` automatically | **Restart required** (postmaster caches the library) |\n| Recommended for | Development, staging, simple setups | Production with tuned GUCs |\n\n### Custom Schema Installation\n\nThe extension is **relocatable**, allowing installation in any schema. 
This is useful for organizing extensions or avoiding namespace conflicts.\n\n```sql\n-- Create custom schema\nCREATE SCHEMA contrib;\n\n-- Install extension in custom schema\nCREATE EXTENSION pg_background WITH SCHEMA contrib;\n\n-- Verify installation\nSELECT extname, extversion, nspname AS schema\nFROM pg_extension e\nJOIN pg_namespace n ON n.oid = e.extnamespace\nWHERE e.extname = 'pg_background';\n-- Expected output:\n--    extname     | extversion | schema\n-- ---------------+------------+---------\n--  pg_background | 1.9        | contrib\n```\n\n**Using Extension in Custom Schema**:\n\nWhen installed in a custom schema, functions can be called with schema qualification or by adding the schema to `search_path`:\n\n```sql\n-- Option 1: Schema-qualified calls (capture the handle with psql's \gset,\n-- which stores the output columns as :pid and :cookie)\nSELECT * FROM contrib.pg_background_launch_v2('SELECT 1') AS h \gset\nSELECT * FROM contrib.pg_background_result_v2(:pid, :cookie) AS (result int);\n\n-- Option 2: Add schema to search_path\nSET search_path = contrib, public;\nSELECT * FROM pg_background_launch_v2('SELECT 1') AS h;\n```\n\n**Privileges with Custom Schema**:\n\nThe privilege helper functions automatically detect the extension's schema:\n\n```sql\n-- Grant privileges (works regardless of installation schema)\nSELECT contrib.grant_pg_background_privileges('app_user', true);\n\n-- Or if schema is in search_path\nSELECT grant_pg_background_privileges('app_user', true);\n```\n\n**Test Cases for Custom Schema Installation**:\n\n```sql\n-- Test 1: Basic installation in custom schema\nCREATE SCHEMA test_schema;\nCREATE EXTENSION pg_background WITH SCHEMA test_schema;\n\n-- Test 2: Launch worker from custom schema\nSELECT (h).pid, (h).cookie FROM test_schema.pg_background_launch_v2('SELECT 42') AS h \gset\n\n-- Test 3: Retrieve results\nSELECT * FROM test_schema.pg_background_result_v2(:pid, :cookie) AS (val int);\n-- Expected: val = 42\n\n-- Test 4: Privilege helpers work with custom schema\nCREATE ROLE test_user NOLOGIN;\nSELECT 
test_schema.grant_pg_background_privileges('test_user', true);\n-- Should output GRANT statements with test_schema prefix\n\n-- Test 5: Revoke privileges\nSELECT test_schema.revoke_pg_background_privileges('test_user', true);\n\n-- Test 6: V2 types are accessible\nSELECT (ROW(123, 456789)::test_schema.pg_background_handle).*;\n-- Expected: pid=123, cookie=456789\n\n-- Cleanup\nDROP ROLE test_user;\nDROP EXTENSION pg_background;\nDROP SCHEMA test_schema;\n```\n\n### Configure PostgreSQL\n\n```sql\n-- Set worker process limit (adjust based on your workload)\nALTER SYSTEM SET max_worker_processes = 32;\n\n-- Reload configuration\nSELECT pg_reload_conf();\n\n-- Verify setting\nSHOW max_worker_processes;\n```\n\n### Extension GUC Settings (v1.8+)\n\n```sql\n-- Limit concurrent workers per session (default: 16)\nSET pg_background.max_workers = 10;\n\n-- Set default queue size for workers (default: 64KB)\nSET pg_background.default_queue_size = '256KB';\n\n-- Set worker execution timeout (default: 0 = no limit)\nSET pg_background.worker_timeout = '5min';\n```\n\n| GUC Parameter | Default | Range | Description |\n|---------------|---------|-------|-------------|\n| `pg_background.max_workers` | 16 | 1-1000 | Max concurrent workers per session |\n| `pg_background.default_queue_size` | 65536 | 4KB-256MB | Default shared memory queue size |\n| `pg_background.worker_timeout` | 0 | 0-∞ | Worker execution timeout (0 = no limit) |\n\n---\n\n## Quick Start\n\n### V2 API (Recommended)\n\nThe v2 API provides cookie-based handle protection and explicit lifecycle semantics.\n\n#### 1. Launch a Background Job\n\n```sql\n-- Launch worker and capture handle\nSELECT * FROM pg_background_launch_v2(\n  'SELECT pg_sleep(5); SELECT count(*) FROM large_table'\n) AS handle;\n\n-- Output:\n--   pid  |      cookie       \n-- -------+-------------------\n--  12345 | 1234567890123456\n```\n\n#### 2. 
Retrieve Results\n\n```sql\n-- Results can only be consumed ONCE\nSELECT * FROM pg_background_result_v2(12345, 1234567890123456) AS (count BIGINT);\n\n-- Attempting second retrieval will error:\n-- ERROR: results already consumed for worker PID 12345\n```\n\n#### 3. Fire-and-Forget (Submit)\n\n```sql\n-- For queries with side effects only (no result consumption needed)\nSELECT * FROM pg_background_submit_v2(\n  'INSERT INTO audit_log (ts, event) VALUES (now(), ''system_check'')'\n) AS handle;\n\n-- Worker commits and exits automatically\n```\n\n#### 4. Cancel a Running Job\n\n```sql\n-- Request immediate cancellation\nSELECT pg_background_cancel_v2(pid, cookie);\n\n-- Or with grace period (500ms to finish current statement)\nSELECT pg_background_cancel_v2_grace(pid, cookie, 500);\n```\n\n⚠️ **Windows Limitation**: Cancel on Windows only sets interrupts; it cannot terminate an actively running statement. Always use `statement_timeout` on Windows.\n\n#### 5. Wait for Completion\n\n```sql\n-- Block until worker finishes\nSELECT pg_background_wait_v2(pid, cookie);\n\n-- Or wait with timeout (returns true if completed)\nSELECT pg_background_wait_v2_timeout(pid, cookie, 5000);  -- 5 seconds\n```\n\n#### 6. List Active Workers\n\n```sql\nSELECT *\nFROM pg_background_list_v2()\nAS (\n  pid int4,\n  cookie int8,\n  launched_at timestamptz,\n  user_id oid,\n  queue_size int4,\n  state text,\n  sql_preview text,\n  last_error text,\n  consumed bool\n)\nORDER BY launched_at DESC;\n```\n\n**State Values**:\n- `running`: Actively executing SQL\n- `stopped`: Completed successfully\n- `canceled`: Terminated via `cancel_v2()`\n- `error`: Failed with error (see `last_error`)\n\n#### 7. 
View Session Statistics (v1.8+)\n\n```sql\n-- Get session-wide worker statistics\nSELECT * FROM pg_background_stats_v2();\n\n-- Output:\n--  workers_launched | workers_completed | workers_failed | workers_active | avg_execution_ms | max_workers\n-- ------------------+-------------------+----------------+----------------+------------------+-------------\n--                42 |                38 |              2 |              2 |           1234.5 |          16\n```\n\n#### 8. Progress Reporting (v1.8+)\n\n**From within worker SQL** (report progress):\n```sql\n-- Launch a worker that reports progress\nSELECT * FROM pg_background_launch_v2($$\n  SELECT pg_background_progress(0, 'Starting...');\n  -- Do some work...\n  SELECT pg_background_progress(25, 'Phase 1 complete');\n  -- More work...\n  SELECT pg_background_progress(50, 'Halfway done');\n  -- Final work...\n  SELECT pg_background_progress(100, 'Complete');\n$$) AS h \gset\n```\n\n**From launcher** (check progress):\n```sql\n-- Poll worker progress (\gset above stored the handle columns as :pid and :cookie)\nSELECT * FROM pg_background_get_progress_v2(:pid, :cookie);\n\n-- Output:\n--  progress_pct | progress_msg\n-- --------------+---------------\n--            50 | Halfway done\n```\n\n### V1 API (Legacy)\n\nThe v1 API is retained for backward compatibility but **lacks cookie-based PID reuse protection**.\n\n```sql\n-- Launch (returns bare PID)\nSELECT pg_background_launch('VACUUM VERBOSE my_table') AS pid \gset\n\n-- Retrieve results\nSELECT * FROM pg_background_result(:pid) AS (result TEXT);\n\n-- Fire-and-forget (detach does NOT cancel!)\nSELECT pg_background_detach(:pid);\n```\n\n⚠️ **Production Warning**: The v1 API is vulnerable to PID reuse over long session lifetimes. 
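As an illustrative sketch (using the v1/v2 calls documented above; the recycled-PID scenario is hypothetical), the cookie is what turns a stale handle into an error instead of a wrong answer:\n\n```sql\n-- v1: the bare PID is the only identity. If this worker exits and the OS\n-- later recycles its PID for an unrelated worker in the same session,\n-- a stale handle silently addresses the wrong worker.\nSELECT pg_background_launch('SELECT 1') AS pid \gset\nSELECT * FROM pg_background_result(:pid) AS (r int);\n\n-- v2: identity is the (pid, cookie) pair. A recycled PID cannot reproduce\n-- the original cookie, so calls with a stale handle are rejected rather\n-- than matched to a different worker.\nSELECT * FROM pg_background_launch_v2('SELECT 1') AS h \gset\nSELECT * FROM pg_background_result_v2(:pid, :cookie) AS (r int);\n```\n\n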
Always use v2 API in production.\n\n---\n\n## Complete API Reference\n\n### V2 Functions\n\n| Function | Returns | Description | Use Case |\n|----------|---------|-------------|----------|\n| `pg_background_launch_v2(sql, queue_size, label)` | `pg_background_handle` | Launch worker with optional label (v1.9) | Standard async execution |\n| `pg_background_submit_v2(sql, queue_size, label)` | `pg_background_handle` | Fire-and-forget with optional label (v1.9) | Side-effect queries |\n| `pg_background_result_v2(pid, cookie)` | `SETOF record` | Retrieve results (**one-time consumption**) | Collect query output |\n| `pg_background_result_info_v2(pid, cookie)` | `pg_background_result_info` | Get result metadata (v1.9) | Check completion without consuming |\n| `pg_background_error_info_v2(pid, cookie)` | `pg_background_error` | Get structured error details (v1.9) | Error diagnostics |\n| `pg_background_detach_v2(pid, cookie)` | `void` | Stop tracking worker (worker continues) | Cleanup bookkeeping |\n| `pg_background_detach_all_v2()` | `int4` | Detach all workers in session (v1.9) | Session cleanup |\n| `pg_background_cancel_v2(pid, cookie)` | `void` | Request cancellation (best-effort) | Terminate unwanted work |\n| `pg_background_cancel_v2_grace(pid, cookie, grace_ms)` | `void` | Cancel with grace period (max 3600000ms) | Allow statement to finish |\n| `pg_background_cancel_all_v2()` | `int4` | Cancel all workers in session (v1.9) | Emergency cleanup |\n| `pg_background_wait_v2(pid, cookie)` | `void` | Block until worker completes | Synchronous barrier |\n| `pg_background_wait_v2_timeout(pid, cookie, timeout_ms)` | `bool` | Wait with timeout (returns `true` if done) | Bounded blocking |\n| `pg_background_list_v2()` | `SETOF record` | List known workers in current session | Monitoring, debugging |\n| `pg_background_stats_v2()` | `pg_background_stats` | Session statistics (v1.8+) | Monitoring, debugging |\n| `pg_background_progress(pct, msg)` | `void` | Report progress 
from worker (v1.8+) | Long-running task feedback |\n| `pg_background_get_progress_v2(pid, cookie)` | `pg_background_progress` | Get worker progress (v1.8+) | Monitor long-running tasks |\n\n**Parameters**:\n- `sql`: SQL command(s) to execute (multiple statements allowed)\n- `queue_size`: Shared memory queue size in bytes (default: 65536, min: 4096)\n- `pid`: Process ID from handle\n- `cookie`: Unique identifier from handle (prevents PID reuse)\n- `label`: Optional worker label for identification (v1.9, default: NULL)\n- `grace_ms`: Milliseconds to wait before forceful termination (capped at 1 hour)\n- `timeout_ms`: Milliseconds to wait for completion\n\n**Handle Type**:\n```sql\nCREATE TYPE pg_background_handle AS (\n  pid    int4,   -- Process ID\n  cookie int8    -- Unique identifier (prevents PID reuse)\n);\n```\n\n**Statistics Type** (v1.8+):\n```sql\nCREATE TYPE pg_background_stats AS (\n  workers_launched   int8,    -- Total workers launched this session\n  workers_completed  int8,    -- Workers completed successfully\n  workers_failed     int8,    -- Workers that failed with error\n  workers_active     int4,    -- Currently active workers\n  avg_execution_ms   float8,  -- Average execution time\n  max_workers        int4     -- Current max_workers setting\n);\n```\n\n**Progress Type** (v1.8+):\n```sql\nCREATE TYPE pg_background_progress AS (\n  progress_pct  int4,   -- Progress percentage (0-100)\n  progress_msg  text    -- Brief status message\n);\n```\n\n**Result Info Type** (v1.9+):\n```sql\nCREATE TYPE pg_background_result_info AS (\n  row_count    int8,    -- Number of rows returned/affected\n  command_tag  text,    -- Command tag (SELECT, INSERT, etc.)\n  completed    bool,    -- True if worker completed\n  has_error    bool     -- True if SQL execution error was captured\n);\n```\n\n\u003e **Note**: `has_error` indicates SQL execution errors captured through structured error reporting. 
Early worker failures (e.g., resource exhaustion, connection issues) before SQL execution begins do not set this flag. The combination of `completed=true`, `has_error=false`, and `error_info_v2() IS NULL` indicates likely success, but does not guarantee the worker completed without infrastructure-level failures.\n\n**Error Type** (v1.9+):\n```sql\nCREATE TYPE pg_background_error AS (\n  sqlstate  text,   -- SQLSTATE error code (e.g., '23505')\n  message   text,   -- Primary error message\n  detail    text,   -- Detailed error info (if any)\n  hint      text,   -- Hint for resolution (if any)\n  context   text    -- Error context/stack trace\n);\n```\n\n#### Structured Error Returns — SQLSTATE Semantics\n\n`pg_background_error_info_v2(pid, cookie)` returns the **real** five-character\n`SQLSTATE` emitted by the worker's failed statement, not a synthesized\n`08006 \"lost connection to worker process\"`. The worker's `PG_CATCH` handler\ncopies `ErrorData` from the caught `ereport(ERROR)`, stores the fields in DSM\n(with `error_sqlstate` written last as a publish flag) and calls\n`EmitErrorReport()` + `ReadyForQuery(DestRemote)` + `pq_flush()` so the launcher\nsees the actual `'E'` error frame over `shm_mq`.\n\nTypical codes returned end-to-end (v1.9+):\n\n| Trigger SQL                                      | Returned `sqlstate` | Path            |\n|--------------------------------------------------|---------------------|-----------------|\n| `SELECT 1/0`                                     | `22012`             | execute         |\n| `RAISE EXCEPTION 'custom error'`                 | `P0001`             | execute         |\n| `INSERT NULL` into `NOT NULL` column             | `23502`             | execute         |\n| `INSERT` violating `INITIALLY DEFERRED` FK       | `23503`             | commit          |\n| `pg_background_cancel_v2()` during `pg_sleep()`  | `57014`             | execute         |\n\n**Recommended pattern**: call `error_info_v2` from the same 
PL/pgSQL\n`EXCEPTION` block that observes the failure. Once the launcher's transaction\naborts, `cleanup_worker_info` removes the hash entry and the next transaction\nwill see `ERRCODE_UNDEFINED_OBJECT` (\"PID N is not attached to this session\").\n\n```sql\nDO $$\nDECLARE\n    h pg_background_handle;\n    s text;\nBEGIN\n    h := pg_background_launch_v2('SELECT 1/0');\n    PERFORM pg_background_wait_v2(h.pid, h.cookie);\n    SELECT sqlstate INTO s\n      FROM pg_background_error_info_v2(h.pid, h.cookie);\n    RAISE NOTICE 'worker sqlstate=%', s;   -- 22012\n    PERFORM pg_background_detach_v2(h.pid, h.cookie);\nEND$$;\n```\n\n\u003e **Important — do not call `result_v2()` on an error path.** `result_v2()`\n\u003e re-raises the worker's error in the launcher via `ereport(ERROR)`, which\n\u003e aborts the current transaction and triggers `cleanup_worker_info` before you\n\u003e can inspect `error_info_v2()`. For error diagnosis, the supported pattern is\n\u003e `launch_v2 -\u003e wait_v2 -\u003e error_info_v2 -\u003e detach_v2` (no `result_v2`).\n\n\u003e **`08006` is now reserved for infra-level failures only.** The launcher\n\u003e synthesizes `ERRCODE_CONNECTION_FAILURE \"lost connection to worker process\"`\n\u003e only when the worker died before it could propagate a real error (see\n\u003e [Known Limitations — Early worker failures](#9-early-worker-failures-before-pq_redirect_to_shm_mq)).\n\u003e Under normal operation, any SQL-level error inside the worker surfaces as\n\u003e the concrete SQLSTATE shown in the table above.\n\n### Deployment Order\n\nThe fix for end-to-end SQLSTATE propagation lives in the compiled\n`pg_background.so`. Whether a server restart is required depends on how the\nlibrary is loaded (see [Library Loading](#library-loading)):\n\n- **With `shared_preload_libraries`**: the postmaster dlopens the library\n  once at startup and every forked background worker inherits the cached\n  handle. 
After replacing the `.so` on disk you must restart PostgreSQL —\n  a plain `pg_reload_conf()` is not sufficient.\n- **Without SPL** (the default): each background worker dlopens the library\n  in its own process, so a fresh `pg_background_launch_v2(...)` call picks\n  up the new binary automatically. No server restart is needed; at most,\n  reconnect long-lived client sessions.\n\n1. Build and install: `make clean \u0026\u0026 make \u0026\u0026 sudo make install`.\n2. **Reload the library:**\n   - SPL setup → `pg_ctl restart` / systemd / platform equivalent.\n   - On-demand setup → no action required (optionally reconnect clients).\n3. Verify on staging that real SQLSTATEs propagate:\n   ```sql\n   DO $$\n   DECLARE h pg_background_handle; s text;\n   BEGIN\n       h := pg_background_launch_v2('SELECT 1/0');\n       PERFORM pg_background_wait_v2(h.pid, h.cookie);\n       SELECT sqlstate INTO s FROM pg_background_error_info_v2(h.pid, h.cookie);\n       ASSERT s = '22012', 'expected 22012, got ' || s;\n       PERFORM pg_background_detach_v2(h.pid, h.cookie);\n   END$$;\n   ```\n4. **Only after step 3 succeeds**, remove any PL/pgSQL workarounds that read\n   `error_info_v2` as a fallback after catching `08006`. Before the fix ships\n   they were the only way to get a usable SQLSTATE; after the fix they become\n   dead code, but keeping them in place during the rollout is harmless.\n\n### Rollback Order\n\nTo roll back to a pre-fix `.so` (for example if another extension in the same\nimage regresses):\n\n1. **First** restore the PL/pgSQL workarounds in user code (they expect\n   `SQLERRM` to degrade to `08006` and then read `error_info_v2` out of band).\n2. 
**Only after step 1**, install the old `.so` and restart PostgreSQL.\n\nDoing rollback in the reverse order (old `.so` first) causes user functions to\nsee raw `08006` errors without the fallback path, which can manifest as\n`WHEN others` branches swallowing what used to be diagnosable SQLSTATEs.\n\n### V1 Functions (Deprecated)\n\n| Function | Returns | Description | Limitation |\n|----------|---------|-------------|------------|\n| `pg_background_launch(sql, queue_size)` | `int4` (PID) | Launch worker, return PID | Vulnerable to PID reuse |\n| `pg_background_result(pid)` | `SETOF record` | Retrieve results | No cookie validation |\n| `pg_background_detach(pid)` | `void` | Stop tracking worker | Does NOT cancel execution |\n\n⚠️ **Migration Path**: Replace v1 calls with v2 equivalents in new code. See [Migration Guide](#migration-guide).\n\n---\n\n## Critical Semantic Distinctions\n\n### Cancel vs Detach\n\n**These operations are NOT interchangeable.** Confusion between them is a common source of production issues.\n\n| Operation | Stops Execution | Prevents Commit | Removes Tracking |\n|-----------|-----------------|-----------------|------------------|\n| **`cancel_v2()`** | ⚠️ Best-effort (immediate on Unix, limited on Windows) | ⚠️ Best-effort | ❌ No |\n| **`detach_v2()`** | ❌ No | ❌ No | ✅ Yes |\n\n**Rule of Thumb**:\n- Use **`cancel_v2()`** to **stop work** (terminate execution, prevent commit/notify)\n- Use **`detach_v2()`** to **stop tracking** (free bookkeeping memory while worker continues)\n\n#### Example: Detach Does NOT Prevent NOTIFY\n\n```sql\n-- Launch worker that sends notification\nSELECT * FROM pg_background_launch_v2(\n  $$SELECT pg_notify('alerts', 'system_event')$$\n) AS h \\gset\n\n-- Detach only removes launcher's tracking\nSELECT pg_background_detach_v2(:'h.pid', :'h.cookie');\n\n-- Worker STILL runs and sends notification!\n-- To actually prevent notification, use:\nSELECT pg_background_cancel_v2(:'h.pid', :'h.cookie');\n```\n\n#### When 
to Use Each\n\n**Use `cancel_v2()`**:\n- User-initiated cancellation\n- Timeout enforcement\n- Rollback of unwanted side effects\n- Immediate resource reclamation\n\n**Use `detach_v2()`**:\n- Long-running maintenance (don't need to track VACUUM for hours)\n- Fire-and-forget after successful submission\n- Session cleanup before disconnect\n- Reducing launcher session memory usage\n\n### V1 vs V2 API\n\n| Aspect | V1 API | V2 API |\n|--------|--------|--------|\n| **Handle** | Bare `int4` PID | `(pid int4, cookie int8)` composite |\n| **PID Reuse Protection** | ❌ None | ✅ Cookie validation |\n| **Cancel Operation** | ❌ Not available | ✅ `cancel_v2()` / `cancel_v2_grace()` |\n| **Wait Operation** | ❌ Not available (manual polling) | ✅ `wait_v2()` / `wait_v2_timeout()` |\n| **Worker Listing** | ❌ Not available | ✅ `list_v2()` |\n| **Submit (fire-forget)** | ⚠️ Use `detach()` after `launch()` | ✅ Dedicated `submit_v2()` |\n| **Production Use** | ⚠️ Not recommended | ✅ Recommended |\n\n#### Common V1 Pain Point: Column Definition Lists\n\nA frequent source of confusion with the v1 API is the requirement to specify column definitions when retrieving results:\n\n```sql\n-- V1 API: MUST specify column definition list\nSELECT * FROM pg_background_result(\n  pg_background_launch('SELECT pg_sleep(3); SELECT ''done''')\n) AS (result text);\n\n-- Without it, you get:\n-- ERROR: a column definition list is required for functions returning \"record\"\n\n-- And if your query returns multiple columns, you must match them exactly:\nSELECT * FROM pg_background_result(\n  pg_background_launch('SELECT ''done'', ''here''')\n) AS (col1 text, col2 text);\n-- Mismatched columns cause: ERROR: remote query result rowtype does not match\n```\n\n**V2 Solution**: If you just need to wait for completion without retrieving results, use `wait_v2()`:\n\n```sql\n-- V2 API: Wait for completion without dealing with result columns\nSELECT (h).pid, (h).cookie\nFROM pg_background_launch_v2('SELECT 
pg_sleep(3); SELECT ''done'', ''here''') AS h \\gset\n\n-- Simply wait - no column definition needed!\nSELECT pg_background_wait_v2(:pid, :cookie);\n\n-- Or with timeout (returns true if completed, false if timed out)\nSELECT pg_background_wait_v2_timeout(:pid, :cookie, 5000);\n\n-- Cleanup\nSELECT pg_background_detach_v2(:pid, :cookie);\n```\n\nThis is especially useful for:\n- Background maintenance tasks (VACUUM, ANALYZE)\n- Fire-and-forget operations where you only care about completion\n- Cases where the result structure may vary\n\n### PID Reuse Protection\n\n**The Problem**: Operating systems recycle process IDs. On busy systems, a PID can be reused within minutes.\n\n**V1 API Risk** (PID-only reference):\n```sql\n-- Day 1: Launch worker\nSELECT pg_background_launch('slow_query()') AS pid \\gset\n\n-- Day 2: Session still alive, but worker PID may be reused\n-- This could attach to a DIFFERENT worker with the SAME PID!\nSELECT pg_background_result(:pid);  -- ⚠️ DANGEROUS\n```\n\n**V2 API Fix** (PID + Cookie):\n```sql\n-- Launch with cookie\nSELECT * FROM pg_background_launch_v2('slow_query()') AS h \\gset\n\n-- Days later: cookie validation prevents mismatch\nSELECT pg_background_result_v2(:'h.pid', :'h.cookie');\n-- If PID reused, cookie won't match → safe error\n```\n\n**Implementation**: Each worker generates a random 64-bit cookie at launch. All operations validate `(pid, cookie)` tuple matches.\n\n### NOTIFY and Autonomous Commits\n\nWorkers execute in **separate transactions** from the launcher. 
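A quick way to see this separation is to ask the worker for its own backend PID and transaction ID (an illustrative check, not part of the extension API; the column names in the definition list are arbitrary):\n\n```sql\n-- Worker reports its own PID and its own transaction ID\nSELECT * FROM pg_background_launch_v2(\n  'SELECT pg_backend_pid(), txid_current()'\n) AS h \\gset\n\nSELECT pg_background_wait_v2(:'h.pid', :'h.cookie');\n\nSELECT * FROM pg_background_result_v2(:'h.pid', :'h.cookie')\n  AS (worker_pid int4, worker_txid bigint);\n-- worker_pid differs from the launcher's pg_backend_pid(),\n-- and worker_txid comes from a transaction the launcher never started\n\nSELECT pg_background_detach_v2(:'h.pid', :'h.cookie');\n```\n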
This has critical implications:\n\n#### Autonomous Transaction Behavior\n\n```sql\nBEGIN;\n  -- Launcher transaction starts\n\n  SELECT * FROM pg_background_launch_v2(\n    'INSERT INTO audit_log VALUES (now(), ''user_action'')'\n  ) AS h \\gset\n  \n  -- Main work\n  UPDATE users SET status = 'active' WHERE id = 123;\n  \n  -- If we ROLLBACK, the audit_log INSERT still commits!\nROLLBACK;\n\n-- audit_log entry exists despite rollback\n```\n\n**Implications**:\n- ✅ **Good for**: Audit logging, NOTIFY, stats collection\n- ⚠️ **Bad for**: Interdependent data modifications requiring ACID\n\n#### NOTIFY Delivery with Detach\n\n```sql\n-- Worker sends notification\nSELECT * FROM pg_background_launch_v2(\n  $$SELECT pg_notify('channel', 'message')$$\n) AS h \\gset\n\n-- Detach removes tracking but does NOT cancel\nSELECT pg_background_detach_v2(:'h.pid', :'h.cookie');\n\n-- Notification WILL be delivered (worker commits independently)\n```\n\nTo **prevent** notification delivery:\n```sql\n-- Cancel before worker commits\nSELECT pg_background_cancel_v2(:'h.pid', :'h.cookie');\n```\n\n---\n\n## Security Model\n\n### Privilege Architecture\n\n`pg_background` uses a role-based security model with zero PUBLIC access by default.\n\n#### Default Setup (Automatic)\n\n```sql\n-- Extension creates this role automatically:\nCREATE ROLE pgbackground_role NOLOGIN INHERIT;\n\n-- All pg_background functions granted to this role\n-- PUBLIC has NO access by default\n```\n\n#### Grant Access to Users\n\n```sql\n-- Method 1: Direct role grant (recommended)\nGRANT pgbackground_role TO app_user;\n\n-- Method 2: Helper function (explicit EXECUTE grants)\nSELECT grant_pg_background_privileges('app_user', true);\n```\n\n#### Revoke Access\n\n```sql\n-- Method 1: Revoke role membership\nREVOKE pgbackground_role FROM app_user;\n\n-- Method 2: Helper function\nSELECT revoke_pg_background_privileges('app_user', true);\n```\n\n### Security Considerations\n\n#### 1. 
SQL Injection Prevention\n\n❌ **Unsafe** (vulnerable to SQL injection):\n```sql\nCREATE FUNCTION unsafe_launch(user_input text) RETURNS void AS $$\nBEGIN\n  -- NEVER concatenate untrusted input!\n  PERFORM pg_background_launch_v2(\n    'SELECT * FROM users WHERE name = ''' || user_input || ''''\n  );\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n✅ **Safe** (parameterized with `format()`):\n```sql\nCREATE FUNCTION safe_launch(user_input text) RETURNS void AS $$\nBEGIN\n  -- Use %L for literal quoting\n  PERFORM pg_background_launch_v2(\n    format('SELECT * FROM users WHERE name = %L', user_input)\n  );\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n#### 2. Resource Exhaustion Protection\n\n```sql\n-- Application-level quota enforcement\nCREATE OR REPLACE FUNCTION launch_with_limit(sql text)\nRETURNS pg_background_handle AS $$\nDECLARE\n  active_count int;\n  h pg_background_handle;\nBEGIN\n  -- Count active workers for current user\n  SELECT count(*) INTO active_count\n  FROM pg_background_list_v2() AS (\n    pid int4, cookie int8, launched_at timestamptz, user_id oid,\n    queue_size int4, state text, sql_preview text, last_error text, consumed bool\n  )\n  WHERE user_id = current_user::regrole::oid\n    AND state IN ('running');\n  \n  IF active_count \u003e= 5 THEN\n    RAISE EXCEPTION 'User worker limit exceeded (max 5 concurrent)';\n  END IF;\n  \n  SELECT * INTO h FROM pg_background_launch_v2(sql);\n  RETURN h;\nEND;\n$$ LANGUAGE plpgsql SECURITY DEFINER;\n```\n\n#### 3. Privilege Isolation\n\n- ✅ Workers inherit **current_user** from launcher (not superuser escalation)\n- ✅ `SECURITY DEFINER` helpers use pinned `search_path = pg_catalog`\n- ✅ No ambient PUBLIC grants\n- ⚠️ Workers can access all databases launcher can access\n\n#### 4. 
Information Disclosure Risks\n\n```sql\n-- list_v2() exposes SQL previews (first 120 chars) and error messages\n-- For sensitive deployments, create restricted view:\n\nCREATE VIEW safe_worker_list AS\nSELECT pid, cookie, state, consumed, launched_at\nFROM pg_background_list_v2() AS (\n  pid int4, cookie int8, launched_at timestamptz, user_id oid,\n  queue_size int4, state text, sql_preview text, last_error text, consumed bool\n)\nWHERE user_id = current_user::regrole::oid;\n-- Omit sql_preview and last_error\n\nGRANT SELECT ON safe_worker_list TO app_users;\n```\n\n### Security Best Practices\n\n1. **Never grant `pgbackground_role` to PUBLIC**\n2. **Use v2 API exclusively** (cookie protection)\n3. **Set `statement_timeout`** to bound execution time\n4. **Implement application-level quotas** (max workers per user/database)\n5. **Sanitize all dynamic SQL** with `format()` or `quote_literal()`\n6. **Monitor `list_v2()`** for suspicious activity\n7. **Audit `pg_stat_activity`** for background worker usage\n8. **Test disaster recovery** with active workers\n\n---\n\n## Use Cases with Examples\n\n### 1. Background Maintenance Operations\n\n**Problem**: VACUUM blocks client connections and consumes resources.\n\n**Solution**: Run maintenance asynchronously.\n\n```sql\n-- Launch background VACUUM\nSELECT * FROM pg_background_launch_v2(\n  'VACUUM (VERBOSE, ANALYZE) large_table'\n) AS h \\gset\n\n-- Check progress periodically\nSELECT state, sql_preview\nFROM pg_background_list_v2() AS (\n  pid int4, cookie int8, launched_at timestamptz, user_id oid,\n  queue_size int4, state text, sql_preview text, last_error text, consumed bool\n)\nWHERE pid = :'h.pid' AND cookie = :'h.cookie';\n\n-- Wait for completion (optional)\nSELECT pg_background_wait_v2(:'h.pid', :'h.cookie');\n\n-- Cleanup tracking\nSELECT pg_background_detach_v2(:'h.pid', :'h.cookie');\n```\n\n### 2. 
Autonomous Audit Logging\n\n**Problem**: Audit logs must persist even if main transaction rolls back.\n\n**Solution**: Use background worker for independent commit.\n\n\u003e **⚠️ Critical Warning**: If `max_worker_processes` is exhausted, `pg_background_launch_v2()` will throw `INSUFFICIENT_RESOURCES`. For audit logging, this means:\n\u003e - The audit message will be **lost**\n\u003e - The calling transaction may **fail unexpectedly**\n\u003e - Failures occur **unpredictably** under load\n\u003e\n\u003e See the robust implementation below and [Known Limitation #4](#4-worker-exhaustion-insufficient_resources) for details.\n\n**Basic Example** (not fault-tolerant):\n\n```sql\nCREATE FUNCTION log_audit_simple(event_type text, details jsonb)\nRETURNS void AS $$\nDECLARE\n  h pg_background_handle;\nBEGIN\n  -- Launch audit insert (commits independently)\n  SELECT * INTO h FROM pg_background_submit_v2(\n    format(\n      'INSERT INTO audit_log (ts, event_type, details) VALUES (now(), %L, %L)',\n      event_type,\n      details::text\n    )\n  );\n\n  -- Detach immediately (fire-and-forget)\n  PERFORM pg_background_detach_v2(h.pid, h.cookie);\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n**Robust Example** (handles worker exhaustion):\n\n```sql\nCREATE FUNCTION log_audit(event_type text, details jsonb)\nRETURNS void AS $$\nDECLARE\n  h pg_background_handle;\n  retries int := 3;\n  backoff_ms int := 100;\nBEGIN\n  FOR i IN 1..retries LOOP\n    BEGIN\n      SELECT * INTO h FROM pg_background_submit_v2(\n        format(\n          'INSERT INTO audit_log (ts, event_type, details) VALUES (now(), %L, %L)',\n          event_type,\n          details::text\n        )\n      );\n      PERFORM pg_background_detach_v2(h.pid, h.cookie);\n      RETURN;  -- Success\n    EXCEPTION\n      WHEN insufficient_resources THEN\n        IF i = retries THEN\n          -- Final fallback: log synchronously (blocks but doesn't lose data)\n          INSERT INTO audit_log (ts, event_type, details)\n          
VALUES (now(), event_type, details);\n          RAISE WARNING 'pg_background exhausted, audit logged synchronously';\n          RETURN;\n        END IF;\n        -- Exponential backoff before retry\n        PERFORM pg_sleep(backoff_ms / 1000.0);\n        backoff_ms := backoff_ms * 2;\n    END;\n  END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n**Usage in transaction**:\n\n```sql\nBEGIN;\n  UPDATE accounts SET balance = balance - 100 WHERE id = 123;\n\n  -- Audit log commits even if UPDATE rolls back\n  -- (SELECT, not PERFORM: this is plain SQL, not PL/pgSQL)\n  SELECT log_audit('withdrawal', '{\"account\": 123, \"amount\": 100}');\n\n  -- Simulate error\n  ROLLBACK;\n\n-- Audit log entry exists!\nSELECT * FROM audit_log ORDER BY ts DESC LIMIT 1;\n```\n\n### 3. Asynchronous Notification Delivery\n\n**Problem**: `pg_notify()` in main transaction delays commit.\n\n**Solution**: Offload notifications to background worker.\n\n```sql\nCREATE FUNCTION notify_async(channel text, payload text)\nRETURNS void AS $$\nDECLARE\n  h pg_background_handle;\nBEGIN\n  SELECT * INTO h FROM pg_background_submit_v2(\n    format('SELECT pg_notify(%L, %L)', channel, payload)\n  );\n  \n  PERFORM pg_background_detach_v2(h.pid, h.cookie);\nEND;\n$$ LANGUAGE plpgsql;\n\n-- Usage\nSELECT notify_async('order_updates', '{\"order_id\": 456, \"status\": \"shipped\"}');\n```\n\n### 4. 
Long-Running ETL Pipeline\n\n**Problem**: ETL blocks client connection for hours.\n\n**Solution**: Launch in background, poll for completion.\n\n```sql\n-- Launch ETL\nSELECT * FROM pg_background_launch_v2($$\n  INSERT INTO fact_sales\n  SELECT * FROM staging_sales\n  WHERE processed = false;\n  \n  UPDATE staging_sales SET processed = true;\n$$) AS h \\gset\n\n-- Store handle for later retrieval\nINSERT INTO job_tracker (job_id, pid, cookie, started_at)\nVALUES ('etl-001', :'h.pid', :'h.cookie', now());\n\n-- Later: check status\nSELECT\n  j.job_id,\n  w.state,\n  w.launched_at,\n  (now() - w.launched_at) AS duration\nFROM job_tracker j\nCROSS JOIN LATERAL (\n  SELECT *\n  FROM pg_background_list_v2() AS (\n    pid int4, cookie int8, launched_at timestamptz, user_id oid,\n    queue_size int4, state text, sql_preview text, last_error text, consumed bool\n  )\n  WHERE pid = j.pid AND cookie = j.cookie\n) w\nWHERE j.job_id = 'etl-001';\n```\n\n### 5. Parallel Query Simulation\n\n**Problem**: PostgreSQL doesn't parallelize queries across tables.\n\n**Solution**: Launch concurrent workers for each table.\n\n```sql\nDO $$\nDECLARE\n  h1 pg_background_handle;\n  h2 pg_background_handle;\n  h3 pg_background_handle;\n  total_rows bigint;\nBEGIN\n  -- Launch parallel workers\n  SELECT * INTO h1 FROM pg_background_launch_v2('SELECT count(*) FROM sales');\n  SELECT * INTO h2 FROM pg_background_launch_v2('SELECT count(*) FROM orders');\n  SELECT * INTO h3 FROM pg_background_launch_v2('SELECT count(*) FROM customers');\n  \n  -- Wait for all to complete\n  PERFORM pg_background_wait_v2(h1.pid, h1.cookie);\n  PERFORM pg_background_wait_v2(h2.pid, h2.cookie);\n  PERFORM pg_background_wait_v2(h3.pid, h3.cookie);\n  \n  -- Aggregate results\n  SELECT sum(cnt) INTO total_rows FROM (\n    SELECT * FROM pg_background_result_v2(h1.pid, h1.cookie) AS (cnt bigint)\n    UNION ALL\n    SELECT * FROM pg_background_result_v2(h2.pid, h2.cookie) AS (cnt bigint)\n    UNION ALL\n    SELECT * 
FROM pg_background_result_v2(h3.pid, h3.cookie) AS (cnt bigint)\n  ) t;\n  \n  RAISE NOTICE 'Total rows: %', total_rows;\nEND;\n$$;\n```\n\n### 6. Timeout Enforcement\n\n**Problem**: Need to cancel queries that exceed time budget.\n\n**Solution**: Use `wait_v2_timeout()` with `cancel_v2_grace()`.\n\n```sql\nCREATE FUNCTION run_with_timeout(sql text, timeout_sec int)\nRETURNS text AS $$\nDECLARE\n  h pg_background_handle;\n  done bool;\n  result_text text;\nBEGIN\n  -- Launch worker\n  SELECT * INTO h FROM pg_background_launch_v2(sql);\n  \n  -- Wait with timeout\n  done := pg_background_wait_v2_timeout(h.pid, h.cookie, timeout_sec * 1000);\n  \n  IF NOT done THEN\n    -- Timeout exceeded\n    RAISE WARNING 'Query timed out after % seconds, cancelling...', timeout_sec;\n    PERFORM pg_background_cancel_v2_grace(h.pid, h.cookie, 1000);\n    PERFORM pg_background_detach_v2(h.pid, h.cookie);\n    RETURN 'TIMEOUT';\n  END IF;\n  \n  -- Retrieve result\n  SELECT * INTO result_text FROM pg_background_result_v2(h.pid, h.cookie) AS (res text);\n  RETURN result_text;\nEND;\n$$ LANGUAGE plpgsql;\n\n-- Usage\nSELECT run_with_timeout('SELECT pg_sleep(10)', 5);  -- Returns 'TIMEOUT'\n```\n\n---\n\n## Operational Guidance\n\n### Resource Management\n\n#### max_worker_processes Limit\n\nBackground workers count against PostgreSQL's global `max_worker_processes` limit.\n\n**Check Current Usage**:\n```sql\nSELECT count(*) AS bgworker_count\nFROM pg_stat_activity\nWHERE backend_type LIKE '%background%';\n```\n\n**Recommended Configuration**:\n```sql\n-- Formula: autovacuum_workers + max_parallel_workers + pg_background_estimate + buffer\nALTER SYSTEM SET max_worker_processes = 64;  -- Adjust per workload\n-- max_worker_processes can only change at server start:\n-- restart PostgreSQL (pg_reload_conf() does NOT apply this setting)\n```\n\n**Operational Limits**:\n- Default `max_worker_processes`: 8 (often insufficient)\n- Recommended minimum for pg_background: 16-32\n- Enterprise workloads: 64-128\n- Each worker: ~10MB memory overhead\n\n#### Dynamic Shared Memory (DSM) Usage\n\nEach 
worker allocates one DSM segment for IPC.\n\n**Monitor DSM**:\n```sql\nSELECT\n  name,\n  size,\n  allocated_size\nFROM pg_shmem_allocations\nWHERE name LIKE '%pg_background%'\nORDER BY size DESC;\n```\n\n**DSM Size**:\n- Default queue_size: 65536 bytes (~64KB)\n- Minimum queue_size: 4096 bytes (enforced by `shm_mq`)\n- Large result sets: increase queue_size parameter\n\n**Example**:\n```sql\n-- Small results (default)\nSELECT pg_background_launch_v2('SELECT id FROM small_table', 65536);\n\n-- Large results (1MB queue)\nSELECT pg_background_launch_v2('SELECT * FROM huge_table', 1048576);\n```\n\n#### Worker Lifecycle and Cleanup\n\n**Automatic Cleanup**:\n- Worker exits → DSM detached → hash entry removed\n- Launcher session ends → all tracked workers detached\n\n**Manual Cleanup**:\n```sql\n-- Detach all completed workers\nDO $$\nDECLARE r record;\nBEGIN\n  FOR r IN\n    SELECT *\n    FROM pg_background_list_v2() AS (\n      pid int4, cookie int8, launched_at timestamptz, user_id oid,\n      queue_size int4, state text, sql_preview text, last_error text, consumed bool\n    )\n    WHERE state IN ('stopped', 'canceled', 'error')\n  LOOP\n    PERFORM pg_background_detach_v2(r.pid, r.cookie);\n  END LOOP;\nEND;\n$$;\n```\n\n### Performance Tuning\n\n#### 1. Queue Size Optimization\n\n**Rule of Thumb**:\n- Small queries (\u003c 1000 rows): 65536 (64KB, default)\n- Medium queries (\u003c 10000 rows): 262144 (256KB)\n- Large queries (\u003e= 10000 rows): 1048576+ (1MB+)\n\n**Trade-offs**:\n- Larger queue → less blocking on result production\n- Larger queue → more shared memory consumption\n- Too small → worker blocks waiting for launcher to consume\n\n**Measure Contention**:\n```sql\n-- Check if worker is blocking on queue send\nSELECT\n  pid,\n  state,\n  wait_event_type,\n  wait_event\nFROM pg_stat_activity\nWHERE backend_type LIKE '%background%'\n  AND wait_event = 'SHM_MQ_SEND';\n```\n\n#### 2. 
Statement Timeout\n\nWorkers inherit `statement_timeout` from launcher session.\n\n**Set Per-Worker Timeout**:\n```sql\n-- Temporarily increase timeout\nSET statement_timeout = '30min';\nSELECT pg_background_launch_v2('slow_aggregation_query()');\nRESET statement_timeout;\n```\n\n**Set Database-Wide Default**:\n```sql\nALTER DATABASE production SET statement_timeout = '10min';\n```\n\n#### 3. Work Memory\n\n**Important**: Workers do NOT inherit `work_mem` from launcher.\n\n**Workaround**:\n```sql\n-- Include SET in worker SQL\nSELECT pg_background_launch_v2($$\n  SET work_mem = '256MB';\n  SELECT * FROM large_table ORDER BY col;\n$$);\n```\n\n#### 4. Parallel Workers\n\nBackground workers are separate from `max_parallel_workers`.\n\n**Configuration**:\n```sql\n-- Both settings are independent\nALTER SYSTEM SET max_worker_processes = 64;     -- Total pool\nALTER SYSTEM SET max_parallel_workers = 16;     -- Parallel query subset\n```\n\n### Monitoring\n\n#### Real-Time Worker Status\n\n```sql\nCREATE VIEW pg_background_status AS\nSELECT\n  w.pid,\n  w.cookie,\n  w.state,\n  left(w.sql_preview, 60) AS sql_snippet,\n  w.launched_at,\n  (now() - w.launched_at) AS age,\n  w.consumed,\n  a.state AS pg_state,\n  a.wait_event_type,\n  a.wait_event,\n  a.query AS current_query\nFROM pg_background_list_v2() AS (\n  pid int4, cookie int8, launched_at timestamptz, user_id oid,\n  queue_size int4, state text, sql_preview text, last_error text, consumed bool\n) w\nLEFT JOIN pg_stat_activity a USING (pid)\nORDER BY w.launched_at DESC;\n\n-- Query it\nSELECT * FROM pg_background_status;\n```\n\n#### Alerting on Long-Running Workers\n\n```sql\n-- Workers running \u003e 1 hour\nSELECT\n  pid,\n  cookie,\n  sql_preview,\n  (now() - launched_at) AS duration\nFROM pg_background_list_v2() AS (\n  pid int4, cookie int8, launched_at timestamptz, user_id oid,\n  queue_size int4, state text, sql_preview text, last_error text, consumed bool\n)\nWHERE state = 'running'\n  AND (now() - 
launched_at) \u003e interval '1 hour';\n```\n\n#### Prometheus-Style Metrics\n\n```sql\n-- Export metrics for monitoring systems\nSELECT\n  'pg_background_active_workers' AS metric,\n  count(*) AS value,\n  state AS labels\nFROM pg_background_list_v2() AS (\n  pid int4, cookie int8, launched_at timestamptz, user_id oid,\n  queue_size int4, state text, sql_preview text, last_error text, consumed bool\n)\nGROUP BY state;\n```\n\n---\n\n## Troubleshooting\n\n### Common Issues\n\n#### Issue 1: \"could not register background process\"\n\n**Symptom**:\n```\nERROR: could not register background process\nHINT: You may need to increase max_worker_processes.\n```\n\n**Cause**: `max_worker_processes` limit reached.\n\n**Solution**:\n```sql\n-- Check current limit and usage\nSHOW max_worker_processes;\nSELECT count(*) FROM pg_stat_activity WHERE backend_type LIKE '%worker%';\n\n-- Increase limit (a server restart is always required for this parameter)\nALTER SYSTEM SET max_worker_processes = 32;\n-- Then restart PostgreSQL; pg_reload_conf() cannot apply max_worker_processes\n```\n\n#### Issue 2: \"cookie mismatch for PID XXXXX\"\n\n**Symptom**:\n```\nERROR: cookie mismatch for PID 12345: expected 1234567890123456, got 9876543210987654\n```\n\n**Cause**: PID reused after worker exit, or stale handle.\n\n**Solution**:\n- Always use fresh handles from `launch_v2()`\n- Never hardcode PID/cookie values\n- Don't cache handles across long time periods\n\n```sql\n-- ❌ Bad: Reusing old handle\n-- h was from hours ago, worker exited, PID reused\n\n-- ✅ Good: Fresh handle per operation\nSELECT * FROM pg_background_launch_v2('...') AS h \\gset\nSELECT pg_background_wait_v2(:'h.pid', :'h.cookie');\n```\n\n#### Issue 3: Worker Hangs Indefinitely\n\n**Symptom**: Worker shows `running` state for hours without progress.\n\n**Cause**: Lock contention, infinite loop, or missing `CHECK_FOR_INTERRUPTS`.\n\n**Diagnosis**:\n```sql\n-- Check what worker is waiting on\nSELECT\n  w.pid,\n  w.sql_preview,\n  a.wait_event_type,\n  a.wait_event,\n  
a.state,\n  a.query\nFROM pg_background_list_v2() AS (\n  pid int4, cookie int8, launched_at timestamptz, user_id oid,\n  queue_size int4, state text, sql_preview text, last_error text, consumed bool\n) w\nJOIN pg_stat_activity a USING (pid)\nWHERE w.state = 'running';\n\n-- Check locks\nSELECT\n  l.pid,\n  l.locktype,\n  l.relation::regclass,\n  l.mode,\n  l.granted\nFROM pg_locks l\nWHERE l.pid = \u003cworker_pid\u003e;\n```\n\n**Solution**:\n```sql\n-- Cancel with grace period\nSELECT pg_background_cancel_v2_grace(\u003cpid\u003e, \u003ccookie\u003e, 5000);\n\n-- Force cancel if grace period expires\nSELECT pg_background_cancel_v2(\u003cpid\u003e, \u003ccookie\u003e);\n```\n\n#### Issue 4: \"results already consumed\"\n\n**Symptom**:\n```\nERROR: results already consumed for worker PID 12345\n```\n\n**Cause**: Attempting to call `result_v2()` twice on same handle.\n\n**Solution**: Results are **one-time consumption**. Use CTE to reuse:\n```sql\n-- ✅ Correct: Use CTE to consume once\nWITH worker_results AS (\n  SELECT * FROM pg_background_result_v2(\u003cpid\u003e, \u003ccookie\u003e) AS (col text)\n)\nSELECT * FROM worker_results\nUNION ALL\nSELECT * FROM worker_results;\n```\n\n#### Issue 5: DSM Allocation Failure\n\n**Symptom**:\n```\nERROR: could not allocate dynamic shared memory\n```\n\n**Cause**: Insufficient shared memory or too many DSM segments.\n\n**Solution**:\n```sql\n-- Check DSM usage\nSELECT count(*), sum(size) AS total_bytes\nFROM pg_shmem_allocations\nWHERE name LIKE '%dsm%';\n\n-- Increase shared memory (postgresql.conf)\n-- dynamic_shared_memory_type = posix  (or sysv, mmap)\n-- Restart PostgreSQL\n```\n\n#### Issue 6: Custom Schema Installation Errors (Fixed in v1.7+)\n\n**Symptom** (in versions before fix):\n```\nCREATE EXTENSION pg_background WITH SCHEMA contrib;\nERROR: function public.grant_pg_background_privileges(unknown, boolean) does not exist\n```\n\n**Cause**: Hardcoded `public.` schema references in SQL scripts when extension is 
relocatable.\n\n**Status**: **Fixed in v1.7+** for fresh installations. The extension now properly supports custom schema installation.\n\n**Solution for fresh install**:\n```sql\n-- Install directly in custom schema (v1.7+)\nCREATE SCHEMA myschema;\nCREATE EXTENSION pg_background WITH SCHEMA myschema;\n\n-- Verify\nSELECT * FROM myschema.pg_background_launch_v2('SELECT 1') AS h;\n```\n\n**⚠️ Limitation for upgrades**: If you have v1.4, v1.5, or v1.6 already installed, upgrading to v1.7/v1.8 will NOT move the extension to a custom schema. The upgrade scripts for older versions contain hardcoded `public.` references because those versions only supported the public schema.\n\n**To relocate an existing installation**:\n```sql\n-- 1. Drop existing extension\nDROP EXTENSION pg_background;\n\n-- 2. Reinstall in desired schema\nCREATE EXTENSION pg_background WITH SCHEMA myschema;\n```\n\n#### Issue 7: Column Definition List Required (V1 API)\n\n**Symptom**:\n```\nSELECT pg_background_result(pg_background_launch('SELECT ''done'''));\nERROR: function returning record called in context that cannot accept type record\nHINT: Try calling the function in the FROM clause using a column definition list.\n\n-- Or when columns don't match:\nSELECT * FROM pg_background_result(...) 
AS (result text);\nERROR: remote query result rowtype does not match the specified FROM clause rowtype\n```\n\n**Cause**: The v1 `pg_background_result()` returns `SETOF record`, which requires PostgreSQL to know the column types at parse time.\n\n**Solution 1** - Match column definitions exactly:\n```sql\n-- Single column result\nSELECT * FROM pg_background_result(\n  pg_background_launch('SELECT ''done''')\n) AS (result text);\n\n-- Multiple columns - must match exactly\nSELECT * FROM pg_background_result(\n  pg_background_launch('SELECT ''done'', ''here''')\n) AS (col1 text, col2 text);\n```\n\n**Solution 2** - Use V2 API `wait_v2()` if you don't need results:\n```sql\n-- Launch the worker\nSELECT (h).pid, (h).cookie\nFROM pg_background_launch_v2('SELECT pg_sleep(3); SELECT ''done'', ''here''') AS h \\gset\n\n-- Wait for completion - no column definition needed!\nSELECT pg_background_wait_v2(:pid, :cookie);\n\n-- Cleanup\nSELECT pg_background_detach_v2(:pid, :cookie);\n```\n\n**Recommendation**: Migrate to the V2 API which provides `wait_v2()` for cases where you only need to wait for completion without retrieving results.\n\n### Platform-Specific Issues\n\n#### Windows: Cancel Limitations\n\n**Problem**: On Windows, `cancel_v2()` cannot interrupt actively running statements.\n\n**Explanation**: Windows lacks signal-based interrupts. 
Cancel only sets interrupt flags checked between statements.\n\n**Workaround**:\n```sql\n-- Always set statement_timeout on Windows\nALTER DATABASE mydb SET statement_timeout = '5min';\n\n-- Or per-worker:\nSELECT pg_background_launch_v2($$\n  SET statement_timeout = '5min';\n  SELECT slow_function();\n$$);\n```\n\n**Affected Operations**:\n- Long-running CPU-bound queries\n- Infinite loops in PL/pgSQL\n- Queries with no yielding points\n\n**See**: `windows/README.md` for details.\n\n### Debug Logging\n\n```sql\n-- Enable verbose logging\nSET client_min_messages = DEBUG1;\nSET log_min_messages = DEBUG1;\n\n-- Launch worker (check logs for DSM info)\nSELECT * FROM pg_background_launch_v2('SELECT 1') AS h \\gset;\n\n-- Check PostgreSQL logs for:\n-- - \"registered dynamic background worker\"\n-- - \"DSM segment attached\"\n-- - Worker execution details\n```\n\n---\n\n## Architecture \u0026 Design\n\n### High-Level Architecture\n\n```\n┌──────────────────┐\n│  Client Session  │\n│  (Launcher)      │\n└────────┬─────────┘\n         │ 1. pg_background_launch_v2(sql)\n         ▼\n┌──────────────────────────────────┐\n│  Extension C Code                │\n│  - Allocate DSM segment          │\n│  - RegisterDynamicBgWorker()     │\n│  - Create shm_mq                 │\n│  - Wait for worker attach        │\n└────────┬─────────────────────────┘\n         │ 2. Postmaster fork()\n         ▼\n┌──────────────────────────────────┐\n│  Background Worker Process       │\n│  - Attach database               │\n│  - Restore session GUCs          │\n│  - Execute SQL via SPI           │\n│  - Send results via shm_mq       │\n│  - Exit (DSM cleanup)            │\n└──────────────────────────────────┘\n         │ 3. Results via shared memory\n         ▼\n┌──────────────────┐\n│  Launcher        │\n│  pg_background_  │\n│  result_v2()     │\n└──────────────────┘\n```\n\n### Key Components\n\n#### 1. 
Dynamic Shared Memory (DSM)\n\n**Purpose**: IPC mechanism for SQL text and result transport.\n\n**Structure**:\n- **Key 0 (Fixed Data)**: Session metadata (user, database, cookie)\n- **Key 1 (SQL)**: SQL command string (null-terminated)\n- **Key 2 (GUC)**: Session GUC settings (serialized)\n- **Key 3 (Queue)**: Bidirectional message queue (shm_mq)\n\n**Lifecycle**:\n- Created by launcher in `launch_v2()`\n- Attached by worker on startup\n- Detached by worker on exit (automatic cleanup)\n- Launcher detaches on `detach_v2()` or session end\n\n#### 2. Shared Memory Queue (shm_mq)\n\n**Purpose**: Bidirectional streaming transport for results.\n\n**Flow**:\n1. Worker executes query via SPI\n2. Each result row serialized to shm_mq\n3. Launcher reads from shm_mq in `result_v2()`\n4. Queue blocks if full (backpressure)\n\n**Tuning**:\n- Queue size set at launch (default 64KB)\n- Larger queues reduce blocking\n- Monitor with `pg_stat_activity.wait_event = 'MessageQueueSend'`\n\n#### 3. Background Worker API\n\n**Registration** (since PostgreSQL 10 the entry point is named via `bgw_library_name`/`bgw_function_name`, not a function pointer):\n```c\nBackgroundWorker worker;\nworker.bgw_flags = BGWORKER_SHMEM_ACCESS | BGWORKER_BACKEND_DATABASE_CONNECTION;\nworker.bgw_start_time = BgWorkerStart_ConsistentState;\nworker.bgw_restart_time = BGW_NEVER_RESTART;\nsnprintf(worker.bgw_library_name, BGW_MAXLEN, \"pg_background\");\nsnprintf(worker.bgw_function_name, BGW_MAXLEN, \"pg_background_worker_main\");\nRegisterDynamicBackgroundWorker(\u0026worker, \u0026handle);\n```\n\n**Lifecycle Hooks**:\n- `bgw_function_name`: Entry point (`pg_background_worker_main`)\n- `bgw_notify_pid`: Launcher PID (for notifications)\n- `bgw_main_arg`: DSM handle (Datum)\n\n#### 4. Server Programming Interface (SPI)\n\n**Execution Pipeline**:\n```c\nSPI_connect();\nSPI_execute(sql, false, 0);  /* read_only=false, limit=0 (no row limit) */\nfor (uint64 i = 0; i \u003c SPI_processed; i++)\n{\n    /* serialize SPI_tuptable-\u003evals[i] and send it via shm_mq */\n}\nSPI_finish();\n```\n\n**Result Serialization**:\n- `RowDescription`: Column metadata (names, types, formats)\n- `DataRow`: Binary-encoded tuple data\n- `CommandComplete`: Result tag (e.g., \"SELECT 42\")\n\n#### 5. 
Worker Hash Table\n\n**Purpose**: Per-session tracking of launched workers.\n\n**Structure**:\n```c\ntypedef struct pg_background_worker_info {\n    int pid;\n    uint64 cookie;\n    dsm_segment *seg;\n    BackgroundWorkerHandle *handle;\n    shm_mq_handle *responseq;\n    bool consumed;  // Result retrieval guard\n} pg_background_worker_info;\n```\n\n**Cleanup**:\n- On worker exit: `cleanup_worker_info()` callback\n- On launcher session end: detach all tracked workers\n- On explicit `detach_v2()`: remove hash entry\n\n### Concurrency and Race Conditions\n\n#### NOTIFY Race (Solved in v1.5+)\n\n**Problem**: Launcher returned before worker attached shm_mq → lost NOTIFYs.\n\n**Solution**: `shm_mq_wait_for_attach()` blocks launcher until worker ready.\n\n```c\n// In pg_background_launch_v2:\nshm_mq_wait_for_attach(mqh);  // BLOCK until worker attaches\nreturn handle;  // Safe to return now\n```\n\n#### PID Reuse (Solved in v2 API)\n\n**Problem**: Worker exits, PID reused, launcher attaches to wrong worker.\n\n**Solution**: 64-bit random cookie validated on all operations.\n\n```c\n// Generate cookie at launch\nfixed_data-\u003ecookie = (uint64)random() \u003c\u003c 32 | random();\n\n// Validate on every operation\nif (worker_info-\u003ecookie != provided_cookie)\n    ereport(ERROR, \"cookie mismatch\");\n```\n\n#### DSM Cleanup Races (Hardened in v1.6)\n\n**Problem**: Launcher `pfree(handle)` before worker attached → crash.\n\n**Solution**: Never explicitly free handle; let PostgreSQL manage lifetime.\n\n```c\n// ❌ OLD (buggy): pfree(handle);\n// ✅ NEW: Let handle live until dsm_detach\n```\n\n---\n\n## Known Limitations\n\n### 1. 
Windows Cancel Limitations\n\n**Limitation**: `cancel_v2()` on Windows cannot interrupt running statements.\n\n**Details**:\n- Windows lacks `SIGUSR1` equivalent for query cancellation\n- Cancel only sets `InterruptPending` flag\n- Flag checked between statements, not during execution\n\n**Impact**:\n- Infinite loops in PL/pgSQL cannot be interrupted\n- Long-running aggregate functions cannot be interrupted mid-execution\n- `pg_sleep()` DOES check interrupts (interruptible)\n\n**Workarounds**:\n1. Always set `statement_timeout`:\n   ```sql\n   ALTER DATABASE mydb SET statement_timeout = '5min';\n   ```\n2. Avoid infinite loops in worker SQL\n3. Test cancellation on Unix/Linux platforms first\n\n**Reference**: See `windows/README.md` for implementation details.\n\n### 2. No Cross-Database Workers\n\n**Limitation**: Workers can only connect to the **same database** as launcher.\n\n**Reason**: `BackgroundWorker` API requires database OID at registration.\n\n**Workaround**: Use `dblink` for cross-database operations:\n```sql\nSELECT pg_background_launch_v2($$\n  SELECT * FROM dblink('dbname=other_db', 'SELECT ...')\n$$);\n```\n\n### 3. Per-Session Worker Limits (v1.8+)\n\n**v1.8 Improvement**: Built-in `pg_background.max_workers` GUC limits concurrent workers per session.\n\n```sql\n-- Limit to 10 concurrent workers per session\nSET pg_background.max_workers = 10;\n```\n\n**Remaining Limitation**: No per-user or per-database quotas across sessions.\n\n**Workaround**: Implement application-level quotas for cross-session limits (see [Security](#security-model)).\n\n### 4. 
Worker Exhaustion (INSUFFICIENT_RESOURCES)\n\n**Limitation**: When `max_worker_processes` is exhausted, `pg_background_launch()` and `pg_background_launch_v2()` throw `INSUFFICIENT_RESOURCES`.\n\n**Error Message**:\n```\nERROR: could not register background process\nHINT: You may need to increase max_worker_processes.\n```\n\n**Impact**: This is particularly problematic for **autonomous logging** use cases:\n1. **Data Loss**: The message intended for logging is lost\n2. **Cascading Failures**: The calling transaction may fail unexpectedly\n3. **Unpredictable**: Failures occur sporadically under high load\n\n**Why This Happens**: Background workers share the global `max_worker_processes` pool with:\n- Parallel query workers (`max_parallel_workers`)\n- Autovacuum workers (`autovacuum_max_workers`)\n- Logical replication workers\n- Custom background workers from other extensions\n\n**Mitigation Strategies**:\n\n1. **Increase worker pool** (reduces frequency, doesn't eliminate):\n   ```sql\n   ALTER SYSTEM SET max_worker_processes = 64;\n   -- Requires PostgreSQL restart\n   ```\n\n2. **Implement retry with backoff** (PL/pgSQL excerpt):\n   ```sql\n   BEGIN\n     PERFORM pg_background_launch_v2(...);\n   EXCEPTION\n     WHEN insufficient_resources THEN\n       PERFORM pg_sleep(0.1);  -- Backoff\n       -- Retry or fallback\n   END;\n   ```\n\n3. **Fallback to synchronous execution** (for critical operations):\n   ```sql\n   EXCEPTION\n     WHEN insufficient_resources THEN\n       -- Execute synchronously as fallback\n       INSERT INTO audit_log VALUES (...);\n   END;\n   ```\n\n4. **Pre-check worker availability** (advisory, not guaranteed):\n   ```sql\n   SELECT count(*) \u003c current_setting('max_worker_processes')::int\n   FROM pg_stat_activity\n   WHERE backend_type LIKE '%worker%';\n   ```\n\n5. 
**Reserve capacity** by setting conservative `pg_background.max_workers`:\n   ```sql\n   -- Leave headroom for other workers\n   SET pg_background.max_workers = 8;  -- Even if pool is 64\n   ```\n\n**Recommendation**: For mission-critical logging, always implement a synchronous fallback. Autonomous transactions via pg_background are **best-effort**, not guaranteed.\n\n**See Also**: [Autonomous Audit Logging](#2-autonomous-audit-logging) for robust implementation patterns.\n\n### 5. Result Consumption is One-Time\n\n**Limitation**: `result_v2()` can only be called **once** per handle.\n\n**Reason**: Results are streamed from DSM; there is no persistent storage.\n\n**Workaround**: Materialize the results into a temporary table:\n```sql\n-- Store results in temp table\nCREATE TEMP TABLE worker_output AS\n  SELECT * FROM pg_background_result_v2(\u003cpid\u003e, \u003ccookie\u003e) AS (col text);\n\n-- Query multiple times\nSELECT * FROM worker_output WHERE col LIKE '%foo%';\nSELECT count(*) FROM worker_output;\n```\n\n### 6. No Result Pagination\n\n**Limitation**: Cannot retrieve results in chunks (all-or-nothing).\n\n**Reason**: shm_mq is streaming; no cursor support.\n\n**Impact**: Large result sets (\u003e queue_size) may block the worker.\n\n**Workaround**:\n- Increase `queue_size` parameter\n- Use `LIMIT` in worker SQL\n- Process results incrementally in launcher\n\n### 7. Limited Observability\n\n**Limitation**: `list_v2()` only shows workers in the **current session**.\n\n**Reason**: Hash table is session-local (not shared memory).\n\n**Impact**: Cannot observe other sessions' workers.\n\n**Workaround**: Query `pg_stat_activity`:\n```sql\nSELECT\n  pid,\n  backend_type,\n  state,\n  query,\n  backend_start\nFROM pg_stat_activity\nWHERE backend_type LIKE '%background%';\n```\n\n### 8. 
No Transaction Pinning\n\n**Limitation**: Worker transactions are **fully autonomous** (they cannot join the launcher's transaction).\n\n**Reason**: PostgreSQL does not support distributed transactions.\n\n**Impact**: Cannot implement 2PC-like patterns natively.\n\n**Workaround**: Use `dblink` with `PREPARE TRANSACTION` for XA-like semantics.\n\n### 9. Early Worker Failures (Before `pq_redirect_to_shm_mq`)\n\n**Limitation**: Errors raised in the worker **before** `pq_redirect_to_shm_mq()`\ninstalls the shm_mq destination cannot be captured as a structured error.\n\n**What is \"early\"**: The small window between worker startup and the\n`pq_redirect_to_shm_mq()` call in `pg_background_worker_main` — primarily:\n\n- Failure to attach the DSM segment (`dsm_attach` returning NULL).\n- `shm_toc_lookup` failure (missing TOC entry — implies an internally\n  inconsistent DSM, typically a sign of server misconfiguration).\n- Out-of-memory during the initial worker setup allocations.\n\n**Observable behavior for the launcher**:\n\n- `pg_background_result_v2()` raises `SQLSTATE 08006 \"lost connection to\n  worker process\"` when it tries to read results from the detached shm_mq.\n- `pg_background_wait_v2()` blocks on `WaitForBackgroundWorkerShutdown` and\n  returns silently — it does not raise; the early worker exit leaves no\n  structured error on the wire for it to observe.\n- `pg_background_error_info_v2()` returns a `NULL` row (no structured info).\n- `pg_background_result_info_v2()` reports `completed=true, has_error=false`,\n  since the worker never got far enough to run SQL.\n\n**Why it cannot be captured**: the worker's error-propagation contract\n(`EmitErrorReport` over shm_mq, `ReadyForQuery(DestRemote)`, `pq_flush`)\nrequires the shm_mq destination to already be installed. 
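A simplified sketch of that startup window in the worker entry point (illustrative, not the exact source; `responseq` setup is elided):

```c
void
pg_background_worker_main(Datum main_arg)
{
    dsm_segment *seg = dsm_attach(DatumGetUInt32(main_arg));

    if (seg == NULL)                  /* "early" failure: no shm_mq destination yet, */
        ereport(ERROR,                /* so this error reaches the server log only   */
                (errmsg("could not map dynamic shared memory segment")));

    /* ... shm_toc_attach/shm_toc_lookup, GUC restore: still "early" ... */

    pq_redirect_to_shm_mq(seg, responseq);
    /* From here on, ereport(ERROR) is serialized over the shm_mq and the
     * launcher sees the real SQLSTATE instead of a synthesized 08006. */
}
```
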
Before\n`pq_redirect_to_shm_mq`, `ereport(ERROR)` goes to the server log only; the\nlauncher observes the worker exit and synthesizes `08006`.\n\n**Impact in practice**: these are infrastructure-level failures (DSM OOM,\nmisconfigured `dynamic_shared_memory_type`, missing `shm_toc` entry). They\nare rare in a correctly configured server and do not indicate user-level SQL\nproblems.\n\n**Recommended handling**: treat a `08006` from `pg_background_result_v2()`\nas an infra signal — do not attempt to parse an `error_info_v2` row that may\nbe `NULL`. All ordinary SQL errors (syntax, constraint violation,\ndivision-by-zero, `RAISE EXCEPTION`, statement cancel) propagate through the\nnormal path and appear as their real SQLSTATE, not `08006`.\n\n---\n\n## Best Practices\n\n### 1. Always Use v2 API in Production\n\n✅ **Correct**:\n```sql\nSELECT * FROM pg_background_launch_v2('...') AS h \\gset\nSELECT pg_background_result_v2(:'h.pid', :'h.cookie');\n```\n\n❌ **Avoid**:\n```sql\n-- No PID reuse protection\nSELECT pg_background_launch('...') AS pid \\gset\nSELECT pg_background_result(:pid);\n```\n\n### 2. Set Timeouts for All Workers\n\n```sql\n-- Database-wide default\nALTER DATABASE production SET statement_timeout = '10min';\n\n-- Or per-worker\nSELECT pg_background_launch_v2($$\n  SET statement_timeout = '5min';\n  SELECT slow_query();\n$$);\n```\n\n### 3. Use submit_v2() for Fire-and-Forget\n\n```sql\n-- ✅ Idiomatic: submit + detach\nSELECT * FROM pg_background_submit_v2('INSERT INTO log ...') AS h \\gset\nSELECT pg_background_detach_v2(:'h.pid', :'h.cookie');\n\n-- ❌ Verbose: launch + detach without result retrieval\nSELECT * FROM pg_background_launch_v2('INSERT INTO log ...') AS h \\gset\nSELECT pg_background_detach_v2(:'h.pid', :'h.cookie');\n```\n\n### 4. 
Monitor Worker State Regularly\n\n```sql\n-- Scheduled cleanup of stale workers\nCREATE OR REPLACE FUNCTION cleanup_stale_workers()\nRETURNS void AS $$\nDECLARE r record;\nBEGIN\n  FOR r IN\n    SELECT *\n    FROM pg_background_list_v2() AS (\n      pid int4, cookie int8, launched_at timestamptz, user_id oid,\n      queue_size int4, state text, sql_preview text, last_error text, consumed bool\n    )\n    WHERE state IN ('stopped', 'error')\n      AND (now() - launched_at) \u003e interval '1 hour'\n  LOOP\n    PERFORM pg_background_detach_v2(r.pid, r.cookie);\n  END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\n-- Run periodically\nSELECT cleanup_stale_workers();\n```\n\n### 5. Sanitize All Dynamic SQL\n\n```sql\n-- ✅ Safe: Use format() with %I for identifiers (and %L for literals)\nCREATE FUNCTION safe_worker(table_name text) RETURNS void AS $$\nBEGIN\n  PERFORM pg_background_launch_v2(\n    format('VACUUM %I', table_name)  -- %I for identifiers\n  );\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n### 6. Handle Errors Gracefully\n\n```sql\nDO $$\nDECLARE\n  h pg_background_handle;\n  result_val text;\nBEGIN\n  SELECT * INTO h FROM pg_background_launch_v2('SELECT 1/0');\n\n  BEGIN\n    SELECT * INTO result_val FROM pg_background_result_v2(h.pid, h.cookie) AS (r text);\n  EXCEPTION WHEN OTHERS THEN\n    RAISE NOTICE 'Worker failed: %', SQLERRM;\n    -- Cleanup\n    PERFORM pg_background_detach_v2(h.pid, h.cookie);\n  END;\nEND;\n$$;\n```\n\n### 7. Document Worker Purpose\n\n```sql\n-- ✅ Good: Clear intent\nSELECT * FROM pg_background_launch_v2($$\n  /* Background VACUUM for nightly maintenance */\n  VACUUM (VERBOSE, ANALYZE) user_activity;\n$$) AS h \\gset\n\n-- Comment visible in list_v2() sql_preview\n```\n\n### 8. 
Test Disaster Recovery\n\nEnsure application handles:\n- PostgreSQL restart (all workers lost)\n- Worker crashes (orphaned handles)\n- Launcher session termination (workers detached)\n\n```sql\n-- Simulate crash: check handle invalidation\nSELECT * FROM pg_background_launch_v2('SELECT pg_sleep(100)') AS h \\gset\n-- Restart PostgreSQL\nSELECT pg_background_wait_v2(:'h.pid', :'h.cookie');  -- Should error gracefully\n```\n\n---\n\n## Migration Guide\n\n### Upgrading from v1.7 to v1.8\n\n```sql\nALTER EXTENSION pg_background UPDATE TO '1.8';\n```\n\n**New Features**:\n- ✅ `pg_background_stats_v2()` - Session statistics\n- ✅ `pg_background_progress()` - Worker progress reporting\n- ✅ `pg_background_get_progress_v2()` - Get worker progress\n- ✅ GUCs: `max_workers`, `worker_timeout`, `default_queue_size`\n- ✅ Built-in max workers enforcement\n- ✅ Enhanced robustness (overflow protection, UTF-8 truncation)\n\n**Action Items**:\n1. Review new GUC settings and configure as needed\n2. Consider using progress reporting for long-running workers\n3. Use `stats_v2()` for monitoring\n\n### Upgrading from v1.6 to v1.7\n\n```sql\nALTER EXTENSION pg_background UPDATE TO '1.7';\n```\n\n**Changes**:\n- ✅ Cryptographically secure cookie generation\n- ✅ Dedicated memory context (prevents session bloat)\n- ✅ Exponential backoff polling (reduces CPU usage)\n- ✅ **FIX: Custom schema installation support** (`CREATE EXTENSION ... WITH SCHEMA`)\n- ⚠️ No breaking changes\n\n**Custom Schema Support**: Prior to v1.7, installing the extension in a custom schema would fail with `function public.grant_pg_background_privileges does not exist`. This has been fixed by removing hardcoded schema prefixes (PostgreSQL automatically places objects in the target schema for relocatable extensions) and using dynamic schema lookup in privilege helper functions.\n\n\u003e **⚠️ Important Upgrade Note**: Custom schema support is only available for **fresh installs** of v1.7+. 
If you have an existing installation of v1.4, v1.5, or v1.6, the extension was installed in the `public` schema (older versions did not support custom schemas). Upgrading from these versions will keep the extension in the `public` schema because the upgrade scripts contain hardcoded `public.` references.\n\u003e\n\u003e **To move an existing installation to a custom schema:**\n\u003e ```sql\n\u003e -- 1. Drop the existing extension (preserves your data tables)\n\u003e DROP EXTENSION pg_background;\n\u003e\n\u003e -- 2. Create target schema if needed\n\u003e CREATE SCHEMA IF NOT EXISTS myschema;\n\u003e\n\u003e -- 3. Reinstall in custom schema\n\u003e CREATE EXTENSION pg_background WITH SCHEMA myschema;\n\u003e ```\n\n### Upgrading from v1.5 to v1.6\n\n```sql\nALTER EXTENSION pg_background UPDATE TO '1.6';\n```\n\n**Changes**:\n- ✅ v1 API unchanged (fully backward compatible)\n- ✅ New v2 API functions added\n- ✅ `pgbackground_role` created automatically\n- ✅ Hardened privilege helpers added\n- ⚠️ No breaking changes\n\n**Action Items**:\n1. Review privilege grants (v1.6 revokes PUBLIC access)\n2. Grant `pgbackground_role` to application users\n3. Migrate v1 API calls to v2 in new code\n\n### Upgrading from v1.0-v1.4\n\n```sql\n-- Multi-hop upgrade path\nALTER EXTENSION pg_background UPDATE TO '1.4';\nALTER EXTENSION pg_background UPDATE TO '1.6';\n```\n\n**Breaking Changes**:\n- v1.4: Removed PostgreSQL 9.x support\n- v1.5: Changed DSM lifecycle (no functional API changes)\n- v1.6: Revoked PUBLIC access (requires explicit grants)\n\n**Action Items**:\n1. Test on non-production first\n2. Audit existing privilege grants\n3. 
Update application code to use v2 API\n\n### Migrating from v1 to v2 API\n\n| v1 API | v2 API Equivalent |\n|--------|-------------------|\n| `pg_background_launch(sql)` | `pg_background_launch_v2(sql)` (returns handle) |\n| `pg_background_result(pid)` | `pg_background_result_v2(pid, cookie)` |\n| `pg_background_detach(pid)` | `pg_background_detach_v2(pid, cookie)` |\n| N/A | `pg_background_submit_v2(sql)` (fire-and-forget) |\n| N/A | `pg_background_cancel_v2(pid, cookie)` |\n| N/A | `pg_background_wait_v2(pid, cookie)` |\n| N/A | `pg_background_list_v2()` |\n\n**Example Migration**:\n\nBefore (v1):\n```sql\nSELECT pg_background_launch('VACUUM my_table') AS pid \\gset\nSELECT pg_background_detach(:pid);\n```\n\nAfter (v2):\n```sql\nSELECT * FROM pg_background_submit_v2('VACUUM my_table') AS h \\gset\nSELECT pg_background_detach_v2(:'h.pid', :'h.cookie');\n```\n\n---\n\n## Testing\n\n### Local Testing (Native)\n\nIf you have PostgreSQL development files installed locally:\n\n```bash\n# Build and install\nmake clean \u0026\u0026 make\nsudo make install\n\n# Run regression tests\nmake installcheck\n\n# Clean test artifacts\nmake installcheckclean\n```\n\n### Docker-Based Testing (Recommended)\n\nDocker-based testing requires no local PostgreSQL installation:\n\n```bash\n# Test with PostgreSQL 17 (default)\n./test-local.sh\n\n# Test with specific PostgreSQL version\n./test-local.sh 14\n./test-local.sh 16\n\n# Test all supported versions (14-18)\n./test-local.sh all\n```\n\n### Relocatable Extension Testing\n\nVerify the extension works correctly when installed in a custom schema:\n\n```bash\n# Run comprehensive relocatable tests\n./test-relocatable.sh 17\n```\n\n### Upgrade Path Testing\n\nValidate extension upgrades work correctly:\n\n```bash\n# Test 1.8 → 1.9 upgrade path\n./test-upgrade.sh 17\n```\n\n### CI Pipeline\n\nThe project uses GitHub Actions for continuous integration:\n\n| Job | Description |\n|-----|-------------|\n| **test** | Matrix: PG 14-18 × 
ubuntu-22.04/24.04 regression tests |\n| **relocatable-test** | Validates custom schema installation (PG 17) |\n| **upgrade-test** | Validates 1.8 → 1.9 upgrade path |\n| **lint** | cppcheck and clang-format checks |\n| **security** | CodeQL security analysis |\n\nAll tests must pass before merging to main branches.\n\n---\n\n## Contributing\n\nWe welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for:\n- Code of conduct\n- Development setup\n- Coding standards (PostgreSQL style, `pgindent`)\n- Testing requirements\n- Pull request process\n\n**Quick Start**:\n```bash\ngit clone https://github.com/vibhorkum/pg_background.git\ncd pg_background\nmake clean \u0026\u0026 make \u0026\u0026 sudo make install\nmake installcheck\n```\n\n**Before Submitting PR**:\n- [ ] Code follows PostgreSQL conventions\n- [ ] Regression tests added/updated\n- [ ] Tests pass (`make installcheck`)\n- [ ] No compiler warnings\n- [ ] Documentation updated\n\n---\n\n## License\n\nThis project is licensed under the [PostgreSQL License](LICENSE).\n\nCopyright (c) 2014-2026, Vibhor Kumar and contributors.\nPortions Copyright (c) 1996-2026, PostgreSQL Global Development Group.\n\n---\n\n## Author\n\n**Vibhor Kumar** – Original author and maintainer\n\n**Inspiration**:\n- PostgreSQL Background Worker API\n- `dblink` extension\n- Oracle DBMS_JOB\n\n---\n\n## Related Projects\n\n- **[pg_cron](https://github.com/citusdata/pg_cron)** – Schedule periodic jobs  \n- **[dblink](https://www.postgresql.org/docs/current/dblink.html)** – Cross-database/async queries  \n- **[pgAgent](https://www.pgagent.org/)** – Job scheduler daemon  \n- **[pg_task](https://github.com/RekGRpth/pg_task)** – Task queue extension  \n\n---\n\n**Production Deployments**: For critical workloads, always:\n1. Use **v2 API exclusively** (cookie-protected handles)\n2. Set **statement_timeout** on all workers\n3. **Monitor** `pg_background_list_v2()` and `pg_stat_activity`\n4. 
**Test** disaster recovery scenarios (restarts, crashes)\n5. **Audit** privilege grants regularly\n\n**Version**: 1.8\n**Last Updated**: 2026-02-18\n**Minimum PostgreSQL**: 14\n**Tested Through**: PostgreSQL 18\n