{"id":27915314,"url":"https://github.com/pgsty/pg_exporter","last_synced_at":"2026-04-01T20:41:25.340Z","repository":{"id":38422359,"uuid":"226279197","full_name":"pgsty/pg_exporter","owner":"pgsty","description":"Advanced PostgreSQL \u0026 Pgbouncer Metrics Exporter for Prometheus","archived":false,"fork":false,"pushed_at":"2026-03-21T01:20:23.000Z","size":16542,"stargazers_count":333,"open_issues_count":12,"forks_count":55,"subscribers_count":11,"default_branch":"main","last_synced_at":"2026-03-21T17:04:45.875Z","etag":null,"topics":["monitoring","pg-exporter","pgbouncer","postgres","prometheus","prometheus-exporter"],"latest_commit_sha":null,"homepage":"https://pigsty.io/docs/pg_exporter","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pgsty.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2019-12-06T08:17:23.000Z","updated_at":"2026-03-21T16:06:59.000Z","dependencies_parsed_at":"2022-09-14T12:23:17.327Z","dependency_job_id":"67e830aa-a53c-482c-a4e6-16a6c7e4f0a1","html_url":"https://github.com/pgsty/pg_exporter","commit_stats":{"total_commits":135,"total_committers":9,"mean_commits":15.0,"dds":"0.37777777777777777","last_synced_commit":"01bbe00cac913afb65043cb042848cdc5ae93a0d"},"previous_names":["vonng/pg_exporter"],"tags_count":33,"template":false,"template_full_name":null,"purl":"pkg:github/pgsty/pg_exporter","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pgsty%2Fpg_exporter",
"tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pgsty%2Fpg_exporter/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pgsty%2Fpg_exporter/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pgsty%2Fpg_exporter/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/pgsty","download_url":"https://codeload.github.com/pgsty/pg_exporter/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pgsty%2Fpg_exporter/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31291754,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T13:12:26.723Z","status":"ssl_error","status_checked_at":"2026-04-01T13:12:25.102Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["monitoring","pg-exporter","pgbouncer","postgres","prometheus","prometheus-exporter"],"created_at":"2025-05-06T15:52:44.613Z","updated_at":"2026-04-01T20:41:25.327Z","avatar_url":"https://github.com/pgsty.png","language":"Go","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"static/logo.png\" alt=\"PG Exporter Logo\" height=\"128\" align=\"middle\"\u003e\n\u003c/p\u003e\n\n# PG EXPORTER\n\n[![Website: 
https://pigsty.io/docs/pg_exporter](https://img.shields.io/badge/website-pigsty.io/docs/pg_exporter-slategray?style=flat\u0026logo=cilium\u0026logoColor=white)](https://pigsty.io/docs/pg_exporter)\n[![DockerHub: pgsty/pg_exporter](https://img.shields.io/badge/docker-pgsty/pg_exporter-slategray?style=flat\u0026logo=docker\u0026logoColor=white)](https://hub.docker.com/r/pgsty/pg_exporter)\n[![Version: 1.2.1](https://img.shields.io/badge/version-1.2.1-slategray?style=flat\u0026logo=cilium\u0026logoColor=white)](https://github.com/pgsty/pg_exporter/releases/tag/v1.2.1)\n[![License: Apache-2.0](https://img.shields.io/github/license/pgsty/pg_exporter?logo=opensourceinitiative\u0026logoColor=green\u0026color=slategray)](https://github.com/pgsty/pg_exporter/blob/main/LICENSE)\n[![GitHub Stars](https://img.shields.io/github/stars/pgsty/pg_exporter?style=flat\u0026logo=github\u0026logoColor=black\u0026color=slategray)](https://star-history.com/#pgsty/pg_exporter\u0026Date)\n[![Go Report Card](https://goreportcard.com/badge/github.com/pgsty/pg_exporter)](https://goreportcard.com/report/github.com/pgsty/pg_exporter)\n\n\u003e **Advanced [PostgreSQL](https://www.postgresql.org) \u0026 [pgBouncer](https://www.pgbouncer.org/) metrics [exporter](https://prometheus.io/docs/instrumenting/exporters/) for [Prometheus](https://prometheus.io/)**\n\nPG Exporter brings ultimate monitoring experience to your PostgreSQL with **declarative config**, **dynamic planning**, and **customizable collectors**. 
\nIt provides **600+** metrics and ~3K time series per instance, covering everything you'll need for PostgreSQL observability.\n\nCheck [**https://demo.pigsty.io**](https://demo.pigsty.io/ui/) for a live demo, which is built upon this exporter by [**Pigsty**](https://pigsty.io).\n\n\u003cdiv align=\"center\"\u003e\n    \u003ca href=\"https://pigsty.io/docs/pg_exporter\"\u003eDocs\u003c/a\u003e •    \n    \u003ca href=\"#quick-start\"\u003eQuick Start\u003c/a\u003e •\n    \u003ca href=\"#features\"\u003eFeatures\u003c/a\u003e •\n    \u003ca href=\"#usage\"\u003eUsage\u003c/a\u003e •\n    \u003ca href=\"#api\"\u003eAPI\u003c/a\u003e •\n    \u003ca href=\"#deployment\"\u003eDeployment\u003c/a\u003e •\n    \u003ca href=\"#collectors\"\u003eCollectors\u003c/a\u003e •\n    \u003ca href=\"https://demo.pigsty.io/ui/\"\u003eDemo\u003c/a\u003e\n\u003c/div\u003e\u003cbr\u003e\n\n[![pigsty-dashboard](https://pigsty.io/img/pigsty/dashboard.jpg)](https://demo.pigsty.io)\n\n\n--------\n\n## Features\n\n- **Highly Customizable**: Define almost all metrics through declarative YAML configs\n- **Full Coverage**: Monitor PostgreSQL (10-18+) and pgBouncer (1.8-1.25+) in a single exporter\n- **Fine-grained Control**: Configure timeout, caching, skip conditions, and fatality per collector\n- **Dynamic Planning**: Define multiple query branches based on different conditions\n- **Self-monitoring**: Rich metrics about pg_exporter [itself](https://demo.pigsty.io/d/pgsql-exporter) for complete observability\n- **Production-Ready**: Battle-tested in real-world environments across 12K+ cores for 6+ years\n- **Auto-discovery**: Automatically discover and monitor multiple databases within an instance\n- **Health Check APIs**: Comprehensive HTTP endpoints for service health and traffic routing\n- **Extension Support**: `timescaledb`, `citus`, `pg_stat_statements`, `pg_wait_sampling`,...\n- **Local-first URL behavior**: Built for on-host deployment, with implicit local target fallback and automatic 
`sslmode=disable` when omitted\n\n\u003e PG 9.x is also supported via the [legacy config bundle](legacy/).\n\n\n--------\n\n## Quick Start\n\nRPM / DEB / Tarball packages are available on the GitHub [release page](https://github.com/pgsty/pg_exporter/releases), and in Pigsty's YUM / APT [Infra Repo](https://pigsty.io/docs/repo/infra).\n\nTo run this exporter, you need to pass the postgres/pgbouncer URL via env or arg:\n\n```bash\nPG_EXPORTER_URL='postgres://user:pass@host:port/postgres' pg_exporter\ncurl http://localhost:9630/metrics   # access metrics\n```\n\nThere are built-in metrics such as `pg_up`, `pg_version`, `pg_in_recovery`, `pg_exporter_build_info`, and exporter self-metrics under `pg_exporter_*` (disable with `--disable-intro`).\n\n**All other metrics are defined in the [`pg_exporter.yml`](pg_exporter.yml) config file**.\n\nThere are two monitoring dashboards in the [`monitor/`](monitor/) directory.\n\nYou can use [**Pigsty**](https://pigsty.io) to monitor an existing PostgreSQL cluster or RDS; it will set up pg_exporter for you. \n\n\n--------\n\n## Usage\n\n```bash\nusage: pg_exporter [\u003cflags\u003e]\n\n\nFlags:\n  -h, --[no-]help            Show context-sensitive help (also try --help-long and --help-man).\n  -u, --url=URL              postgres target url\n  -c, --config=CONFIG        path to config dir or file\n      --web.listen-address=:9630 ...  \n                             Addresses on which to expose metrics and web interface. \n      --web.config.file=\"\"   Path to configuration file that can enable TLS or authentication. 
\n  -l, --label=\"\"             constant labels: comma separated list of label=value pairs ($PG_EXPORTER_LABEL)\n  -t, --tag=\"\"               tags: comma separated list of server tags ($PG_EXPORTER_TAG)\n  -C, --[no-]disable-cache   force not using cache ($PG_EXPORTER_DISABLE_CACHE)\n  -m, --[no-]disable-intro   disable internal/exporter self metrics ($PG_EXPORTER_DISABLE_INTRO)\n  -a, --[no-]auto-discovery  automatically scrape all databases for the given server ($PG_EXPORTER_AUTO_DISCOVERY)\n  -x, --exclude-database=\"template0,template1,postgres\"  \n                             excluded databases when enabling auto-discovery ($PG_EXPORTER_EXCLUDE_DATABASE)\n  -i, --include-database=\"\"  included databases when enabling auto-discovery ($PG_EXPORTER_INCLUDE_DATABASE)\n  -n, --namespace=\"\"         prefix of built-in metrics, (pg|pgbouncer) by default ($PG_EXPORTER_NAMESPACE)\n  -f, --[no-]fail-fast       fail fast instead of waiting during start-up ($PG_EXPORTER_FAIL_FAST)\n  -T, --connect-timeout=100  connect timeout in ms, 100 by default ($PG_EXPORTER_CONNECT_TIMEOUT)\n  -P, --web.telemetry-path=\"/metrics\"  \n                             URL path under which to expose metrics. ($PG_EXPORTER_TELEMETRY_PATH)\n  -D, --[no-]dry-run         dry run and print raw configs\n  -E, --[no-]explain         explain server planned queries\n      --log.level=\"info\"     log level: debug|info|warn|error\n      --log.format=\"logfmt\"  log format: logfmt|json\n      --[no-]version         Show application version.\n```\n\nParameters can be given via command-line args or environment variables. 
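For instance, the local-first default URL listed in the table of defaults can be sketched with ordinary shell parameter expansion (an illustrative sketch of the documented fallback behavior, not the exporter's actual code):

```shell
# Illustrative sketch only: when --url / PG_EXPORTER_URL is absent,
# pg_exporter documents a fallback to this local-first DSN.
unset PG_EXPORTER_URL
URL="${PG_EXPORTER_URL:-postgresql:///?sslmode=disable}"
echo "$URL"    # postgresql:///?sslmode=disable
```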
\n\n| CLI Arg                | Environment Variable           | Default Value                    |\n|------------------------|--------------------------------|----------------------------------|\n| `--url`                | `PG_EXPORTER_URL`              | `postgresql:///?sslmode=disable` |\n| `--config`             | `PG_EXPORTER_CONFIG`           | `pg_exporter.yml`                |\n| `--label`              | `PG_EXPORTER_LABEL`            |                                  |\n| `--tag`                | `PG_EXPORTER_TAG`              |                                  |\n| `--auto-discovery`     | `PG_EXPORTER_AUTO_DISCOVERY`   | `true`                           |\n| `--disable-cache`      | `PG_EXPORTER_DISABLE_CACHE`    | `false`                          |\n| `--fail-fast`          | `PG_EXPORTER_FAIL_FAST`        | `false`                          |\n| `--exclude-database`   | `PG_EXPORTER_EXCLUDE_DATABASE` |                                  |\n| `--include-database`   | `PG_EXPORTER_INCLUDE_DATABASE` |                                  |\n| `--namespace`          | `PG_EXPORTER_NAMESPACE`        | `pg\\|pgbouncer`                  |\n| `--connect-timeout`    | `PG_EXPORTER_CONNECT_TIMEOUT`  | `100`                            |\n| `--dry-run`            |                                | `false`                          |\n| `--explain`            |                                | `false`                          |\n| `--log.level`          |                                | `info`                           |\n| `--log.format`         |                                | `logfmt`                         |\n| `--web.listen-address` |                                | `:9630`                          |\n| `--web.config.file`    |                                | `\"\"`                             |\n| `--web.telemetry-path` | `PG_EXPORTER_TELEMETRY_PATH`   | `/metrics`                       |\n\n### Connection URL Defaults\n\n- If `--url` / `PG_EXPORTER_URL` is not 
provided, pg_exporter falls back to a local-first default URL: `postgresql:///?sslmode=disable`.\n- If `sslmode` is not explicitly set in the URL, pg_exporter injects `sslmode=disable` by default.\n- This is an intentional design choice for common on-host deployments (`pg_exporter` and PostgreSQL/PgBouncer on the same machine), where loopback TLS adds overhead with little practical gain.\n- If you need TLS for remote targets, provide `sslmode` explicitly in the connection URL (for example: `sslmode=require`, `verify-ca`, `verify-full`).\n\n\n------\n\n## API\n\nPG Exporter provides a rich set of HTTP endpoints:\n\nHere are `pg_exporter` REST APIs\n\n```bash\n# Fetch metrics (customizable)\ncurl localhost:9630/metrics\n\n# Reload configuration\ncurl -X POST localhost:9630/reload\n\n# Explain configuration\ncurl localhost:9630/explain\n\n# Print Statistics\ncurl localhost:9630/stat\n\n# Aliveness health check (200 up, 503 down)\ncurl localhost:9630/up\ncurl localhost:9630/health\ncurl localhost:9630/liveness\ncurl localhost:9630/readiness\n\n# traffic route health check\n\n### 200 if not in recovery, 404 if in recovery, 503 if server is down\ncurl localhost:9630/primary\ncurl localhost:9630/leader\ncurl localhost:9630/master\ncurl localhost:9630/read-write\ncurl localhost:9630/rw\n\n### 200 if in recovery, 404 if not in recovery, 503 if server is down\ncurl localhost:9630/replica\ncurl localhost:9630/standby\ncurl localhost:9630/read-only\ncurl localhost:9630/ro\n\n### 200 if server is ready for read traffic (including primary), 503 if server is down\ncurl localhost:9630/read\n```\n\n\n--------\n\n## Build\n\nTo build a static stand-alone binary for docker scratch\n\n```bash\nCGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags \"-static\"' -o pg_exporter\n```\n\nOr [download](https://github.com/pgsty/pg_exporter/releases) the latest prebuilt binaries from release pages.\n\nWe also have pre-packaged RPM / DEB packages in the [Pigsty Infra 
Repo](https://pigsty.io/docs/repo/infra/)\n\n\n--------\n\n## Docker\n\nYou can find pre-built amd64/arm64 docker images here: [pgsty/pg_exporter](https://hub.docker.com/r/pgsty/pg_exporter)\n\n\n--------\n\n## Deployment\n\nRedHat RPM and Debian/Ubuntu DEB packages are made with `nfpm` for `x86/arm64`:\n\n* `/usr/bin/pg_exporter`: the pg_exporter binary\n* [`/etc/pg_exporter.yml`](pg_exporter.yml): the config file\n* [`/usr/lib/systemd/system/pg_exporter.service`](package/pg_exporter.service): systemd service file\n* [`/etc/default/pg_exporter`](package/pg_exporter.default): systemd service envs \u0026 options\n\n\nThese packages are also available in Pigsty's [Infra Repo](https://pigsty.io/docs/repo/infra).\n\n\n------\n\n## Collectors\n\nConfigs lie at the core of `pg_exporter`. In fact, this project contains more lines of YAML than Go.\n\n* A monolithic, batteries-included config file: [`pg_exporter.yml`](pg_exporter.yml)\n* Separate metrics definitions in [`config/collector`](config/)\n* An example of how to write a config file: [`doc.yml`](config/0000-doc.yml)\n* Legacy config bundle for PostgreSQL 9.1 - 9.6: [`legacy/`](legacy/) ([`legacy/README.md`](legacy/README.md))\n\nThe current `pg_exporter` ships with the following metrics collector definition files:\n\n- [0000-doc.yml](config/0000-doc.yml)\n- [0110-pg.yml](config/0110-pg.yml)\n- [0120-pg_meta.yml](config/0120-pg_meta.yml)\n- [0130-pg_setting.yml](config/0130-pg_setting.yml)\n- [0210-pg_repl.yml](config/0210-pg_repl.yml)\n- [0220-pg_sync_standby.yml](config/0220-pg_sync_standby.yml)\n- [0230-pg_downstream.yml](config/0230-pg_downstream.yml)\n- [0240-pg_slot.yml](config/0240-pg_slot.yml)\n- [0250-pg_recv.yml](config/0250-pg_recv.yml)\n- [0260-pg_sub.yml](config/0260-pg_sub.yml)\n- [0270-pg_origin.yml](config/0270-pg_origin.yml)\n- [0300-pg_io.yml](config/0300-pg_io.yml)\n- [0310-pg_size.yml](config/0310-pg_size.yml)\n- [0320-pg_archiver.yml](config/0320-pg_archiver.yml)\n- 
[0330-pg_bgwriter.yml](config/0330-pg_bgwriter.yml)\n- [0331-pg_checkpointer.yml](config/0331-pg_checkpointer.yml)\n- [0340-pg_ssl.yml](config/0340-pg_ssl.yml)\n- [0350-pg_checkpoint.yml](config/0350-pg_checkpoint.yml)\n- [0355-pg_timeline.yml](config/0355-pg_timeline.yml)\n- [0360-pg_recovery.yml](config/0360-pg_recovery.yml)\n- [0370-pg_slru.yml](config/0370-pg_slru.yml)\n- [0380-pg_shmem.yml](config/0380-pg_shmem.yml)\n- [0390-pg_wal.yml](config/0390-pg_wal.yml)\n- [0410-pg_activity.yml](config/0410-pg_activity.yml)\n- [0420-pg_wait.yml](config/0420-pg_wait.yml)\n- [0430-pg_backend.yml](config/0430-pg_backend.yml)\n- [0440-pg_xact.yml](config/0440-pg_xact.yml)\n- [0450-pg_lock.yml](config/0450-pg_lock.yml)\n- [0460-pg_query.yml](config/0460-pg_query.yml)\n- [0510-pg_vacuuming.yml](config/0510-pg_vacuuming.yml)\n- [0520-pg_indexing.yml](config/0520-pg_indexing.yml)\n- [0530-pg_clustering.yml](config/0530-pg_clustering.yml)\n- [0540-pg_backup.yml](config/0540-pg_backup.yml)\n- [0610-pg_db.yml](config/0610-pg_db.yml)\n- [0620-pg_db_confl.yml](config/0620-pg_db_confl.yml)\n- [0640-pg_pubrel.yml](config/0640-pg_pubrel.yml)\n- [0650-pg_subrel.yml](config/0650-pg_subrel.yml)\n- [0700-pg_table.yml](config/0700-pg_table.yml)\n- [0710-pg_index.yml](config/0710-pg_index.yml)\n- [0720-pg_func.yml](config/0720-pg_func.yml)\n- [0730-pg_seq.yml](config/0730-pg_seq.yml)\n- [0740-pg_relkind.yml](config/0740-pg_relkind.yml)\n- [0750-pg_defpart.yml](config/0750-pg_defpart.yml)\n- [0810-pg_table_size.yml](config/0810-pg_table_size.yml)\n- [0820-pg_table_bloat.yml](config/0820-pg_table_bloat.yml)\n- [0830-pg_index_bloat.yml](config/0830-pg_index_bloat.yml)\n- [0910-pgbouncer_list.yml](config/0910-pgbouncer_list.yml)\n- [0920-pgbouncer_database.yml](config/0920-pgbouncer_database.yml)\n- [0930-pgbouncer_stat.yml](config/0930-pgbouncer_stat.yml)\n- [0940-pgbouncer_pool.yml](config/0940-pgbouncer_pool.yml)\n- [1000-pg_wait_event.yml](config/1000-pg_wait_event.yml)\n- 
[1800-pg_tsdb_hypertable.yml](config/1800-pg_tsdb_hypertable.yml)\n- [1900-pg_citus.yml](config/1900-pg_citus.yml)\n- [2000-pg_heartbeat.yml](config/2000-pg_heartbeat.yml)\n\n\n\u003e #### Note\n\u003e\n\u003e Supported versions: PostgreSQL 10, 11, 12, 13, 14, 15, 16, 17, 18+\n\u003e\n\u003e But you can still get PostgreSQL 9.1 - 9.6 support by switching to the [`legacy/pg_exporter.yml`](legacy/pg_exporter.yml) config\n\n`pg_exporter` will generate approximately 600 metrics for a completely new database cluster.\nFor a real-world database with 10 ~ 100 tables, it may generate 1k ~ 10k metrics. \n\nYou may need to modify or disable some database-level metrics on a database with several thousand or more tables to complete the scrape in time.\n\nConfig files use the YAML format; there are many examples in the [conf](https://github.com/pgsty/pg_exporter/tree/main/config/collector) dir, and here is a [sample](config/0000-doc.yml) config.\n\n```\n#==============================================================#\n# 1. Config File\n#==============================================================#\n# The configuration file for pg_exporter is a YAML file.\n# Default configurations are retrieved via the following precedence:\n#     1. command line args:      --config=\u003cconfig path\u003e\n#     2. environment variables:  PG_EXPORTER_CONFIG=\u003cconfig path\u003e\n#     3. pg_exporter.yml        (Current directory)\n#     4. /etc/pg_exporter.yml   (config file)\n#     5. /etc/pg_exporter       (config dir)\n\n#==============================================================#\n# 2. 
Config Format\n#==============================================================#\n# pg_exporter config can be a single YAML file, or a directory containing a series of separate YAML files.\n# Each YAML config file consists of one or more Collector definitions, which are top-level objects.\n# If a directory is provided, all YAML files in that directory (non-recursive; subdirectories are ignored) will be merged in alphabetical order.\n# Collector definition examples are shown below.\n\n#==============================================================#\n# 3. Collector Example\n#==============================================================#\n#  # Here is an example of a metrics collector definition\n#  pg_primary_only:       # Collector branch name. Must be UNIQUE among the entire configuration\n#    name: pg             # Collector namespace, used as METRIC PREFIX, set to branch name by default, can be overridden\n#                         # the same namespace may contain multiple collector branches. 
It's the user's responsibility\n#                         # to make sure that AT MOST ONE collector is picked for each namespace.\n#\n#    desc: PostgreSQL basic information (on primary)                 # Collector description\n#    query: |                                                        # Metrics Query SQL\n#\n#      SELECT extract(EPOCH FROM CURRENT_TIMESTAMP)                  AS timestamp,\n#             pg_current_wal_lsn() - '0/0'                           AS lsn,\n#             pg_current_wal_insert_lsn() - '0/0'                    AS insert_lsn,\n#             pg_current_wal_lsn() - '0/0'                           AS write_lsn,\n#             pg_current_wal_flush_lsn() - '0/0'                     AS flush_lsn,\n#             extract(EPOCH FROM now() - pg_postmaster_start_time()) AS uptime,\n#             extract(EPOCH FROM now() - pg_conf_load_time())        AS conf_reload_time,\n#             pg_is_in_backup()                                      AS is_in_backup,\n#             extract(EPOCH FROM now() - pg_backup_start_time())     AS backup_time;\n#\n#                             # [OPTIONAL] metadata fields, control collector behavior\n#    ttl: 10                  # Cache TTL: in seconds, how long pg_exporter will cache this collector's query result.\n#    timeout: 0.1             # Query Timeout: in seconds, queries that exceed this limit will be canceled.\n#    min_version: 100000      # minimal supported version, boundary IS included. 
In server version number format,\n#    max_version: 130000      # maximal supported version, boundary NOT included. In server version number format\n#    fatal: false             # If a collector marked `fatal` fails, the entire scrape will abort immediately and be marked as failed\n#    skip: false              # A collector marked `skip` will not be installed during the planning procedure\n#\n#    tags: [cluster, primary] # Collector tags, used for planning and scheduling\n#\n#    # tags are a list of strings, which could be:\n#    #   * `cluster` marks this query as cluster level, so it will only execute once for the same PostgreSQL Server\n#    #   * `primary` or `master`  mean this query can only run on a primary instance (WILL NOT execute if pg_is_in_recovery())\n#    #   * `standby` or `replica` mean this query can only run on a replica instance (WILL execute if pg_is_in_recovery())\n#    # some special tag prefixes have special interpretations:\n#    #   * `dbname:\u003cdbname\u003e` means this query will ONLY be executed on the database named `\u003cdbname\u003e`\n#    #   * `username:\u003cuser\u003e` means this query will only be executed when connecting as user `\u003cuser\u003e`\n#    #   * `extension:\u003cextname\u003e` means this query will only be executed when extension `\u003cextname\u003e` is installed\n#    #   * `schema:\u003cnspname\u003e` means this query will only be executed when schema `\u003cnspname\u003e` exists\n#    #   * `not:\u003cnegtag\u003e` means this query WILL NOT be executed when the exporter is tagged with `\u003cnegtag\u003e`\n#    #   * `\u003ctag\u003e` means this query WILL be executed when the exporter is tagged with `\u003ctag\u003e`\n#    #   (\u003ctag\u003e cannot be cluster, primary, standby, master, replica, etc...)\n#\n#    # One or more \"predicate queries\" may be defined for a metric query. These\n#    # are run before the main metric query (after any cache hit check). 
If all\n#    # of them, when run sequentially, return a single row with a single column\n#    # boolean true result, the main metric query is executed. If any of them\n#    # return false or return zero rows, the main query is skipped. If any\n#    # predicate query returns more than one row, a non-boolean result, or fails\n#    # with an error, the whole query is marked failed. Predicate queries can be\n#    # used to check for the presence of specific functions, tables, extensions,\n#    # settings, and vendor-specific pg features before running the main query.\n#\n#    predicate_queries:\n#      - name: predicate query name\n#        predicate_query: |\n#          SELECT EXISTS (SELECT 1 FROM information_schema.routines WHERE routine_schema = 'pg_catalog' AND routine_name = 'pg_backup_start_time');\n#\n#    metrics:                 # List of returned columns, each column must have a `name` and `usage`, `rename` and `description` are optional\n#      - timestamp:           # Column name, should be exactly the same as returned column name\n#          usage: GAUGE       # Metric type, `usage` could be\n#                                  * DISCARD: completely ignoring this field\n#                                  * LABEL:   use columnName=columnValue as a label in metric\n#                                  * GAUGE:   Mark column as a gauge metric, full name will be `\u003cquery.name\u003e_\u003ccolumn.name\u003e`\n#                                  * COUNTER: Same as above, except it is a counter rather than a gauge.\n#          rename: ts         # [OPTIONAL] Alias, optional, the alias will be used instead of the column name\n#          description: xxxx  # [OPTIONAL] Description of the column, will be used as a metric description\n#          default: 0         # [OPTIONAL] Default value, will be used when column is NULL\n#          scale:   1000      # [OPTIONAL] Scale the value by this factor\n#      - lsn:\n#          usage: COUNTER\n#          description: log 
sequence number, current write location (on primary)\n#      - insert_lsn:\n#          usage: COUNTER\n#          description: primary only, location of current wal inserting\n#      - write_lsn:\n#          usage: COUNTER\n#          description: primary only, location of current wal writing\n#      - flush_lsn:\n#          usage: COUNTER\n#          description: primary only, location of current wal syncing\n#      - uptime:\n#          usage: GAUGE\n#          description: seconds since postmaster start\n#      - conf_reload_time:\n#          usage: GAUGE\n#          description: seconds since last configuration reload\n#      - is_in_backup:\n#          usage: GAUGE\n#          description: 1 if backup is in progress\n#      - backup_time:\n#          usage: GAUGE\n#          description: seconds since the current backup started. null if there is none\n#\n#      .... # you can also use rename \u0026 scale to customize the metric name and value:\n#      - checkpoint_write_time:\n#          rename: write_time\n#          usage: COUNTER\n#          scale: 1e-3\n#          description: Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in seconds\n\n#==============================================================#\n# 4. 
Collector Presets\n#==============================================================#\n# pg_exporter is shipped with a series of preset collectors (already numbered and ordered by filename)\n#\n# 1xx  Basic metrics:        basic info, metadata, settings\n# 2xx  Replication metrics:  replication, walreceiver, downstream, sync standby, slots, subscription\n# 3xx  Persist metrics:      size, wal, background writer, checkpointer, ssl, checkpoint, recovery, slru cache, shmem usage\n# 4xx  Activity metrics:     backend count group by state, wait event, locks, xacts, queries\n# 5xx  Progress metrics:     clustering, vacuuming, indexing, basebackup, copy\n# 6xx  Database metrics:     pg_database, publication, subscription\n# 7xx  Object metrics:       pg_class, table, index, function, sequence, default partition\n# 8xx  Optional metrics:     optional metrics collectors (disabled by default; slow queries)\n# 9xx  Pgbouncer metrics:    metrics from the pgbouncer admin database `pgbouncer`\n#\n# 100-599 Metrics for the entire database cluster  (scraped once)\n# 600-899 Metrics for a single database instance (scraped for each database, except for pg_db itself)\n\n#==============================================================#\n# 5. Cache TTL\n#==============================================================#\n# Caching can be used to reduce query overhead; it is enabled by setting a non-zero value for `ttl`.\n# It is highly recommended to use caching to avoid duplicate scrapes, especially when multiple Prometheus\n# servers are scraping the same instance with slow monitoring queries. Setting `ttl` to zero or leaving it blank will\n# disable result caching, which is the default behavior.\n#\n# TTL has to be smaller than your scrape interval. A 15s scrape interval and a 10s TTL are a good start for\n# a production environment. 
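#
# For example (an illustrative fragment; the collector name and values here are
# hypothetical, not a shipped default), an expensive bloat-check collector could
# cache its result for five minutes while cheaper collectors keep a short TTL:
#
#   pg_table_bloat_demo:
#     ttl: 300        # serve cached results for up to 5 minutes
#     timeout: 1      # allow this slow query 1s instead of the default 100ms
#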
Some expensive monitoring queries (such as the size/bloat checks) will have a longer `ttl`,\n# which can also be used as a mechanism to achieve `different scrape frequency`.\n\n#==============================================================#\n# 6. Query Timeout\n#==============================================================#\n# Collectors can be configured with an optional Timeout. If the collector's query runs longer than that\n# timeout, it will be canceled immediately. Setting the `timeout` to 0 or leaving it blank will reset it to\n# the default timeout of 0.1 (100ms). Setting it to any negative number will disable the query timeout feature.\n# All queries have a default timeout of 100ms; if exceeded, the query will be canceled immediately to avoid\n# an avalanche. You can explicitly override that option, but beware: in some extreme cases, if all your\n# timeouts sum up to more than your scrape/cache interval (usually 15s), the queries may still be jammed.\n# Or, you can just disable potentially slow queries.\n\n#==============================================================#\n# 7. Version Compatibility\n#==============================================================#\n# Each collector has two optional version compatibility parameters: `min_version` and `max_version`.\n# These two parameters specify the version compatibility of the collector. If the target postgres/pgbouncer's\n# version is less than `min_version`, or higher than `max_version`, the collector will not be installed.\n# These two parameters use the PostgreSQL server version number format, which is a 6-digit integer\n# formatted as \u003cmajor: 2 digit\u003e\u003cminor: 2 digit\u003e\u003crelease: 2 digit\u003e.\n# For example, 090600 stands for 9.6, and 120100 stands for 12.1.\n# Beware that the version compatibility range is left-inclusive, right-exclusive: [min, max); setting a bound to zero or\n# leaving it blank acts as -inf or +inf.\n\n#==============================================================#\n# 8. 
Fatality\n#==============================================================#\n# If a collector marked `fatal` fails, the entire scrape operation will be marked as failed and the key metrics\n# `pg_up` / `pgbouncer_up` will be reset to 0. It is always a good practice to set up AT LEAST ONE fatal\n# collector for pg_exporter. `pg.pg_primary_only` and `pgbouncer_list` are the default fatal collectors.\n#\n# If a collector without the `fatal` flag fails, it will increase the global fail counters, but the scrape operation\n# will carry on. The entire scrape result will not be marked as failed, and thus will not affect the `\u003cxx\u003e_up` metric.\n\n#==============================================================#\n# 9. Skip\n#==============================================================#\n# A collector with the `skip` flag set to true will NOT be installed.\n# This can be a handy option to disable collectors.\n\n#==============================================================#\n# 10. Tags and Planning\n#==============================================================#\n# Tags are designed for collector planning \u0026 scheduling. They can be handy for customizing which queries run\n# on which instances. 
Thus you can use one single monolithic config for multiple environments.\n#\n#  Tags are a list of strings, each of which could be:\n#  Pre-defined special tags\n#    * `cluster` marks this collector as cluster level, so it will ONLY BE EXECUTED ONCE for the same PostgreSQL Server\n#    * `primary` or `master` mark this collector as primary-only, so it WILL NOT work if pg_is_in_recovery()\n#    * `standby` or `replica` mark this collector as replica-only, so it WILL work only if pg_is_in_recovery()\n#  Special tag prefixes which have different interpretations:\n#    * `dbname:\u003cdbname\u003e` means this collector will ONLY work on the database named `\u003cdbname\u003e`\n#    * `username:\u003cuser\u003e` means this collector will ONLY work when connecting as user `\u003cuser\u003e`\n#    * `extension:\u003cextname\u003e` means this collector will ONLY work when extension `\u003cextname\u003e` is installed\n#    * `schema:\u003cnspname\u003e` means this collector will only work when schema `\u003cnspname\u003e` exists\n#  Customized positive tags (filter) and negative tags (taint)\n#    * `not:\u003cnegtag\u003e` means this collector WILL NOT work when the exporter is tagged with `\u003cnegtag\u003e`\n#    * `\u003ctag\u003e` means this collector WILL work if the exporter is tagged with `\u003ctag\u003e` (special tags not included)\n#\n#  pg_exporter will trigger the Planning procedure after connecting to the target. It will gather database facts\n#  and match them with tags and other metadata (such as the supported version range). 
A collector will only\n#  be installed if it is compatible with the target server.\n\n```\n\n\n\n--------------------\n\n## About\n\nAuthor: [Vonng](https://vonng.com/en) ([rh@vonng.com](mailto:rh@vonng.com))\n\nContributors: https://github.com/pgsty/pg_exporter/graphs/contributors\n\nLicense: [Apache-2.0](LICENSE)\n\nCopyright: 2018-2026 rh@vonng.com\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"static/logo.png\" alt=\"PG Exporter Logo\" height=\"128\" align=\"middle\"\u003e\n\u003c/p\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpgsty%2Fpg_exporter","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpgsty%2Fpg_exporter","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpgsty%2Fpg_exporter/lists"}