# Prometheus Exporter

Prometheus Exporter allows you to aggregate custom metrics from multiple processes and export to Prometheus.
It provides a very flexible framework for handling Prometheus metrics and can operate in both single- and multi-process modes.

To learn more, see [Instrumenting Rails with Prometheus](https://samsaffron.com/archive/2018/02/02/instrumenting-rails-with-prometheus) (it has pretty pictures!)

* [Requirements](#requirements)
* [Migrating from v0.x](#migrating-from-v0x)
* [Installation](#installation)
* [Usage](#usage)
  * [Single process mode](#single-process-mode)
    * [Custom quantiles and buckets](#custom-quantiles-and-buckets)
  * [Multi process mode](#multi-process-mode)
  * [Rails integration](#rails-integration)
    * [Per-process stats](#per-process-stats)
    * [Sidekiq metrics](#sidekiq-metrics)
    * [Shoryuken metrics](#shoryuken-metrics)
    * [ActiveRecord Connection Pool Metrics](#activerecord-connection-pool-metrics)
    * [Delayed Job plugin](#delayed-job-plugin)
    * [Hutch metrics](#hutch-message-processing-tracer)
  * [Puma metrics](#puma-metrics)
  * [Unicorn metrics](#unicorn-process-metrics)
  * [Resque metrics](#resque-metrics)
  * [GoodJob metrics](#goodjob-metrics)
  * [Custom type collectors](#custom-type-collectors)
  * [Multi process mode with custom collector](#multi-process-mode-with-custom-collector)
  * [GraphQL support](#graphql-support)
  * [Metrics default prefix / labels](#metrics-default-prefix--labels)
  * [Client default labels](#client-default-labels)
  * [Client default host](#client-default-host)
  * [Histogram mode](#histogram-mode)
  * [Histogram - custom buckets](#histogram---custom-buckets)
* [Transport concerns](#transport-concerns)
* [JSON generation and parsing](#json-generation-and-parsing)
* [Logging](#logging)
* [Docker Usage](#docker-usage)
* [Contributing](#contributing)
* [License](#license)
* [Code of Conduct](#code-of-conduct)

## Requirements

Ruby 3.0.0 or later is required; Ruby 2.7 has been EOL since March 31st, 2023.

## Migrating from v0.x

There are some major
changes in v1.x from v0.x.

- Some metrics were renamed to match the [Prometheus official guide for metric names](https://prometheus.io/docs/practices/naming/#metric-names). (#184)

## Installation

Add this line to your application's Gemfile:

```ruby
gem 'prometheus_exporter'
```

And then execute:

    $ bundle

Or install it yourself as:

    $ gem install prometheus_exporter

## Usage

### Single process mode

The simplest way to consume Prometheus Exporter is in single process mode.

```ruby
require 'prometheus_exporter/server'

# client allows instrumentation to send info to server
require 'prometheus_exporter/client'
require 'prometheus_exporter/instrumentation'

# bind is the address on which the webserver will listen
# port is the port that will provide the /metrics route
server = PrometheusExporter::Server::WebServer.new bind: 'localhost', port: 12345
server.start

# wire up a default local client
PrometheusExporter::Client.default = PrometheusExporter::LocalClient.new(collector: server.collector)

# this ensures basic process instrumentation metrics are added, such as RSS and Ruby metrics
PrometheusExporter::Instrumentation::Process.start(type: "my program", labels: {my_custom: "label for all process metrics"})

gauge = PrometheusExporter::Metric::Gauge.new("rss", "used RSS for process")
counter = PrometheusExporter::Metric::Counter.new("web_requests", "number of web requests")
summary = PrometheusExporter::Metric::Summary.new("page_load_time", "time it took to load page")
histogram = PrometheusExporter::Metric::Histogram.new("api_access_time", "time it took to call api")

server.collector.register_metric(gauge)
server.collector.register_metric(counter)
server.collector.register_metric(summary)
server.collector.register_metric(histogram)

gauge.observe(get_rss)
gauge.observe(get_rss)

counter.observe(1, route: 'test/route')
counter.observe(1, route:
'another/route')

summary.observe(1.1)
summary.observe(1.12)
summary.observe(0.12)

histogram.observe(0.2, api: 'twitter')

# http://localhost:12345/metrics now returns all your metrics
```

#### Custom quantiles and buckets

You can also choose custom quantiles for summaries and custom buckets for histograms.

```ruby
summary = PrometheusExporter::Metric::Summary.new("load_time", "time to load page", quantiles: [0.99, 0.75, 0.5, 0.25])
histogram = PrometheusExporter::Metric::Histogram.new("api_time", "time to call api", buckets: [0.1, 0.5, 1])
```

### Multi process mode

In some cases (for example, unicorn or puma clusters) you may want to aggregate metrics across multiple processes.

The simplest way to achieve this is to use the built-in collector.

First, run an exporter on your desired port (the default binds to localhost on port 9394):

```
$ prometheus_exporter
```

And in your application:

```ruby
require 'prometheus_exporter/client'

client = PrometheusExporter::Client.default
gauge = client.register(:gauge, "awesome", "amount of awesome")

gauge.observe(10)
gauge.observe(99, day: "friday")
```

Then you will get the metrics:

```
$ curl localhost:9394/metrics
# HELP collector_working Is the master process collector able to collect metrics
# TYPE collector_working gauge
collector_working 1

# HELP awesome amount of awesome
# TYPE awesome gauge
awesome{day="friday"} 99
awesome 10
```

Custom quantiles for summaries and buckets for histograms can also be passed in.

```ruby
require 'prometheus_exporter/client'

client = PrometheusExporter::Client.default
histogram = client.register(:histogram, "api_time", "time to call api", buckets: [0.1, 0.5, 1])

histogram.observe(0.2, api: 'twitter')
```

### Rails integration

You can easily integrate into any Rack application.

In your Gemfile:

```ruby
gem 'prometheus_exporter'
```

In an
initializer:

```ruby
unless Rails.env.test?
  require 'prometheus_exporter/middleware'

  # This reports stats per request like HTTP status and timings
  Rails.application.middleware.unshift PrometheusExporter::Middleware
end
```

Ensure you run the exporter in a monitored background process:

```
$ bundle exec prometheus_exporter
```

#### Choosing the style of method patching

By default, `prometheus_exporter` uses `alias_method` to instrument methods used by SQL and Redis, as it is the fastest approach (see [this article](https://samsaffron.com/archive/2017/10/18/fastest-way-to-profile-a-method-in-ruby)). You may want to add instrumentation libraries beyond `prometheus_exporter` to your app. This can become problematic if those other libraries instead use `prepend` to instrument methods. To resolve this, you can tell the middleware to instrument using `prepend` by passing an `instrument` option like so:

```ruby
Rails.application.middleware.unshift PrometheusExporter::Middleware, instrument: :prepend
```

#### Metrics collected by Rails integration middleware

| Type    | Name                                      | Description                                                 |
| ---     | ---                                       | ---                                                         |
| Counter | `http_requests_total`                     | Total HTTP requests from web app                            |
| Summary | `http_request_duration_seconds`           | Time spent in HTTP reqs in seconds                          |
| Summary | `http_request_redis_duration_seconds`¹    | Time spent in HTTP reqs in Redis, in seconds                |
| Summary | `http_request_sql_duration_seconds`²      | Time spent in HTTP reqs in SQL in seconds                   |
| Summary | `http_request_queue_duration_seconds`³    | Time spent queueing the request in load balancer in seconds |
| Summary | `http_request_memcache_duration_seconds`⁴ | Time spent in HTTP reqs in Memcache in seconds              |

All metrics have a `controller` and an `action` label.
`http_requests_total` additionally has a (HTTP response) `status` label.

To add your own labels to the default metrics, create a subclass of `PrometheusExporter::Middleware`, override `custom_labels`, and use it in your initializer.

```ruby
class MyMiddleware < PrometheusExporter::Middleware
  def custom_labels(env)
    labels = {}

    if env['HTTP_X_PLATFORM']
      labels['platform'] = env['HTTP_X_PLATFORM']
    end

    labels
  end
end
```

If you're not using a Rails-like framework, you can override `PrometheusExporter::Middleware#default_labels` to add more relevant labels.
For example, you can mimic [prometheus-client](https://github.com/prometheus/client_ruby) labels with code like this:

```ruby
class MyMiddleware < PrometheusExporter::Middleware
  def default_labels(env, result)
    status = (result && result[0]) || -1
    path = [env["SCRIPT_NAME"], env["PATH_INFO"]].join
    {
      path: strip_ids_from_path(path),
      method: env["REQUEST_METHOD"],
      status: status
    }
  end

  def strip_ids_from_path(path)
    path
      .gsub(%r{/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}(/|$)}, '/:uuid\\1')
      .gsub(%r{/\d+(/|$)}, '/:id\\1')
  end
end
```

That way you won't have all metrics labeled with `controller=other` and `action=other`, but will instead have labels such as

```
ruby_http_request_duration_seconds{path="/api/v1/teams/:id",method="GET",status="200",quantile="0.99"} 0.009880661998977303
```

¹) Only available when Redis is used.
²) Only available when MySQL or PostgreSQL are used.
³) Only available when [Instrumenting Request Queueing Time](#instrumenting-request-queueing-time) is set up.
⁴) Only available when Dalli is used.

#### ActiveRecord Connection Pool Metrics

This
collects ActiveRecord connection pool metrics.

It supports injection of custom labels and the connection config options (`username`, `database`, `host`, `port`) as labels.

For Puma single mode:

```ruby
# in puma.rb
require 'prometheus_exporter/instrumentation'
PrometheusExporter::Instrumentation::ActiveRecord.start(
  custom_labels: { type: "puma_single_mode" }, # optional params
  config_labels: [:database, :host] # optional params
)
```

For Puma cluster mode:

```ruby
# in puma.rb
on_worker_boot do
  require 'prometheus_exporter/instrumentation'
  PrometheusExporter::Instrumentation::ActiveRecord.start(
    custom_labels: { type: "puma_worker" }, # optional params
    config_labels: [:database, :host] # optional params
  )
end
```

For Unicorn / Passenger:

```ruby
after_fork do |_server, _worker|
  require 'prometheus_exporter/instrumentation'
  PrometheusExporter::Instrumentation::ActiveRecord.start(
    custom_labels: { type: "unicorn_worker" }, # optional params
    config_labels: [:database, :host] # optional params
  )
end
```

For Sidekiq:

```ruby
Sidekiq.configure_server do |config|
  config.on :startup do
    require 'prometheus_exporter/instrumentation'
    PrometheusExporter::Instrumentation::ActiveRecord.start(
      custom_labels: { type: "sidekiq" }, # optional params
      config_labels: [:database, :host] # optional params
    )
  end
end
```

##### Metrics collected by ActiveRecord Instrumentation

| Type  | Name                                        | Description                           |
| ---   | ---                                         | ---                                   |
| Gauge | `active_record_connection_pool_connections` | Total connections in pool             |
| Gauge | `active_record_connection_pool_busy`        | Connections in use in pool            |
| Gauge | `active_record_connection_pool_dead`        | Dead connections in pool              |
| Gauge | `active_record_connection_pool_idle`        | Idle connections in pool              |
| Gauge | `active_record_connection_pool_waiting`     | Connection requests waiting           |
| Gauge | `active_record_connection_pool_size`        | Maximum allowed connection pool size  |

All metrics collected by the ActiveRecord integration include at least the following labels: `pid` (of the process the stats were collected in), `pool_name`, any labels included in the `config_labels` option (prefixed with `dbconfig_`, example: `dbconfig_host`), and all custom labels provided with the `custom_labels` option.

#### Per-process stats

You may also be interested in per-process stats. This collects memory and GC stats:

```ruby
# in an initializer
unless Rails.env.test?
  require 'prometheus_exporter/instrumentation'

  # this reports basic process stats like RSS and GC info
  PrometheusExporter::Instrumentation::Process.start(type: "master")
end

# in unicorn/puma/passenger be sure to run a new process instrumenter after fork
after_fork do
  require 'prometheus_exporter/instrumentation'
  PrometheusExporter::Instrumentation::Process.start(type: "web")
end
```

##### Metrics collected by Process Instrumentation

| Type    | Name                      | Description                                  |
| ---     | ---                       | ---                                          |
| Gauge   | `heap_free_slots`         | Free ruby heap slots                         |
| Gauge   | `heap_live_slots`         | Used ruby heap slots                         |
| Gauge   | `v8_heap_size`*           | Total JavaScript V8 heap size (bytes)        |
| Gauge   | `v8_used_heap_size`*      | Total used JavaScript V8 heap size (bytes)   |
| Gauge   | `v8_physical_size`*       | Physical size consumed by V8 heaps           |
| Gauge   | `v8_heap_count`*          | Number of V8 contexts running                |
| Gauge   | `rss`                     | Total RSS used by process                    |
| Counter | `major_gc_ops_total`      | Major GC operations by process               |
| Counter | `minor_gc_ops_total`      | Minor GC operations by process               |
| Counter | `allocated_objects_total` | Total number of allocated objects by process |

_Metrics marked with * are only collected when `MiniRacer` is defined._

Metrics collected by Process instrumentation include labels `type` (as given with the `type` option), `pid` (of the process the stats were collected in), and any custom labels given to `Process.start` with the `labels` option.

#### Sidekiq metrics

There are different kinds of Sidekiq metrics that can be collected. A recommended setup looks like this:

```ruby
Sidekiq.configure_server do |config|
  require 'prometheus_exporter/instrumentation'
  config.server_middleware do |chain|
    chain.add PrometheusExporter::Instrumentation::Sidekiq
  end
  config.death_handlers << PrometheusExporter::Instrumentation::Sidekiq.death_handler
  config.on :startup do
    PrometheusExporter::Instrumentation::Process.start type: 'sidekiq'
    PrometheusExporter::Instrumentation::SidekiqProcess.start
    PrometheusExporter::Instrumentation::SidekiqQueue.start
    PrometheusExporter::Instrumentation::SidekiqStats.start
  end
end
```

* The middleware and death handler will generate job-specific metrics (how many jobs ran? how many failed? how long did they take? how many are dead?
how many were restarted?).
* The [`Process`](#per-process-stats) metrics provide basic Ruby metrics.
* The `SidekiqProcess` metrics provide the concurrency and busy metrics for this process.
* The `SidekiqQueue` metrics provide size and latency for the queues run by this process.
* The `SidekiqStats` metrics provide general, global Sidekiq stats (size of Scheduled, Retries, Dead queues, total number of jobs, etc.).

For `SidekiqQueue`, if you run more than one process for the same queues, note that the same metrics will be exposed by all the processes, just as the `SidekiqStats` metrics will if you run more than one process of any kind. You might want to use `avg` or `max` when consuming their metrics.

An alternative would be to expose these metrics in a lone, long-lived process. Using a rake task, for example:

```ruby
task :sidekiq_metrics do
  server = PrometheusExporter::Server::WebServer.new
  server.start

  PrometheusExporter::Client.default = PrometheusExporter::LocalClient.new(collector: server.collector)

  PrometheusExporter::Instrumentation::SidekiqQueue.start(all_queues: true)
  PrometheusExporter::Instrumentation::SidekiqStats.start
  sleep
end
```

The `all_queues` parameter for `SidekiqQueue` will expose metrics for all queues.

Sometimes the Sidekiq server shuts down before it can send metrics generated right before the shutdown to the collector. Especially if you care about the `sidekiq_restarted_jobs_total` metric, it is a good idea to explicitly stop the client:

```ruby
Sidekiq.configure_server do |config|
  at_exit do
    PrometheusExporter::Client.default.stop(wait_timeout_seconds: 10)
  end
end
```

Custom labels can be added for individual jobs by defining a class method on the job class.
These labels will be added to all Sidekiq metrics written by the job:

```ruby
class WorkerWithCustomLabels
  def self.custom_labels
    { my_label: 'value-here', other_label: 'second-val' }
  end

  def perform; end
end
```

##### Metrics collected by Sidekiq Instrumentation

**PrometheusExporter::Instrumentation::Sidekiq**

| Type    | Name                           | Description                                                                  |
| ---     | ---                            | ---                                                                          |
| Summary | `sidekiq_job_duration_seconds` | Time spent in sidekiq jobs                                                   |
| Counter | `sidekiq_jobs_total`           | Total number of sidekiq jobs executed                                        |
| Counter | `sidekiq_restarted_jobs_total` | Total number of sidekiq jobs that we restarted because of a sidekiq shutdown |
| Counter | `sidekiq_failed_jobs_total`    | Total number of failed sidekiq jobs                                          |

All metrics have a `job_name` label and a `queue` label.

**PrometheusExporter::Instrumentation::Sidekiq.death_handler**

| Type    | Name                      | Description                       |
| ---     | ---                       | ---                               |
| Counter | `sidekiq_dead_jobs_total` | Total number of dead sidekiq jobs |

This metric has a `job_name` label and a `queue` label.

**PrometheusExporter::Instrumentation::SidekiqQueue**

| Type  | Name                            | Description                  |
| ---   | ---                             | ---                          |
| Gauge | `sidekiq_queue_backlog`         | Size of the sidekiq queue    |
| Gauge | `sidekiq_queue_latency_seconds` | Latency of the sidekiq queue |

Both metrics will have a `queue` label with the name of the
queue.

**PrometheusExporter::Instrumentation::SidekiqProcess**

| Type  | Name                          | Description                             |
| ---   | ---                           | ---                                     |
| Gauge | `sidekiq_process_busy`        | Number of busy workers for this process |
| Gauge | `sidekiq_process_concurrency` | Concurrency for this process            |

Both metrics will include the labels `labels`, `queues`, `quiet`, `tag`, `hostname` and `identity`, as returned by the [Sidekiq Processes API](https://github.com/mperham/sidekiq/wiki/API#processes).

**PrometheusExporter::Instrumentation::SidekiqStats**

| Type  | Name                            | Description                             |
| ---   | ---                             | ---                                     |
| Gauge | `sidekiq_stats_dead_size`       | Size of the dead queue                  |
| Gauge | `sidekiq_stats_enqueued`        | Number of enqueued jobs                 |
| Gauge | `sidekiq_stats_failed`          | Number of failed jobs                   |
| Gauge | `sidekiq_stats_processed`       | Total number of processed jobs          |
| Gauge | `sidekiq_stats_processes_size`  | Number of processes                     |
| Gauge | `sidekiq_stats_retry_size`      | Size of the retries queue               |
| Gauge | `sidekiq_stats_scheduled_size`  | Size of the scheduled queue             |
| Gauge | `sidekiq_stats_workers_size`    | Number of jobs actively being processed |

Based on the [Sidekiq Stats API](https://github.com/mperham/sidekiq/wiki/API#stats).

_See [Metrics collected by Process Instrumentation](#metrics-collected-by-process-instrumentation) for a list of metrics the Process instrumentation will produce._

#### Shoryuken metrics

For Shoryuken metrics (how many jobs ran? how many failed? how long did they take?
how many were restarted?)

```ruby
Shoryuken.configure_server do |config|
  config.server_middleware do |chain|
    require 'prometheus_exporter/instrumentation'
    chain.add PrometheusExporter::Instrumentation::Shoryuken
  end
end
```

##### Metrics collected by Shoryuken Instrumentation

| Type    | Name                             | Description                                                                      |
| ---     | ---                              | ---                                                                              |
| Counter | `shoryuken_job_duration_seconds` | Total time spent in shoryuken jobs                                               |
| Counter | `shoryuken_jobs_total`           | Total number of shoryuken jobs executed                                          |
| Counter | `shoryuken_restarted_jobs_total` | Total number of shoryuken jobs that we restarted because of a shoryuken shutdown |
| Counter | `shoryuken_failed_jobs_total`    | Total number of failed shoryuken jobs                                            |

All metrics have labels for `job_name` and `queue_name`.

#### Delayed Job plugin

In an initializer:

```ruby
unless Rails.env.test?
  require 'prometheus_exporter/instrumentation'
  PrometheusExporter::Instrumentation::DelayedJob.register_plugin
end
```

##### Metrics collected by Delayed Job Instrumentation

| Type    | Name                                      | Description                                                        | Labels     |
| ---     | ---                                       | ---                                                                | ---        |
| Counter | `delayed_job_duration_seconds`            | Total time spent in delayed jobs                                   | `job_name` |
| Counter | `delayed_job_latency_seconds_total`       | Total delayed jobs latency                                         | `job_name` |
| Counter | `delayed_jobs_total`                      | Total number of delayed jobs executed                              | `job_name` |
| Gauge   | `delayed_jobs_enqueued`                   | Number of enqueued delayed jobs                                    | -          |
| Gauge   | `delayed_jobs_pending`                    | Number of pending delayed jobs                                     | -          |
| Counter | `delayed_failed_jobs_total`               | Total number of failed delayed jobs executed                       | `job_name` |
| Counter | `delayed_jobs_max_attempts_reached_total` | Total number of delayed jobs that reached max attempts             | -          |
| Summary | `delayed_job_duration_seconds_summary`    | Summary of the time it takes jobs to execute                       | `status`   |
| Summary | `delayed_job_attempts_summary`            | Summary of the amount of attempts it takes delayed jobs to succeed | -          |

All metrics have labels for `job_name` and `queue_name`.
`delayed_job_latency_seconds_total` takes Delayed Job's [sleep_delay](https://github.com/collectiveidea/delayed_job#:~:text=If%20no%20jobs%20are%20found%2C%20the%20worker%20sleeps%20for%20the%20amount%20of%20time%20specified%20by%20the%20sleep%20delay%20option.%20Set%20Delayed%3A%3AWorker.sleep_delay%20%3D%2060%20for%20a%2060%20second%20sleep%20time.) parameter into account, so be aware of this if you are looking for high latency precision.

#### Hutch Message Processing Tracer

Capture [Hutch](https://github.com/gocardless/hutch) metrics (how many jobs ran? how many failed?
how long did they take?)

```ruby
unless Rails.env.test?
  require 'prometheus_exporter/instrumentation'
  Hutch::Config.set(:tracer, PrometheusExporter::Instrumentation::Hutch)
end
```

##### Metrics collected by Hutch Instrumentation

| Type    | Name                         | Description                             |
| ---     | ---                          | ---                                     |
| Counter | `hutch_job_duration_seconds` | Total time spent in hutch jobs          |
| Counter | `hutch_jobs_total`           | Total number of hutch jobs executed     |
| Counter | `hutch_failed_jobs_total`    | Total number of failed hutch jobs       |

All metrics have a `job_name` label.

#### Instrumenting Request Queueing Time

Request queueing is defined as the time it takes for a request to reach your application (instrumented by this `prometheus_exporter`) from farther upstream (such as your load balancer). A high queueing time usually means that your backend cannot handle all the incoming requests in time, so they queue up (i.e. you should check whether you need to add more capacity).

As this metric starts before `prometheus_exporter` can handle the request, you must add a specific HTTP header as early in your infrastructure as possible (we recommend your load balancer or reverse proxy).

The Amazon Application Load Balancer [request tracing header](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-request-tracing.html) is natively supported. If you are using another upstream entrypoint, you may configure your HTTP server / load balancer to add a header `X-Request-Start: t=<MSEC>` when passing the request upstream. Please keep in mind the request start time is reported as epoch time (in seconds) and lacks precision, which may introduce additional latency in reported metrics.
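As an illustrative sketch (not part of this README, so verify it against your own setup), an nginx reverse proxy can inject this header using its built-in `$msec` variable, which holds the current epoch time in seconds with millisecond resolution; the upstream name `app` below is a placeholder:

```nginx
# stamp the request with its arrival time before proxying upstream
location / {
  proxy_set_header X-Request-Start "t=${msec}";
  proxy_pass http://app;  # "app" is a hypothetical upstream block
}
```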
For more information, please consult your software manual.\n\nHint: we aim to be API-compatible with the big APM solutions, so if you've got requests queueing time configured for them, it should be expected to also work with `prometheus_exporter`.\n\n### Puma metrics\n\nThe puma metrics are using the `Puma.stats` method and hence need to be started after the\nworkers has been booted and from a Puma thread otherwise the metrics won't be accessible.\nThe easiest way to gather this metrics is to put the following in your `puma.rb` config:\n\nFor Puma single mode\n```ruby\n# puma.rb config\nrequire 'prometheus_exporter/instrumentation'\n# optional check, avoids spinning up and down threads per worker\nif !PrometheusExporter::Instrumentation::Puma.started?\n  PrometheusExporter::Instrumentation::Puma.start\nend\n```\n\nFor Puma clustered mode\n```ruby\n# puma.rb config\nafter_worker_boot do\n  require 'prometheus_exporter/instrumentation'\n  # optional check, avoids spinning up and down threads per worker\n  if !PrometheusExporter::Instrumentation::Puma.started?\n    PrometheusExporter::Instrumentation::Puma.start\n  end\nend\n```\n\n#### Metrics collected by Puma Instrumentation\n\n| Type  | Name                        | Description                                                                                                         |\n| ---   | ---                         | ---                                                                                                                 |\n| Gauge | `puma_workers`              | Number of puma workers                                                                                              |\n| Gauge | `puma_booted_workers`       | Number of puma workers booted                                                                                       |\n| Gauge | `puma_old_workers`          | Number of old puma workers                                                                                          |\n| Gauge 
| `puma_running_threads`      | How many threads are spawned. A spawned thread may be busy processing a request or waiting for a new request        |\n| Gauge | `puma_request_backlog`      | Number of requests waiting to be processed by a puma thread                                                         |\n| Gauge | `puma_thread_pool_capacity` | Number of puma threads available at current scale                                                                   |\n| Gauge | `puma_max_threads`          | Number of puma threads at available at max scale                                                                    |\n| Gauge | `puma_busy_threads`         | Running - how many threads are waiting to receive work + how many requests are waiting for a thread to pick them up |\n\nAll metrics may have a `phase` label and all custom labels provided with the `labels` option.\n\n### Resque metrics\n\nThe resque metrics are using the `Resque.info` method, which queries Redis internally. To start monitoring your resque\ninstallation, you'll need to start the instrumentation:\n\n```ruby\n# e.g. 
config/initializers/resque.rb\nrequire 'prometheus_exporter/instrumentation'\nPrometheusExporter::Instrumentation::Resque.start\n```\n\n#### Metrics collected by Resque Instrumentation\n\n| Type  | Name                    | Description                            |\n| ---   | ---                     | ---                                    |\n| Gauge | `resque_processed_jobs` | Total number of processed Resque jobs  |\n| Gauge | `resque_failed_jobs`    | Total number of failed Resque jobs     |\n| Gauge | `resque_pending_jobs`   | Total number of pending Resque jobs    |\n| Gauge | `resque_queues`         | Total number of Resque queues          |\n| Gauge | `resque_workers`        | Total number of Resque workers running |\n| Gauge | `resque_working`        | Total number of Resque workers working |\n\n### GoodJob metrics\n\nThe metrics are generated from the database using the relevant scopes. To start monitoring your GoodJob\ninstallation, you'll need to start the instrumentation:\n\n```ruby\n# e.g. config/initializers/good_job.rb\nrequire 'prometheus_exporter/instrumentation'\nPrometheusExporter::Instrumentation::GoodJob.start\n```\n\n#### Metrics collected by GoodJob Instrumentation\n\n| Type  | Name                 | Description                             |\n| ---   |----------------------|-----------------------------------------|\n| Gauge | `good_job_scheduled` | Total number of scheduled GoodJob jobs. |\n| Gauge | `good_job_retried`   | Total number of retried GoodJob jobs.   |\n| Gauge | `good_job_queued`    | Total number of queued GoodJob jobs.    |\n| Gauge | `good_job_running`   | Total number of running GoodJob jobs.   |\n| Gauge | `good_job_finished`  | Total number of finished GoodJob jobs.  |\n| Gauge | `good_job_succeeded` | Total number of succeeded GoodJob jobs. 
|\n| Gauge | `good_job_discarded` | Total number of discarded GoodJob jobs. |\n\n### Unicorn process metrics\n\nIn order to gather metrics from unicorn processes, we use `raindrops`, which exposes `Raindrops::Linux.tcp_listener_stats` to gather information about active workers and queued requests. To start monitoring your unicorn processes, you'll need to know both the path to the unicorn PID file and the listen address (`pid_file` and `listen` in your unicorn config file).\n\nThen, run `prometheus_exporter` with the `--unicorn-master` and `--unicorn-listen-address` options:\n\n```bash\nprometheus_exporter --unicorn-master /var/run/unicorn.pid --unicorn-listen-address 127.0.0.1:3000\n\n# alternatively, if you're using unix sockets:\nprometheus_exporter --unicorn-master /var/run/unicorn.pid --unicorn-listen-address /var/run/unicorn.sock\n```\n\nNote: You must add the `raindrops` gem to your `Gemfile` or install it locally.\n\n#### Metrics collected by Unicorn Instrumentation\n\n| Type  | Name                      | Description                                                    |\n| ---   | ---                       | ---                                                            |\n| Gauge | `unicorn_workers`         | Number of unicorn workers                                      |\n| Gauge | `unicorn_active_workers`  | Number of active unicorn workers                               |\n| Gauge | `unicorn_request_backlog` | Number of requests waiting to be processed by a unicorn worker |\n\n### Custom type collectors\n\nIn some cases you may have custom metrics you want to ship to the collector in a batch. 
In this case you may still be interested in the base collector behavior, but would like to add your own special messages.\n\n```ruby\n# person_collector.rb\nclass PersonCollector \u003c PrometheusExporter::Server::TypeCollector\n  def initialize\n    @oldies = PrometheusExporter::Metric::Counter.new(\"oldies\", \"old people\")\n    @youngies = PrometheusExporter::Metric::Counter.new(\"youngies\", \"young people\")\n  end\n\n  def type\n    \"person\"\n  end\n\n  def collect(obj)\n    if obj[\"age\"] \u003e 21\n      @oldies.observe(1)\n    else\n      @youngies.observe(1)\n    end\n  end\n\n  def metrics\n    [@oldies, @youngies]\n  end\nend\n```\n\nShipping metrics is then done via:\n\n```ruby\nPrometheusExporter::Client.default.send_json(type: \"person\", age: 40)\n```\n\nTo load the custom collector run:\n\n```\n$ bundle exec prometheus_exporter -a person_collector.rb\n```\n\n#### Global metrics in a custom type collector\n\nCustom type collectors are the ideal place to collect global metrics, such as user/article counts and connection counts. The custom type collector runs inside the collector, which usually runs in the prometheus exporter process.\n\nOut of the box, we try to keep the prometheus exporter as lean as possible; we do not load all Rails dependencies, so you won't have access to your models. You can always ensure Rails is loaded in your custom type collector with:\n\n```ruby\nunless defined? Rails\n  require File.expand_path(\"../../config/environment\", __FILE__)\nend\n```\n\nThen you can collect the metrics you need on demand:\n\n```ruby\ndef metrics\n  user_count_gauge = PrometheusExporter::Metric::Gauge.new('user_count', 'number of users in the app')\n  user_count_gauge.observe User.count\n  [user_count_gauge]\nend\n```\n\nThe `metrics` method is called whenever Prometheus hits the `/metrics` HTTP endpoint, so it may make sense to introduce some type of caching. 
[lru_redux](https://github.com/SamSaffron/lru_redux) is the perfect gem for this job: you can use `LruRedux::TTL::Cache`, which will expire automatically after N seconds, thus saving multiple database queries.\n\n### Multi process mode with custom collector\n\nYou can opt for custom collector logic in a multi process environment.\n\nThis allows you to completely replace the collector logic.\n\nFirst, define a custom collector. It is important that you inherit from `PrometheusExporter::Server::CollectorBase` and provide custom implementations of the `#process` and `#prometheus_metrics_text` methods.\n\n```ruby\nclass MyCustomCollector \u003c PrometheusExporter::Server::CollectorBase\n  def initialize\n    @gauge1 = PrometheusExporter::Metric::Gauge.new(\"thing1\", \"I am thing 1\")\n    @gauge2 = PrometheusExporter::Metric::Gauge.new(\"thing2\", \"I am thing 2\")\n    @mutex = Mutex.new\n  end\n\n  def process(str)\n    obj = JSON.parse(str)\n    @mutex.synchronize do\n      if thing1 = obj[\"thing1\"]\n        @gauge1.observe(thing1)\n      end\n\n      if thing2 = obj[\"thing2\"]\n        @gauge2.observe(thing2)\n      end\n    end\n  end\n\n  def prometheus_metrics_text\n    @mutex.synchronize do\n      \"#{@gauge1.to_prometheus_text}\\n#{@gauge2.to_prometheus_text}\"\n    end\n  end\nend\n```\n\nNext, launch the exporter process:\n\n```\n$ bin/prometheus_exporter --collector examples/custom_collector.rb\n```\n\nIn your application, send the metrics you want:\n\n```ruby\nrequire 'prometheus_exporter/client'\n\nclient = PrometheusExporter::Client.new(host: 'localhost', port: 12345)\nclient.send_json(thing1: 122)\nclient.send_json(thing2: 12)\n```\n\nNow your exporter will echo the metrics:\n\n```\n$ curl localhost:12345/metrics\n# HELP collector_working Is the master process collector able to collect metrics\n# TYPE collector_working gauge\ncollector_working 1\n\n# HELP thing1 I am thing 1\n# TYPE thing1 gauge\nthing1 122\n\n# HELP thing2 I am thing 2\n# TYPE thing2 
gauge\nthing2 12\n```\n\n### GraphQL support\n\nGraphQL execution metrics are [supported](https://github.com/rmosolgo/graphql-ruby/blob/master/guides/queries/tracing.md#prometheus) and can be collected via the GraphQL collector, included in [graphql-ruby](https://github.com/rmosolgo/graphql-ruby).\n\n### Metrics default prefix / labels\n\n_This only works in single process mode._\n\nYou can specify a default prefix or default labels for metrics. For example:\n\n```ruby\n# Specify prefix for metric names\nPrometheusExporter::Metric::Base.default_prefix = \"ruby\"\n\n# Specify default labels for metrics\nPrometheusExporter::Metric::Base.default_labels = { \"hostname\" =\u003e \"app-server-01\" }\n\ncounter = PrometheusExporter::Metric::Counter.new(\"web_requests\", \"number of web requests\")\n\ncounter.observe(1, route: 'test/route')\ncounter.observe\n```\n\nThis will result in:\n\n```\n# HELP web_requests number of web requests\n# TYPE web_requests counter\nruby_web_requests{hostname=\"app-server-01\",route=\"test/route\"} 1\nruby_web_requests{hostname=\"app-server-01\"} 1\n```\n\n### Exporter Process Configuration\n\nWhen running the process for `prometheus_exporter` using `bin/prometheus_exporter`, there are several configurations that\ncan be passed in:\n\n```\nUsage: prometheus_exporter [options]\n    -p, --port INTEGER               Port exporter should listen on (default: 9394)\n    -b, --bind STRING                IP address exporter should listen on (default: localhost)\n    -t, --timeout INTEGER            Timeout in seconds for metrics endpoint (default: 2)\n        --prefix METRIC_PREFIX       Prefix to apply to all metrics (default: ruby_)\n        --label METRIC_LABEL         Label to apply to all metrics (default: {})\n    -c, --collector FILE             (optional) Custom collector to run\n    -a, --type-collector FILE        (optional) Custom type collectors to run in main collector\n    -v, --verbose\n    -g, --histogram                  Use histogram instead of 
summary for aggregations\n        --auth FILE                  (optional) enable basic authentication using a htpasswd FILE\n        --realm REALM                (optional) Use REALM for basic authentication (default: \"Prometheus Exporter\")\n        --unicorn-listen-address ADDRESS\n                                     (optional) Address where unicorn listens on (unix or TCP address)\n        --unicorn-master PID_FILE    (optional) PID file of unicorn master process to monitor unicorn\n```\n\n#### Example\n\nThe following will run the process with:\n- Port `8080` (default `9394`)\n- Bind to `0.0.0.0` (default `localhost`)\n- Timeout of `1 second` for the metrics endpoint (default `2 seconds`)\n- Metric prefix `foo_` (default `ruby_`)\n- Default labels `{environment: \"integration\", foo: \"bar\"}`\n\n```bash\nprometheus_exporter -p 8080 \\\n                    -b 0.0.0.0 \\\n                    -t 1 \\\n                    --label '{\"environment\": \"integration\", \"foo\": \"bar\"}' \\\n                    --prefix 'foo_'\n```\n\nYou can use the `-b` option to bind the `prometheus_exporter` web server to any IPv4 interface with `-b 0.0.0.0`,\nto any IPv6 interface with `-b ::`, or to any IPv4/IPv6 interface available on your host system with `-b ANY`.\n\n#### Enabling Basic Authentication\n\nIf you desire authentication on your `/metrics` route, you can enable basic authentication with the `--auth` option.\n\n```\n$ prometheus_exporter --auth my-htpasswd-file\n```\n\nAdditionally, the `--realm` option may be used to provide a customized realm for the challenge request.\n\nNotes:\n\n* You will need to create an `htpasswd`-formatted file beforehand which contains one or more `user:password` entries\n* Only the basic `crypt` encryption is currently supported\n\nA simple `htpasswd` file can be created with the Apache `htpasswd` utility; e.g.:\n\n```\n$ htpasswd -cdb my-htpasswd-file my-user my-unencrypted-password\n```\n\nThis will create a file named `my-htpasswd-file` which is 
suitable for use with the `--auth` option.\n\n### Client default labels\n\nYou can specify default labels for instrumentation metrics sent by a specific client. For example:\n\n```ruby\n# Specify when initializing PrometheusExporter::Client\nPrometheusExporter::Client.new(custom_labels: { hostname: 'app-server-01', app_name: 'app-01' })\n\n# Specify on an instance of PrometheusExporter::Client\nclient = PrometheusExporter::Client.new\nclient.custom_labels = { hostname: 'app-server-01', app_name: 'app-01' }\n```\n\nThis will result in:\n\n```\nhttp_requests_total{controller=\"home\",action=\"index\",hostname=\"app-server-01\",app_name=\"app-01\"} 2\nhttp_requests_total{hostname=\"app-server-01\",app_name=\"app-01\"} 1\n```\n\n### Client default host\n\nBy default, `PrometheusExporter::Client.default` connects to `localhost:9394`. If your setup requires a different target (e.g. when using `docker-compose`), you can change the default host and port by setting the environment variables `PROMETHEUS_EXPORTER_HOST` and `PROMETHEUS_EXPORTER_PORT`.\n\n### Histogram mode\n\nBy default, the built-in collectors will report aggregations as summaries. If you need to aggregate metrics across labels, you can switch from summaries to histograms:\n\n```\n$ prometheus_exporter --histogram\n```\n\nIn histogram mode, the same metrics will be collected but will be reported as histograms rather than summaries. 
This sacrifices some precision but allows aggregating metrics across actions and nodes using [`histogram_quantile`].\n\n[`histogram_quantile`]: https://prometheus.io/docs/prometheus/latest/querying/functions/#histogram_quantile\n\n### Histogram - custom buckets\n\nBy default, these buckets will be used:\n```\n[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5.0, 10.0].freeze\n```\nIf these are not enough, you can specify `default_buckets` like this:\n```\nHistogram.default_buckets = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2, 2.5, 3, 4, 5.0, 10.0, 12, 14, 15, 20, 25].freeze\n```\n\nBuckets specified on an instance take precedence over the defaults:\n\n```\nHistogram.default_buckets = [0.005, 0.01, 0.5].freeze\nbuckets = [0.1, 0.2, 0.3]\nhistogram = Histogram.new('test_buckets', 'I have specified buckets', buckets: buckets)\nhistogram.buckets =\u003e [0.1, 0.2, 0.3]\n```\n\n## Transport concerns\n\nPrometheus Exporter handles transport using a simple HTTP protocol. In multi process mode we avoid needing a large number of HTTP requests by using chunked encoding to send metrics. This means that a single HTTP channel can deliver 100s or even 1000s of metrics over a single HTTP session to the `/send-metrics` endpoint. All calls to `send` and `send_json` on the `PrometheusExporter::Client` class are **non-blocking** and batched.\n\nThe `/bench` directory has a simple benchmark, which is able to push through 10k messages in 500ms.\n\n## JSON generation and parsing\n\nThe `PrometheusExporter::Client` class has the method `#send_json`. This method, by default, will call `JSON.dump` on the object it receives. You may opt in to `oj` mode, where it uses the faster `Oj.dump(obj, mode: :compat)` for JSON serialization. But be warned that if you have custom objects that implement their own `to_json` methods, this may not work as expected. 
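To make that caveat concrete, here is a hypothetical object with its own `to_json`. `JSON.dump` honors it, and `Oj` in `:compat` mode aims to mimic that behavior, but custom serializers like this are exactly where the two can diverge:

```ruby
require 'json'

class Temperature
  def initialize(celsius)
    @celsius = celsius
  end

  # custom serialization honored by JSON.dump
  def to_json(*args)
    { celsius: @celsius }.to_json(*args)
  end
end

JSON.dump(reading: Temperature.new(21))
# => {"reading":{"celsius":21}}
```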
You can opt for oj serialization with `json_serializer: :oj`.\n\nWhen `PrometheusExporter::Server::Collector` parses your JSON, by default it will use the faster Oj deserializer if available. This happens because it only expects a simple Hash out of the box. You can opt in to the default JSON deserializer with `json_serializer: :json`.\n\n## Logging\n\n`PrometheusExporter::Client.default` will log to `STDERR`. To change this, you can pass your own logger:\n```ruby\nPrometheusExporter::Client.new(logger: Rails.logger)\nPrometheusExporter::Client.new(logger: Logger.new(STDOUT))\n```\n\nYou can also pass a log level (default is [`Logger::WARN`](https://ruby-doc.org/stdlib-3.0.1/libdoc/logger/rdoc/Logger.html)):\n```ruby\nPrometheusExporter::Client.new(log_level: Logger::DEBUG)\n```\n\n## Docker Usage\n\nYou can run the `prometheus_exporter` project using the official Docker image:\n\n```bash\ndocker pull discourse/prometheus_exporter:latest\n# or use a specific version\ndocker pull discourse/prometheus_exporter:x.x.x\n```\n\nThen start the container:\n\n```bash\ndocker run -p 9394:9394 discourse/prometheus_exporter\n```\n\nAdditional flags can be included:\n\n```\ndocker run -p 9394:9394 discourse/prometheus_exporter --verbose --prefix=myapp\n```\n\n## Docker/Kubernetes Healthcheck\n\nA `/ping` endpoint which only returns `PONG` is available so you can run container healthchecks:\n\nExample:\n\n```yml\nservices:\n  rails-exporter:\n    command:\n      - bin/prometheus_exporter\n      - -b\n      - 0.0.0.0\n    healthcheck:\n      test: [\"CMD\", \"curl\", \"--silent\", \"--show-error\", \"--fail\", \"--max-time\", \"3\", \"http://0.0.0.0:9394/ping\"]\n      timeout: 3s\n      interval: 10s\n      retries: 5\n```\n\n## Contributing\n\nBug reports and pull requests are welcome on GitHub at https://github.com/discourse/prometheus_exporter. 
This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.\n\n## License\n\nThe gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).\n\n## Code of Conduct\n\nEveryone interacting in the PrometheusExporter project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/discourse/prometheus_exporter/blob/master/CODE_OF_CONDUCT.md).\n","funding_links":[],"categories":["Ruby"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdiscourse%2Fprometheus_exporter","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdiscourse%2Fprometheus_exporter","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdiscourse%2Fprometheus_exporter/lists"}