{"id":22006094,"url":"https://github.com/prometheus/client_ruby","last_synced_at":"2025-05-13T17:10:28.187Z","repository":{"id":39420778,"uuid":"10052994","full_name":"prometheus/client_ruby","owner":"prometheus","description":"Prometheus instrumentation library for Ruby applications","archived":false,"fork":false,"pushed_at":"2025-02-02T20:00:09.000Z","size":545,"stargazers_count":543,"open_issues_count":17,"forks_count":148,"subscribers_count":18,"default_branch":"main","last_synced_at":"2025-05-08T11:45:15.390Z","etag":null,"topics":["middleware","prometheus","prometheus-client-library","rack","rack-middleware","ruby","ruby-client"],"latest_commit_sha":null,"homepage":"","language":"Ruby","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/prometheus.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2013-05-14T10:40:26.000Z","updated_at":"2025-04-23T20:32:02.000Z","dependencies_parsed_at":"2023-01-31T11:46:09.548Z","dependency_job_id":"b619c4be-7ff0-4e0f-afeb-e202d43c67cd","html_url":"https://github.com/prometheus/client_ruby","commit_stats":{"total_commits":291,"total_committers":54,"mean_commits":5.388888888888889,"dds":0.6804123711340206,"last_synced_commit":"5514b2be2a417c5fba02479c7ba78096b7eaf4cf"},"previous_names":[],"tags_count":30,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/prometheus%2Fclient_ruby","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/prometheus%2Fclient_ruby/tag
s","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/prometheus%2Fclient_ruby/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/prometheus%2Fclient_ruby/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/prometheus","download_url":"https://codeload.github.com/prometheus/client_ruby/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253418183,"owners_count":21905326,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["middleware","prometheus","prometheus-client-library","rack","rack-middleware","ruby","ruby-client"],"created_at":"2024-11-30T01:08:58.240Z","updated_at":"2025-05-13T17:10:28.140Z","avatar_url":"https://github.com/prometheus.png","language":"Ruby","readme":"# Prometheus Ruby Client\n\nA suite of instrumentation metric primitives for Ruby that can be exposed\nthrough a HTTP interface. 
Intended to be used together with a\n[Prometheus server][1].\n\n[![Gem Version][4]](http://badge.fury.io/rb/prometheus-client)\n[![Build Status][3]](https://circleci.com/gh/prometheus/client_ruby/tree/main.svg?style=svg)\n\n## Usage\n\n### Installation\n\nFor a global installation run `gem install prometheus-client`.\n\nIf you're using [Bundler](https://bundler.io/) add `gem \"prometheus-client\"` to your `Gemfile`.\nMake sure to run `bundle install` afterwards.\n\n### Overview\n\n```ruby\nrequire 'prometheus/client'\n\n# returns a default registry\nprometheus = Prometheus::Client.registry\n\n# create a new counter metric\nhttp_requests = Prometheus::Client::Counter.new(:http_requests, docstring: 'A counter of HTTP requests made')\n# register the metric\nprometheus.register(http_requests)\n\n# equivalent helper function\nhttp_requests = prometheus.counter(:http_requests, docstring: 'A counter of HTTP requests made')\n\n# start using the counter\nhttp_requests.increment\n```\n\n### Rack middleware\n\nThere are two [Rack][2] middlewares available, one to expose a metrics HTTP\nendpoint to be scraped by a Prometheus server ([Exporter][9]) and one to trace all HTTP\nrequests ([Collector][10]).\n\nIt's highly recommended to enable gzip compression for the metrics endpoint,\nfor example by including the `Rack::Deflater` middleware.\n\n```ruby\n# config.ru\n\nrequire 'rack'\nrequire 'prometheus/middleware/collector'\nrequire 'prometheus/middleware/exporter'\n\nuse Rack::Deflater\nuse Prometheus::Middleware::Collector\nuse Prometheus::Middleware::Exporter\n\nrun -\u003e(_) { [200, {'content-type' =\u003e 'text/html'}, ['OK']] }\n```\n\nStart the server and have a look at the metrics endpoint:\n[http://localhost:5123/metrics](http://localhost:5123/metrics).\n\nFor further instructions and other scripts to get started, have a look at the\nintegrated [example application](examples/rack/README.md).\n\n### Pushgateway\n\nThe Ruby client can also be used to push its collected 
metrics to a\n[Pushgateway][8]. This comes in handy with batch jobs or in other scenarios\nwhere it's not possible or feasible to let a Prometheus server scrape a Ruby\nprocess. TLS and HTTP basic authentication are supported.\n\n```ruby\nrequire 'prometheus/client'\nrequire 'prometheus/client/push'\n\nregistry = Prometheus::Client.registry\n# ... register some metrics, set/increment/observe/etc. their values\n\n# push the registry state to the default gateway\nPrometheus::Client::Push.new(job: 'my-batch-job').add(registry)\n\n# optional: specify a custom gateway, and a grouping key that uniquely identifies a job instance.\n#\n# Note: the labels you use in the grouping key must not conflict with labels set on the\n# metrics being pushed. If they do, an error will be raised.\nPrometheus::Client::Push.new(\n  job: 'my-batch-job',\n  gateway: 'https://example.domain:1234',\n  grouping_key: { instance: 'some-instance', extra_key: 'foobar' }\n).add(registry)\n\n# If you want to replace any previously pushed metrics for a given grouping key,\n# use the #replace method.\n#\n# Unlike #add, this will completely replace the metrics under the specified grouping key\n# (i.e. 
anything currently present in the pushgateway for the specified grouping key, but\n# not present in the registry for that grouping key will be removed).\n#\n# See https://github.com/prometheus/pushgateway#put-method for a full explanation.\nPrometheus::Client::Push.new(job: 'my-batch-job').replace(registry)\n\n# If you want to delete all previously pushed metrics for a given grouping key,\n# use the #delete method.\nPrometheus::Client::Push.new(job: 'my-batch-job').delete\n```\n\n#### Basic authentication\n\nBy design, `Prometheus::Client::Push` doesn't read credentials for HTTP basic\nauthentication when they are passed in via the gateway URL using the\n`http://user:password@example.com:9091` syntax, and will in fact raise an error if they're\nsupplied that way.\n\nThe reason for this is that when using that syntax, the username and password\nhave to follow the usual rules for URL encoding of characters [per RFC\n3986](https://datatracker.ietf.org/doc/html/rfc3986#section-2.1).\n\nRather than place the burden of correctly performing that encoding on users of this gem,\nwe decided to have a separate method for supplying HTTP basic authentication credentials,\nwith no requirement to URL encode the characters in them.\n\nInstead of passing credentials like this:\n\n```ruby\npush = Prometheus::Client::Push.new(job: \"my-job\", gateway: \"http://user:password@localhost:9091\")\n```\n\nplease pass them like this:\n\n```ruby\npush = Prometheus::Client::Push.new(job: \"my-job\", gateway: \"http://localhost:9091\")\npush.basic_auth(\"user\", \"password\")\n```\n\n## Metrics\n\nThe following metric types are currently supported.\n\n### Counter\n\nCounter is a metric that exposes merely a sum or tally of things.\n\n```ruby\ncounter = Prometheus::Client::Counter.new(:service_requests_total, docstring: '...', labels: [:service])\n\n# increment the counter for a given label set\ncounter.increment(labels: { service: 'foo' })\n\n# increment by a given value\ncounter.increment(by: 
5, labels: { service: 'bar' })\n\n# get current value for a given label set\ncounter.get(labels: { service: 'bar' })\n# =\u003e 5\n```\n\n### Gauge\n\nGauge is a metric that exposes merely an instantaneous value or some snapshot\nthereof.\n\n```ruby\ngauge = Prometheus::Client::Gauge.new(:room_temperature_celsius, docstring: '...', labels: [:room])\n\n# set a value\ngauge.set(21.534, labels: { room: 'kitchen' })\n\n# retrieve the current value for a given label set\ngauge.get(labels: { room: 'kitchen' })\n# =\u003e 21.534\n\n# increment the value (default is 1)\ngauge.increment(labels: { room: 'kitchen' })\n# =\u003e 22.534\n\n# decrement the value by a given value\ngauge.decrement(by: 5, labels: { room: 'kitchen' })\n# =\u003e 17.534\n```\n\n### Histogram\n\nA histogram samples observations (usually things like request durations or\nresponse sizes) and counts them in configurable buckets. It also provides a sum\nof all observed values.\n\n```ruby\nhistogram = Prometheus::Client::Histogram.new(:service_latency_seconds, docstring: '...', labels: [:service])\n\n# record a value\nhistogram.observe(Benchmark.realtime { service.call(arg) }, labels: { service: 'users' })\n\n# retrieve the current bucket values\nhistogram.get(labels: { service: 'users' })\n# =\u003e { 0.005 =\u003e 3, 0.01 =\u003e 15, 0.025 =\u003e 18, ..., 2.5 =\u003e 42, 5 =\u003e 42, 10 =\u003e 42 }\n```\n\nHistograms provide default buckets of `[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]`.\n\nYou can specify your own buckets, either explicitly, or using the `Histogram.linear_buckets`\nor `Histogram.exponential_buckets` methods to define regularly spaced buckets.\n\n### Summary\n\nSummary, similar to histograms, is an accumulator for samples. 
It captures\nnumeric data.\n\nFor now, only `sum` and `count` (the number of observations) are supported, no actual quantiles.\n\n```ruby\nsummary = Prometheus::Client::Summary.new(:service_latency_seconds, docstring: '...', labels: [:service])\n\n# record a value\nsummary.observe(Benchmark.realtime { service.call() }, labels: { service: 'database' })\n\n# retrieve the current sum and count values\nsummary_value = summary.get(labels: { service: 'database' })\nsummary_value['sum'] # =\u003e 123.45\nsummary_value['count'] # =\u003e 100\n```\n\n## Labels\n\nAll metrics can have labels, allowing grouping of related time series.\n\nLabels are an extremely powerful feature, but one that must be used with care.\nRefer to the best practices on [naming](https://prometheus.io/docs/practices/naming/) and\n[labels](https://prometheus.io/docs/practices/instrumentation/#use-labels).\n\nMost importantly, avoid labels that can have a large number of possible values (high\ncardinality). For example, an HTTP Status Code is a good label. A User ID is **not**.\n\nLabels are specified optionally when updating metrics, as a hash of `label_name =\u003e value`.\nRefer to [the Prometheus documentation](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels)\nas to what's a valid `label_name`.\n\nIn order for a metric to accept labels, their names must be specified when first initializing\nthe metric. 
Then, when the metric is updated, all the specified labels must be present.\n\nExample:\n\n```ruby\nhttp_requests_total = Counter.new(:http_requests_total, docstring: '...', labels: [:service, :status_code])\n\n# increment the counter for a given label set\nhttp_requests_total.increment(labels: { service: \"my_service\", status_code: response.status_code })\n```\n\n### Pre-set Label Values\n\nYou can also \"pre-set\" some of these label values, if they'll always be the same, so you don't\nneed to specify them every time:\n\n```ruby\nhttp_requests_total = Counter.new(:http_requests_total,\n                                  docstring: '...',\n                                  labels: [:service, :status_code],\n                                  preset_labels: { service: \"my_service\" })\n\n# increment the counter for a given label set\nhttp_requests_total.increment(labels: { status_code: response.status_code })\n```\n\n### `with_labels`\n\nSimilar to pre-setting labels, you can get a new instance of an existing metric object,\nwith a subset (or full set) of labels set, so that you can increment / observe the metric\nwithout having to specify the labels for every call.\n\nMoreover, if all the labels the metric can take have been pre-set, validation of the labels\nis done on the call to `with_labels`, and then skipped for each observation, which can\nlead to performance improvements. 
If you are incrementing a counter in a fast loop, you\ndefinitely want to be doing this.\n\nExamples:\n\n**Pre-setting labels for ease of use:**\n\n```ruby\n# in the metric definition:\nrecords_processed_total = registry.counter(:records_processed_total,\n                                           docstring: '...',\n                                           labels: [:service, :component],\n                                           preset_labels: { service: \"my_service\" })\n\n# in one-off calls, you'd specify the missing labels (component in this case)\nrecords_processed_total.increment(labels: { component: 'a_component' })\n\n# you can also have a \"view\" on this metric for a specific component where this label is\n# pre-set:\nclass MyComponent\n  def metric\n    @metric ||= records_processed_total.with_labels(component: \"my_component\")\n  end\n\n  def process\n    records.each do |record|\n      # process the record\n      metric.increment\n    end\n  end\nend\n```\n\n### `init_label_set`\n\nThe time series of a metric are not initialized until something happens. 
For counters, for example, this means that the time series do not exist until the counter is incremented for the first time.\n\nTo get around this problem, the client provides the `init_label_set` method that can be used to initialize the time series of a metric for a given label set.\n\n### Reserved labels\n\nThe following labels are reserved by the client library, and attempting to use them in a\nmetric definition will result in a\n`Prometheus::Client::LabelSetValidator::ReservedLabelError` being raised:\n\n  - `:job`\n  - `:instance`\n  - `:pid`\n\n## Data Stores\n\nThe data for all the metrics (the internal counters associated with each labelset)\nis stored in a global Data Store object, rather than in the metric objects themselves.\n(This \"storage\" is ephemeral and generally in-memory; it's not \"long-term storage\".)\n\nThe main reason to do this is that different applications may have different requirements\nfor their metrics storage. Applications running in pre-fork servers (like Unicorn, for\nexample), require a shared store between all the processes, to be able to report coherent\nnumbers. At the same time, other applications may not have this requirement but be very\nsensitive to performance, and would prefer instead a simpler, faster store.\n\nBy having a standardized and simple interface that metrics use to access this store,\nwe abstract away the details of storing the data from the specific needs of each metric.\nThis allows us to then simply swap around the stores based on the needs of different\napplications, with no changes to the rest of the client.\n\nThe client provides 3 built-in stores, but if none of these is ideal for your\nrequirements, you can easily make your own store and use that instead. 
More on this below.\n\n### Configuring which store to use.\n\nBy default, the Client uses the `Synchronized` store, which is a simple, thread-safe Store\nfor single-process scenarios.\n\nIf you need to use a different store, set it in the Client Config:\n\n```ruby\nPrometheus::Client.config.data_store = Prometheus::Client::DataStores::DataStore.new(store_specific_params)\n```\n\nNOTE: You **must** make sure to set the `data_store` before initializing any metrics.\nIf using Rails, you probably want to set up your Data Store in `config/application.rb`,\nor `config/environments/*`, both of which run before `config/initializers/*`.\n\nAlso note that `config.data_store` is set to an *instance* of a `DataStore`, not to the\nclass. This is so that the stores can receive parameters. Most of the built-in stores\ndon't require any, but `DirectFileStore` does, for example.\n\nWhen instantiating metrics, there is an optional `store_settings` attribute. This is used\nto set up store-specific settings for each metric. For most stores, this is not used, but\nfor multi-process stores, this is used to specify how to aggregate the values of each\nmetric across multiple processes. For the most part, this is used for Gauges, to specify\nwhether you want to report the `SUM`, `MAX`, `MIN`, or `MOST_RECENT` value observed across\nall processes. For almost all other cases, you'd leave the default (`SUM`). More on this in\nthe *Aggregation* section below.\n\nCustom stores may also accept extra parameters besides `:aggregation`. See the\ndocumentation of each store for more details.\n\n### Built-in stores\n\nThere are 3 built-in stores, with different trade-offs:\n\n- **Synchronized**: Default store. Thread safe, but not suitable for multi-process\n  scenarios (e.g. pre-fork servers, like Unicorn). 
Stores data in Hashes, with all accesses\n  protected by Mutexes.\n- **SingleThreaded**: Fastest store, but only suitable for single-threaded scenarios.\n  This store does not make any effort to synchronize access to its internal hashes, so\n  it's absolutely not thread safe.\n- **DirectFileStore**: Stores data in binary files, one file per process and per metric.\n  This is generally the recommended store to use with pre-fork servers and other\n  \"multi-process\" scenarios. There are some important caveats to using this store, so\n  please read the section below.\n\n### `DirectFileStore` caveats and things to keep in mind\n\nEach metric gets a file for each process, and manages its contents by storing keys and\nbinary floats next to them, and updating those floats directly at their offsets. When\nexporting metrics, it will find all the files that apply to each metric, read them,\nand aggregate them.\n\n**Aggregation of metrics**: Since there will be several files per metric (one per process),\nthese need to be aggregated to present a coherent view to Prometheus. Depending on your\nuse case, you may need to control how this works. When using this store,\neach Metric allows you to specify an `:aggregation` setting, defining how\nto aggregate the multiple possible values we can get for each labelset. By default,\nCounters, Histograms and Summaries are `SUM`med, and Gauges report all their values (one\nfor each process), tagged with a `pid` label. You can also select `SUM`, `MAX`, `MIN`, or\n`MOST_RECENT` for your gauges, depending on your use case.\n\nPlease note that the `MOST_RECENT` aggregation only works for gauges, and it does not\nallow the use of `increment` / `decrement`; you can only use `set`.\n\n**Memory Usage**: When scraped by Prometheus, this store will read all these files, get all\nthe values and aggregate them. We have noticed this can have a noticeable effect on memory\nusage for your app. 
We recommend you test this in a realistic usage scenario to make sure\nyou won't hit any memory limits your app may have.\n\n**Resetting your metrics on each run**: You should also make sure that the directory where\nyou store your metric files (specified when initializing the `DirectFileStore`) is emptied\nwhen your app starts. Otherwise, each app run will continue exporting the metrics from the\nprevious run.\n\nOne way to do this is to run code similar to the following as part of your\ninitialization:\n\n```ruby\nDir[\"#{app_path}/tmp/prometheus/*.bin\"].each do |file_path|\n  File.unlink(file_path)\nend\n```\n\nIf you are running in pre-fork servers (such as Unicorn or Puma with multiple processes),\nmake sure you do this **before** the server forks. Otherwise, each child process may delete\nfiles created by other processes on *this* run, instead of deleting old files.\n\n**Declare metrics before fork**: As well as deleting files before your process forks, you\nshould make sure to declare your metrics before forking too. Because the metric registry\nis held in memory, any metrics declared after forking will only be present in child\nprocesses where the code declaring them ran, and as a result may not be consistently\nexported when scraped (i.e. they will only appear when a child process that declared them\nis scraped).\n\nIf you're absolutely sure that every child process will run the metric declaration code,\nthen you won't run into this issue, but the simplest approach is to declare the metrics\nbefore forking.\n\n**Large numbers of files**: Because there is an individual file per metric and per process\n(which is done to optimize for observation performance), you may end up with a large number\nof files. 
We don't currently have a solution for this problem, but we're working on it.\n\n**Performance**: Even though this store saves data on disk, it's still much faster than\nwould probably be expected, because the files are never actually `fsync`ed, so the store\nnever blocks while waiting for disk. The kernel's page cache is incredibly efficient in\nthis regard. If in doubt, check the benchmark scripts described in the documentation for\ncreating your own stores and run them in your particular runtime environment to make sure\nthis provides adequate performance.\n\n### Building your own store, and stores other than the built-in ones.\n\nIf none of these stores is suitable for your requirements, you can easily make your own.\n\nThe interface and requirements of Stores are specified in detail in the `README.md`\nin the `client/data_stores` directory. This thoroughly documents how to make your own\nstore.\n\nThere are also links there to non-built-in stores created by others that may be useful,\neither as they are, or as a starting point for making your own.\n\n### Aggregation settings for multi-process stores\n\nIf you are in a multi-process environment (such as pre-fork servers like Unicorn), each\nprocess will probably keep its own counters, which need to be aggregated when receiving\na Prometheus scrape, to report coherent total numbers.\n\nFor Counters, Histograms and quantile-less Summaries this is simply a matter of\nsumming the values of each process.\n\nFor Gauges, however, this may not be the right thing to do, depending on what they're\nmeasuring. You might want to take the maximum or minimum value observed in any process,\nrather than the sum of all of them. 
By default, we export each process's individual\nvalue, with a `pid` label identifying each one.\n\nIf these defaults don't work for your use case, you should use the `store_settings`\nparameter when registering the metric, to specify an `:aggregation` setting.\n\n```ruby\nfree_disk_space = registry.gauge(:free_disk_space_bytes,\n                                docstring: \"Free disk space, in bytes\",\n                                store_settings: { aggregation: :max })\n```\n\nNOTE: This will only work if the store you're using supports the `:aggregation` setting.\nOf the built-in stores, only `DirectFileStore` does.\n\nAlso note that the `:aggregation` setting works for all metric types, not just for gauges.\nIt would be unusual to use it for anything other than gauges, but if your use-case\nrequires it, the store will respect your aggregation wishes.\n\n## Tests\n\nInstall necessary development gems with `bundle install` and run tests with\nrspec:\n\n```bash\nrake\n```\n\n[1]: https://github.com/prometheus/prometheus\n[2]: http://rack.github.io/\n[3]: https://circleci.com/gh/prometheus/client_ruby/tree/main.svg?style=svg\n[4]: https://badge.fury.io/rb/prometheus-client.svg\n[8]: https://github.com/prometheus/pushgateway\n[9]: lib/prometheus/middleware/exporter.rb\n[10]: lib/prometheus/middleware/collector.rb\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fprometheus%2Fclient_ruby","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fprometheus%2Fclient_ruby","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fprometheus%2Fclient_ruby/lists"}