https://github.com/risingstack/opentracing-metrics-tracer
Exports cross-process metrics via OpenTracing to Prometheus.
- Host: GitHub
- URL: https://github.com/risingstack/opentracing-metrics-tracer
- Owner: RisingStack
- License: mit
- Created: 2017-09-02T10:21:57.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2018-03-06T07:01:15.000Z (over 7 years ago)
- Last Synced: 2025-04-10T06:41:41.650Z (6 months ago)
- Topics: node, nodejs, opentracing, prometheus, prometheus-exporter, tracer
- Language: JavaScript
- Size: 74.2 KB
- Stars: 13
- Watchers: 3
- Forks: 8
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
README
# opentracing-metrics-tracer
[![Build Status](https://travis-ci.org/RisingStack/opentracing-metrics-tracer.svg?branch=master)](https://travis-ci.org/RisingStack/opentracing-metrics-tracer)
Exports cross-process metrics via OpenTracing instrumentation to reporters, currently Prometheus.
It can measure operation characteristics in a distributed system such as a microservices architecture. It also makes it possible to reverse engineer the infrastructure topology, because each metric is labeled with the initiating (parent) service.
## Available Reporters
- [Prometheus](https://prometheus.io/) via [prom-client](https://github.com/siimon/prom-client)
## Getting started
```js
const MetricsTracer = require('@risingstack/opentracing-metrics-tracer')
const prometheusReporter = new MetricsTracer.PrometheusReporter()
const metricsTracer = new MetricsTracer('my-service', [prometheusReporter])

// Instrument
const span = metricsTracer.startSpan('my-operation')
span.finish()

// ...

// Expose the collected metrics, for example on an Express route
app.get('/metrics', (req, res) => {
  res.set('Content-Type', MetricsTracer.PrometheusReporter.Prometheus.register.contentType)
  res.end(prometheusReporter.metrics())
})
```

### With auto instrumentation and multiple tracers
Check out: https://github.com/RisingStack/opentracing-auto
```js
// Prometheus metrics tracer
const MetricsTracer = require('@risingstack/opentracing-metrics-tracer')
const prometheusReporter = new MetricsTracer.PrometheusReporter()
const metricsTracer = new MetricsTracer('my-service', [prometheusReporter])

// Jaeger tracer (classic distributed tracing)
const jaeger = require('jaeger-client')
const UDPSender = require('jaeger-client/dist/src/reporters/udp_sender').default
const sampler = new jaeger.RateLimitingSampler(1)
const reporter = new jaeger.RemoteReporter(new UDPSender())
const jaegerTracer = new jaeger.Tracer('my-server-pg', reporter, sampler)

// Auto instrumentation
const Instrument = require('@risingstack/opentracing-auto')
const instrument = new Instrument({
  tracers: [metricsTracer, jaegerTracer]
})

// Rest of your code
const express = require('express')
const app = express()

app.get('/metrics', (req, res) => {
  res.set('Content-Type', MetricsTracer.PrometheusReporter.Prometheus.register.contentType)
  res.end(prometheusReporter.metrics())
})
```

### Example
See [example server](/example/server.js).
```sh
node example/server
curl http://localhost:3000
curl http://localhost:3000/metrics
```

## API
`const Tracer = require('@risingstack/opentracing-metrics-tracer')`
### new Tracer(serviceKey, [reporter1, reporter2, ...])
- **serviceKey** *String*, *required*, unique key that identifies a specific type of service *(for example: my-frontend-api)*
- **reporters** *Array of reporters*, *optional*, *default:* `[]`

The returned instance is an [OpenTracing](https://github.com/opentracing/opentracing-javascript) compatible tracer; for the complete API, check out the official [documentation](https://opentracing-javascript.surge.sh/).
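Since the returned tracer follows the OpenTracing interface, spans can be created and related in the usual way. A brief sketch (the service key and operation names are only examples; `childOf` is the standard OpenTracing reference):

```js
const MetricsTracer = require('@risingstack/opentracing-metrics-tracer')

const reporter = new MetricsTracer.PrometheusReporter()
const tracer = new MetricsTracer('my-frontend-api', [reporter])

// Standard OpenTracing span lifecycle: start, create a child, finish both
const parent = tracer.startSpan('handle_request')
const child = tracer.startSpan('query_database', { childOf: parent.context() })
child.finish()
parent.finish()
```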
### new Tracer.PrometheusReporter([opts])
- **opts** *Object*, *optional*
- **opts.ignoreTags** *Object*, *optional*
- Example: `{ ignoreTags: { [Tags.HTTP_URL]: /\/metrics$/ } }` to ignore the Prometheus scraper

Creates a new Prometheus reporter.
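As a sketch of wiring that option up (the `Tags` constants here are assumed to come from the `opentracing` package):

```js
const { Tags } = require('opentracing')
const MetricsTracer = require('@risingstack/opentracing-metrics-tracer')

// Drop spans whose http.url tag ends with /metrics, so the Prometheus
// scraper's own requests are not counted in the histograms
const prometheusReporter = new MetricsTracer.PrometheusReporter({
  ignoreTags: {
    [Tags.HTTP_URL]: /\/metrics$/
  }
})
const metricsTracer = new MetricsTracer('my-service', [prometheusReporter])
```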
### Tracer.PrometheusReporter.Prometheus
Exposed [prom-client](https://github.com/siimon/prom-client).
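Because the underlying prom-client is exposed, its default registry can also hold your own application-level metrics. A rough sketch, assuming a prom-client version that accepts a configuration object in its metric constructors; the counter below is purely illustrative:

```js
const MetricsTracer = require('@risingstack/opentracing-metrics-tracer')
const Prometheus = MetricsTracer.PrometheusReporter.Prometheus

// A custom counter registered in prom-client's default registry
const jobsProcessed = new Prometheus.Counter({
  name: 'jobs_processed_total',
  help: 'Number of background jobs processed'
})
jobsProcessed.inc()
```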
## Reporters
### Prometheus Reporter
Exposes metrics in Prometheus format via [prom-client](https://github.com/siimon/prom-client)
#### Metrics
- [operation_duration_seconds](#operation_duration_seconds)
- [http_request_handler_duration_seconds](#http_request_handler_duration_seconds)

##### operation_duration_seconds
Always measured.
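The `parent_service` label below comes from cross-process context propagation: the caller injects its span context into the request, and the callee extracts it and uses it as the parent. A rough single-process sketch of that flow, assuming the tracer supports the standard OpenTracing `inject`/`extract` API (service keys and operation names are only examples):

```js
const { FORMAT_HTTP_HEADERS } = require('opentracing')
const MetricsTracer = require('@risingstack/opentracing-metrics-tracer')

// Caller side ("my-parent-service"): no reporter needed for this sketch
const parentTracer = new MetricsTracer('my-parent-service')
const clientSpan = parentTracer.startSpan('http_request')
const headers = {}
parentTracer.inject(clientSpan.context(), FORMAT_HTTP_HEADERS, headers)

// Callee side ("my-service"): extract the remote context and use it as the
// parent, so the recorded histogram carries parent_service="my-parent-service"
const prometheusReporter = new MetricsTracer.PrometheusReporter()
const childTracer = new MetricsTracer('my-service', [prometheusReporter])
const parentContext = childTracer.extract(FORMAT_HTTP_HEADERS, headers)
const serverSpan = childTracer.startSpan('my-operation', { childOf: parentContext })
serverSpan.finish()
clientSpan.finish()
```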
Sample output (two distributed services communicating over the network):

```
# HELP operation_duration_seconds Duration of operations in second
# TYPE operation_duration_seconds histogram
operation_duration_seconds_bucket{le="0.005",parent_service="my-parent-service",name="my-operation" 0
operation_duration_seconds_bucket{le="0.01",parent_service="my-parent-service",name="my-operation" 0
operation_duration_seconds_bucket{le="0.025",parent_service="my-parent-service",name="my-operation" 0
operation_duration_seconds_bucket{le="0.05",parent_service="my-parent-service",name="my-operation" 0
operation_duration_seconds_bucket{le="0.1",parent_service="my-parent-service",name="my-operation" 1
operation_duration_seconds_bucket{le="0.25",parent_service="my-parent-service",name="my-operation" 1
operation_duration_seconds_bucket{le="0.5",parent_service="my-parent-service",name="my-operation" 2
operation_duration_seconds_bucket{le="1",parent_service="my-parent-service",name="my-operation" 2
operation_duration_seconds_bucket{le="2.5",parent_service="my-parent-service",name="my-operation" 2
operation_duration_seconds_bucket{le="5",parent_service="my-parent-service",name="my-operation" 2
operation_duration_seconds_bucket{le="10",parent_service="my-parent-service",name="my-operation" 2
operation_duration_seconds_bucket{le="+Inf",parent_service="my-parent-service",name="my-operation" 2
operation_duration_seconds_sum{parent_service="my-parent-service",name="my-operation" 0.4
operation_duration_seconds_count{parent_service="my-parent-service",name="my-operation" 2
```

##### http_request_handler_duration_seconds
Measured only when the span is tagged with `SPAN_KIND_RPC_SERVER` and any of `HTTP_URL`, `HTTP_METHOD` or `HTTP_STATUS_CODE`.
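For instance, a manually created span might be tagged like this so that it is recorded in this histogram (the tag constants come from the `opentracing` package; the URL and operation name are only examples):

```js
const { Tags } = require('opentracing')
const MetricsTracer = require('@risingstack/opentracing-metrics-tracer')

const prometheusReporter = new MetricsTracer.PrometheusReporter()
const metricsTracer = new MetricsTracer('my-service', [prometheusReporter])

// Tag the span as a server-side RPC span with HTTP metadata so the
// reporter measures it in http_request_handler_duration_seconds
const span = metricsTracer.startSpan('http_request')
span.setTag(Tags.SPAN_KIND, Tags.SPAN_KIND_RPC_SERVER)
span.setTag(Tags.HTTP_URL, 'http://localhost:3000/users')
span.setTag(Tags.HTTP_METHOD, 'GET')
span.setTag(Tags.HTTP_STATUS_CODE, 200)
span.finish()
```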
Sample output:
```
# HELP http_request_handler_duration_seconds Duration of HTTP requests in second
# TYPE http_request_handler_duration_seconds histogram
http_request_handler_duration_seconds_bucket{le="0.005",parent_service="my-parent-service",method="GET",code="200",name="http_request" 0
http_request_handler_duration_seconds_bucket{le="0.01",parent_service="my-parent-service",method="GET",code="200",name="http_request" 0
http_request_handler_duration_seconds_bucket{le="0.025",parent_service="my-parent-service",method="GET",code="200",name="http_request" 0
http_request_handler_duration_seconds_bucket{le="0.05",parent_service="my-parent-service",method="GET",code="200",name="http_request" 0
http_request_handler_duration_seconds_bucket{le="0.1",parent_service="my-parent-service",method="GET",code="200",name="http_request" 1
http_request_handler_duration_seconds_bucket{le="0.25",parent_service="my-parent-service",method="GET",code="200",name="http_request" 1
http_request_handler_duration_seconds_bucket{le="0.5",parent_service="my-parent-service",method="GET",code="200",name="http_request" 2
http_request_handler_duration_seconds_bucket{le="1",parent_service="my-parent-service",method="GET",code="200",name="http_request" 2
http_request_handler_duration_seconds_bucket{le="2.5",parent_service="my-parent-service",method="GET",code="200",name="http_request" 2
http_request_handler_duration_seconds_bucket{le="5",parent_service="my-parent-service",method="GET",code="200",name="http_request" 2
http_request_handler_duration_seconds_bucket{le="10",parent_service="my-parent-service",method="GET",code="200",name="http_request" 2
http_request_handler_duration_seconds_bucket{le="+Inf",parent_service="my-parent-service",method="GET",code="200",name="http_request" 2
http_request_handler_duration_seconds_sum{parent_service="my-parent-service",method="GET",code="200",name="http_request" 0.4
http_request_handler_duration_seconds_count{parent_service="my-parent-service",method="GET",code="200",name="http_request" 2
```

## Future and ideas
This library is new; in the future, it could measure many more useful and specific metrics.
Please share your ideas in the form of issues or pull requests.