Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/bitkill/prometheus-msk-discovery
🧿 discovers msk nodes and adds them to prometheus
- Host: GitHub
- URL: https://github.com/bitkill/prometheus-msk-discovery
- Owner: bitkill
- License: apache-2.0
- Created: 2022-10-12T23:02:57.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-10-07T08:50:12.000Z (about 1 month ago)
- Last Synced: 2024-11-02T00:27:58.481Z (7 days ago)
- Language: Go
- Homepage:
- Size: 127 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 9
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Prometheus AWS MSK discovery
Heavily based on [teralytics/prometheus-ecs-discovery](https://github.com/teralytics/prometheus-ecs-discovery),
but exports MSK instead of ECS.

## Help
Run `prometheus-msk-discovery --help` to get information.
The command line parameters that can be used are (an example invocation follows the list):
* `-config.cluster` (string) : ARN of the MSK cluster to scrape
* `-config.jmx-metrics` : add the JMX metrics targets to the output file (default true)
* `-config.node-metrics` : add the node metrics targets to the output file (default true)
* `-config.role-arn` (string) : ARN of the role to assume when scraping the AWS API (optional)
* `-config.scrape-interval` (duration) : interval at which to scrape the AWS API for MSK service discovery information (default 1m0s)
* `-config.scrape-times` (int) : how many times to scrape before exiting (0 = infinite)
* `-config.write-to` (string) : path of the file to write MSK service discovery information to (default "msk_file_sd.yml")
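For instance, a hypothetical one-shot run that scrapes once and exits (the cluster ARN and output path below are placeholders, not values from this repository) could combine the flags like this:

```
prometheus-msk-discovery \
  -config.cluster arn:aws:kafka:eu-west-1:123456789012:cluster/example/1111-2222 \
  -config.scrape-times 1 \
  -config.write-to /tmp/msk_file_sd.yml
```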
## Usage

First, build this program using the usual `go get` mechanism.
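For example (the module path is inferred from the repository URL and is an assumption; newer, module-aware Go toolchains use `go install` instead):

```
# Classic GOPATH-era fetch-and-build (module path assumed from the repo URL):
go get github.com/bitkill/prometheus-msk-discovery

# Equivalent on a modern, module-aware toolchain:
go install github.com/bitkill/prometheus-msk-discovery@latest
```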
Then, run it as follows:
* Ensure the program can write to a directory readable by
your Prometheus master instance(s).
* Export the usual `AWS_REGION`, `AWS_ACCESS_KEY_ID` and
`AWS_SECRET_ACCESS_KEY` into the environment of the program,
making sure that the keys have access to the MSK API.
If the program needs to assume a different role to obtain
access, that role's ARN may be passed in via the
`-config.role-arn` option. This option also allows for
cross-account access, depending on which account the role
is defined in. (A complete run example follows the
configuration block below.)
* Start the program, using the command line option
`-config.write-to` to point the program at a file in a
folder that your Prometheus master can read from.
* Add a `file_sd_config` to your Prometheus master:

```
scrape_configs:
  - job_name: msk
    file_sd_configs:
      - files:
          - /path/to/msk_file_sd.yml
        refresh_interval: 10m
    # Drop unwanted labels using the labeldrop action
    metric_relabel_configs:
      - regex: cluster_arn
        action: labeldrop
```
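Putting the steps above together, a hypothetical end-to-end run (the region, credentials, ARNs and paths below are all placeholders) might look like this:

```
export AWS_REGION=eu-west-1
export AWS_ACCESS_KEY_ID=AKIA...        # placeholder credentials
export AWS_SECRET_ACCESS_KEY=...

prometheus-msk-discovery \
  -config.cluster arn:aws:kafka:eu-west-1:123456789012:cluster/example/1111-2222 \
  -config.role-arn arn:aws:iam::123456789012:role/msk-discovery \
  -config.write-to /etc/prometheus/msk_file_sd.yml
```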
That's it. You should begin seeing the program scraping the
AWS APIs and writing the discovery file (by default it does
that every minute, and by default Prometheus will reload the
file the minute it is written). After reloading your Prometheus
master configuration, this program will keep informing Prometheus,
via the discovery file, of new targets that it must scrape.
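For reference, the file written by the program is a standard Prometheus `file_sd` target list. A hypothetical entry (the broker hostnames and exporter ports are illustrative; the `cluster_arn` label is the one dropped by the `labeldrop` rule above) might look like:

```
- targets:
  - b-1.example.abc123.c2.kafka.eu-west-1.amazonaws.com:11001
  - b-2.example.abc123.c2.kafka.eu-west-1.amazonaws.com:11001
  labels:
    cluster_arn: arn:aws:kafka:eu-west-1:123456789012:cluster/example/1111-2222
```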