{"id":17526314,"url":"https://github.com/f18m/cmonitor","last_synced_at":"2026-01-12T15:40:11.995Z","repository":{"id":37943285,"uuid":"178741309","full_name":"f18m/cmonitor","owner":"f18m","description":"A Docker/LXC/Kubernetes, database-free, lightweight container performance monitoring solution, perfect for ephemeral containers (e.g. containers used for DevOps automatic testing). Can also be used with InfluxDB, Prometheus and Grafana","archived":false,"fork":false,"pushed_at":"2025-12-12T20:42:04.000Z","size":17216,"stargazers_count":60,"open_issues_count":13,"forks_count":9,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-12-14T11:02:18.782Z","etag":null,"topics":["cgroups","containers","continuous-testing","cpu","devops","devops-tools","disk","docker","grafana","influxdb-client","kubernetes-monitoring","lxc-containers","memory","monitor","monitoring","performance","prometheus-client","system"],"latest_commit_sha":null,"homepage":"https://f18m.github.io/cmonitor","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/f18m.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":"f18m"}},"created_at":"2019-03-31T20:56:09.000Z","updated_at":"2025-11-20T21:55:25.000Z","dependencies_parsed_at":"2024-05-28T11:59:44.490Z","dependency_job_id":"becd54c4-078c-4278-80b0-219bfc3706e2","html_url":"https://github.com/f18m/cmonitor","commit_stats":null,"previous_names":[],"tags_count":20,"templa
te":false,"template_full_name":null,"purl":"pkg:github/f18m/cmonitor","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/f18m%2Fcmonitor","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/f18m%2Fcmonitor/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/f18m%2Fcmonitor/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/f18m%2Fcmonitor/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/f18m","download_url":"https://codeload.github.com/f18m/cmonitor/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/f18m%2Fcmonitor/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28341226,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-12T12:22:26.515Z","status":"ssl_error","status_checked_at":"2026-01-12T12:22:10.856Z","response_time":98,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cgroups","containers","continuous-testing","cpu","devops","devops-tools","disk","docker","grafana","influxdb-client","kubernetes-monitoring","lxc-containers","memory","monitor","monitoring","performance","prometheus-client","system"],"created_at":"2024-10-20T15:01:37.029Z","updated_at":"2026-01-12T15:40:11.973Z","avatar_url":"https://github.com/f18m.png","language":"C++","readme":"[![Build Status](https://github.com/f18m/cmonitor/actions/workflows/main.yml/badge.svg)](https://github.com/f18m/cmonitor/actions)\n[![Copr build status](https://copr.fedorainfracloud.org/coprs/f18m/cmonitor/package/cmonitor-collector/status_image/last_build.png)](https://copr.fedorainfracloud.org/coprs/f18m/cmonitor/ \"RPMs on Fedora COPR\")\n\n# cmonitor - lightweight container monitor\n\nA **Docker, LXC, Kubernetes, database-free, lightweight container performance monitoring solution**, perfect for ephemeral containers\n(e.g. containers used for DevOps automatic testing). 
Can also be used with InfluxDB, Prometheus and Grafana to monitor long-lived\ncontainers in real-time.\n\nThe project is composed of 2 parts: \n1) a **lightweight agent** (80KB native binary when built without Prometheus support; no JVM, Python or other interpreters needed) to collect actual CPU/memory/disk statistics (Linux-only)\n   and store them in a JSON file or stream them to a time-series database (InfluxDB and Prometheus are supported); this is the so-called `cmonitor-collector` utility;\n2) some simple **Python tools to process the generated JSONs**; the most important one is \"cmonitor_chart\" that turns the JSON into a self-contained HTML page\n   using [Google Charts](https://developers.google.com/chart) to visualize all collected data.\n\nThe collector utility is a cgroup-aware statistics collector; cgroups (i.e. Linux Control Groups) are the basic Linux technology used \nto create containers (you can [read more on them here](https://en.wikipedia.org/wiki/Cgroups)); this project thus aims at\nmonitoring your LXC/Docker/Kubernetes POD container performance by monitoring only the cgroup-level kernel-provided stats. 
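Since the kernel files to sample differ between cgroups v1 and v2, it is often useful to know which cgroups version your host is using; a quick way to check (a generic Linux command, not part of cmonitor) is:

```
stat -fc %T /sys/fs/cgroup
```

which prints `cgroup2fs` on a cgroups v2 (unified hierarchy) system and `tmpfs` on a cgroups v1 system.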
\nHowever, considering that systemd runs all software inside cgroups, `cmonitor-collector` can also be used to sample statistics about\nsoftware running outside any \"containerization\" technology (like LXC, Docker or Kubernetes).\n\nThis project supports only **Linux x86_64 architectures**.\n\nTable of contents of this README:\n\n- [cmonitor - lightweight container monitor](#cmonitor---lightweight-container-monitor)\n  - [Features](#features)\n  - [Yet-Another-Monitoring-Project?](#yet-another-monitoring-project)\n  - [Supported Linux Kernels](#supported-linux-kernels)\n  - [How to install](#how-to-install)\n    - [RPM (for Fedora, Centos)](#rpm-for-fedora-centos)\n    - [Debian package (for Debian, Ubuntu, etc)](#debian-package-for-debian-ubuntu-etc)\n    - [Docker](#docker)\n  - [How to build from sources](#how-to-build-from-sources)\n    - [On Fedora, Centos](#on-fedora-centos)\n    - [On Debian, Ubuntu](#on-debian-ubuntu)\n  - [How to use](#how-to-use)\n    - [Step 1: collect stats](#step-1-collect-stats)\n    - [Step 2: plot stats collected as JSON](#step-2-plot-stats-collected-as-json)\n    - [Usage scenarios and HTML result examples](#usage-scenarios-and-html-result-examples)\n      - [Monitoring the baremetal server (no containers)](#monitoring-the-baremetal-server-no-containers)\n      - [Monitoring your Docker container](#monitoring-your-docker-container)\n      - [Monitoring your Kubernetes POD](#monitoring-your-kubernetes-pod)\n    - [Connecting with InfluxDB and Grafana](#connecting-with-influxdb-and-grafana)\n    - [Connecting with Prometheus and Grafana](#connecting-with-prometheus-and-grafana)\n    - [Reference Manual](#reference-manual)\n  - [Project History](#project-history)\n  - [License](#license)\n\n## Features\n\nThis project collects performance data by sampling system-wide Linux statistics files (i.e. 
sampling `/proc`):\n\n- per-CPU-core usage;\n- memory usage (and memory pressure information);\n- network traffic (PPS and MB/s or Mbps);\n- disk load;\n- average Linux load;\n\nand can sample cgroup-specific (read: container-specific) statistics from `/sys/fs/cgroup` like:\n\n- CPU usage as reported by the `cpuacct` (CPU accounting) cgroupv1 or by the `cpu` cgroupv2;\n- CPU throttling reported under the `cpuacct` cgroup;\n- CPU usage per-process and per-thread (useful for multithreaded application monitoring);\n- memory usage and memory pressure information as reported by the `memory` cgroup;\n- network usage measured by sampling the network interfaces associated with a cgroup network namespace;\n- disk usage as reported by the `blkio` cgroup;\n\nThe statistics collector can be configured to collect all or a subset of the above statistics.\nMoreover, sub-second sampling is possible and up to 100 samples/sec can be collected in some cases (sampling all stats for a Docker container\ntypically takes around 1msec). This allows exploring fast transients in CPU/memory/network usage.\n\nFinally, the project allows you to easily post-process collected data to:\n* produce a **self-contained** HTML page which allows you to visualize all the performance data easily using [Google Charts](https://developers.google.com/chart/);\n* extract statistical information, e.g. average/median/peak CPU usage and CPU throttling, average/median/peak memory usage, etc;\n* hold large amounts of statistics when connected to time-series databases like InfluxDB and Prometheus.\n\n## Yet-Another-Monitoring-Project?\n\nYou may be thinking \"yet another monitoring project\" for containers. 
Indeed there are already quite a few open source solutions, e.g.:\n\n- [cAdvisor](https://github.com/google/cadvisor): a Google-sponsored utility to monitor containers\n- [netdata](https://github.com/netdata/netdata): a web application targeting monitoring of large clusters\n- [collectd](https://collectd.org/): a system statistics collection daemon (not much container-oriented though)\n- [metrics-server](https://github.com/kubernetes-sigs/metrics-server): the official Kubernetes metrics server (Kubernetes only)\n- [process-exporter](https://github.com/ncabatoff/process-exporter): a utility to export generic processes' stats to Prometheus \n\nAlmost all of these are very complete solutions that allow you to monitor swarms of containers, in real time.\nThe downside is that all these projects require you to set up an infrastructure (usually a time-series database) that collects\nall the statistics in real time, and then to have some powerful web platform (e.g. Grafana) render those time-series.\nAll that is fantastic for **persistent** containers.\n\nThis project instead is focused on providing a database-free, lightweight container performance monitoring solution, \nperfect for **ephemeral** containers (e.g. containers used for DevOps automatic testing). The idea is much simpler:\n1) you collect data for your container (or, well, your physical server) using a small collector program (written in C++ to\n  avoid Java virtual machines, Python interpreters or the like!) that saves data on disk in JSON format;\n2) whenever needed, the JSON can be either converted to a **self-contained** HTML page for human inspection or some kind of\n   algorithm can be run to perform targeted analysis (e.g. 
imagine you need to search for time ranges where high-CPU usage was\n   combined with high-memory usage or where instead CPU usage was apparently low but the CPU was throttled due to cgroup limits);\n3) the human-friendly HTML file, or the result of the analysis, can then be sent by email, stored in a tarball or as an \"artifact\"\n   of your CI/CD. The idea is that these post-processing results will have no dependencies at all on any infrastructure,\n   so they can be consumed anywhere at any time (in other words you don't need to keep a time-series database available 24/7\n   to dig up the performance results of your containers).\n\nMoreover, cmonitor is the only tool (to the best of the author's knowledge) that can collect CPU usage of multithreaded applications\nwith per-thread granularity.\n\n## Supported Linux Kernels\n\nCmonitor version 2.0 and higher supports both cgroups v1 and cgroups v2.\nThis means that the `cmonitor-collector` utility can run on any Linux kernel regardless of its version and its boot options \n(since boot options may alter the cgroups technology in use).\n\nNote that the `cmonitor-collector` utility is currently unit-tested against:\n* cgroups created by Docker/systemd on Centos 7 (Linux kernel v3.10.0), click [here](collector/src/tests/centos7-Linux-3.10.0-x86_64-docker/README.md) for more info\n* cgroups created by Docker/systemd on Ubuntu 20.04 (Linux kernel v5.4.0), click [here](collector/src/tests/ubuntu20.04-Linux-5.4.0-x86_64-docker/README.md) for more info\n* cgroups created by Docker/systemd on Fedora 35 (Linux kernel v5.14.17), click [here](collector/src/tests/fedora35-Linux-5.14.17-x86_64-docker/README.md) for more info\n\nOther kernels will be tested in the near future. Of course, pull requests are welcome to extend coverage.\n\nRegarding the cgroup driver, `cmonitor-collector` is tested against both the `cgroupfs` driver (used e.g. 
by Docker to create cgroups\nfor containers using cgroups v1) and the `systemd` driver (which creates cgroups for the baremetal environment, not for containers).\nTo find out which cgroup driver and which cgroup version you are using when launching e.g. Docker containers, you can run:\n\n```\ndocker info | grep -i cgroup\n```\n\nYou may also be interested in this article https://lwn.net/Articles/676831/ for more details on the Docker vs systemd friction in the Linux world.\n\n\n## How to install\n\n\n### RPM (for Fedora, Centos)\n\nYou can get started with cmonitor by installing a native RPM.\nThis project uses the [COPR](https://copr.fedorainfracloud.org/coprs/f18m/cmonitor/) repository to maintain\nalways up-to-date RPMs for Fedora and Centos distributions. Just run:\n\n```\nyum install -y yum-plugin-copr\nyum copr enable -y f18m/cmonitor\nyum install -y cmonitor-collector cmonitor-tools\n```\n\n(or use `dnf` if you prefer).\n\nNote that the RPM `cmonitor-collector` has no dependency on Python and only a very small set of dependencies overall (GNU libc and a few others),\nso it can be installed easily everywhere. The RPM `cmonitor-tools` instead requires Python3.\n\nFinally, note that the Fedora COPR infrastructure will retain only the very **latest** version of the cmonitor RPMs.\nIf your CI/CD relies on a particular version of cmonitor, the suggestion is to download and store the RPM version you need.\n\n\n### Debian package (for Debian, Ubuntu, etc)\n\nYou can get started with cmonitor by installing it as a Debian package.\nThe Debian packages are built using the [Ubuntu private PPA service](https://launchpad.net/~francesco-montorsi/+archive/ubuntu/cmonitor). \nJust run:\n\n```\nadd-apt-repository ppa:francesco-montorsi/cmonitor\napt-get install cmonitor-collector cmonitor-tools\n```\n\nNote that the Debian package `cmonitor-collector` has no dependency on Python and only a very small set of dependencies overall (GNU libc and a few others),\nso it can be installed easily everywhere. 
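After installation, a quick sanity check that the collector is on your PATH (using the `-v, --version` option documented in the reference manual below) is:

```
cmonitor_collector --version
```

This obviously requires the `cmonitor-collector` package installed above.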
The Debian package `cmonitor-tools` instead requires Python3.\n\nWARNING: I'm having trouble maintaining the RPM, Docker and Ubuntu packaging for this project, so typically the Ubuntu (.deb) package is\nupdated only later, when I have time. If you want to test the very latest cmonitor release as a .deb, please let me know; I might be able to push the latest\nrelease to my PPA.\n\n\n### Docker\n\nAs an alternative to native packages, you can use the `cmonitor_collector` utility as a Docker container:\n\n```\ndocker run -d \\\n    --rm \\\n    --name=cmonitor-baremetal-collector \\\n    --network=host \\\n    --pid=host \\\n    --volume=/sys:/sys:ro \\\n    --volume=/etc/os-release:/etc/os-release:ro \\\n    --volume=$(pwd):/perf:rw \\\n    f18m/cmonitor:latest \\\n    --sampling-interval=1  ...\n```\n\nwhich runs the Docker image for this project from [Docker Hub](https://hub.docker.com/r/f18m/cmonitor).\nNote that the Dockerfile entrypoint is `cmonitor_collector` and thus any [supported CLI option](#reference-manual) can be provided\nat the end of the `docker run` command.\nThe volume mount of `/etc/os-release` and the `--network=host` option are required to allow `cmonitor_collector` to correctly identify the real host\nbeing monitored (otherwise `cmonitor_collector` reports the random hash generated by Docker as the hostname).\nThe `--pid=host` option is required to allow `cmonitor_collector` to monitor processes generated by other containers.\nFinally, the volume mount of `/sys` exposes all the cgroups of the baremetal host to the `cmonitor_collector` container and thus enables the collector\nutility to access the stats of all other running containers; this is required by similar tools like [cAdvisor](https://github.com/google/cadvisor) as well.\n\n\n## How to build from sources\n\nThe build process is composed of 2 steps, as for most projects:\n1) install dependencies\n2) compile cmonitor C/C++ code\n\nStep 1 varies considerably depending on whether you want to enable the Prometheus 
integration or not.\nIndeed, to support Prometheus, the [prometheus-cpp client library](https://github.com/jupp0r/prometheus-cpp) needs to be installed, together\nwith all its dependencies (civetweb, openSSL, libcurl). Since the prometheus-cpp library is not packaged in most distributions (at least as of Aug 2022)\nwe use the Conan package manager to fetch the [prometheus-cpp Conan package](https://conan.io/center/prometheus-cpp).\nIf you are comfortable with Conan, you can build from sources with Prometheus support by passing the PROMETHEUS_SUPPORT=1 flag to GNU make.\nIf instead you are not interested in Prometheus support or have trouble with Conan, it's suggested to build with the PROMETHEUS_SUPPORT=0 flag to GNU make.\n\n### On Fedora, Centos\n\nFirst of all, check out this repository on your Linux box using git or by decompressing a tarball of a release.\nThen run:\n\n```\n# install compiler tools \u0026 library dependencies with DNF:\nsudo dnf install -y gcc-c++ make gtest-devel fmt-devel benchmark-dev\n\n# install dependencies with Conan:\n# (this part can be skipped if you are not interested in Prometheus support)\npip3 install conan\nconan profile new default --detect\nconan profile update settings.compiler.libcxx=libstdc++11 default\nconan install . 
--build=missing\n# if all Conan steps were successful, then enable Prometheus support:\nexport PROMETHEUS_SUPPORT=1\n\n# finally build cmonitor C/C++ code:\nmake all -j\nmake test                                         # optional step to run unit tests\nsudo make install DESTDIR=/usr/local BINDIR=bin   # to install in /usr/local/bin\n```\n\n### On Debian, Ubuntu\n\nFirst of all, check out this repository on your Linux box using git or by decompressing a tarball of a release.\nThen run:\n\n```\nsudo apt install -y libgtest-dev libbenchmark-dev python3 libfmt-dev g++\nmake all -j\nmake test                                    # optional step to run unit tests\nsudo make install DESTDIR=/usr/local BINDIR=bin   # to install in /usr/local/bin\n```\n\n\n\n## How to use\n\n### Step 1: collect stats\n\nThe \"cmonitor-collector\" RPM/Debian packages install a single utility, named `cmonitor_collector`.\nIt can be launched as simply as:\n\n```\ncmonitor_collector --sampling-interval=3 --output-directory=/home\n```\n\n(on baremetal) or as a Docker container:\n\n```\ndocker run -d \\\n    --rm \\\n    --name=cmonitor-baremetal-collector \\\n    --network=host \\\n    --pid=host \\\n    --volume=/sys:/sys:ro \\\n    --volume=/etc/os-release:/etc/os-release:ro \\\n    --volume=/home:/perf:rw \\\n    f18m/cmonitor:latest \\\n    --sampling-interval=1  ...\n```\n\nto produce in the `/home` folder a JSON file with CPU/memory/disk/network stats for the container,\nsampling all supported performance statistics every 3 seconds.\n\nOnce the JSON is produced, the next steps are either:\n\n- inject that JSON into InfluxDB (mostly useful for **persistent** containers that you want to monitor in real-time);\n  see the section \"Connecting with InfluxDB and Grafana\" below;\n- or use the `cmonitor_chart` utility to convert that JSON into a self-contained HTML file (mostly useful for **ephemeral** containers);\n  see below for practical examples.\n\nSee the [supported CLI options](#reference-manual) section for the complete 
list of accepted options.\n\n\n\n### Step 2: plot stats collected as JSON\n\nTo plot the JSON containing the collected statistics, simply launch the `cmonitor_chart` utility installed together\nwith the RPM/Debian package, with the JSON collected from `cmonitor_collector`:\n\n```\ncmonitor_chart --input=/path/to/json-stats.json --output=\u003coptional HTML output filename\u003e\n```\n\nNote that to save space/bandwidth you can also gzip the JSON file and pass it gzipped directly to `cmonitor_chart`.\n\n\n\n### Usage scenarios and HTML result examples\n\n\n#### Monitoring the baremetal server (no containers)\n\nIf you want to monitor the performance of an entire server (or worker node in Kubernetes terminology), you can either:\na) install `cmonitor_collector` as an RPM or APT package following the instructions, or\nb) use `cmonitor_collector` as a Docker container;\nsee [How to install](#how-to-install) for more information.\n\nExample results:\n\n1) [baremetal1](https://f18m.github.io/cmonitor/examples/baremetal1.html): \n   example of a graph generated with the performance stats collected from a physical (baremetal) server by running `cmonitor_collector` installed as an RPM; \n   note that despite the absence of any kind of container, the `cmonitor_collector` utility (like just any other software in \n   modern Linux distributions) was running inside the default \"user.slice\" cgroup and collected both the stats of that cgroup \n   and all baremetal stats (which in this case mostly coincide since the \"user.slice\" cgroup contains almost all running processes of the server);\n   \n2) [baremetal2](https://f18m.github.io/cmonitor/examples/baremetal2.html): \n   a longer example of collected statistics (it results in a larger file, which may take some time to download), generated\n   from 9 hours of performance stats collected from a physical server running Centos7 and with 56 CPUs (!!); \n   the `cmonitor_collector` utility was installed as an RPM and running inside the default 
\"user.slice\" cgroup so both \"CGroup\" and \"Baremetal\"\n   graphs are present;\n\n3) [docker_collecting_baremetal_stats](https://f18m.github.io/cmonitor/examples/docker-collecting-baremetal-stats.html): \n   example of a graph generated with the performance stats collected from a physical server by the `cmonitor_collector` Docker container;\n   in this case cgroup stat collection was explicitly disabled so that only baremetal performance graphs are present;\n   see the [Docker installation](#docker) information as a reference for how the container was started.\n\n\n#### Monitoring your Docker container\n\nIn this case you can simply install cmonitor as an RPM or APT package following the instructions in [How to install](#how-to-install)\nand then launch the `cmonitor_collector` utility as any other Linux daemon, specifying the name of the cgroup associated with\nthe Docker container to monitor.\nFinding out the cgroup associated with a Docker container can require some detailed information about your OS / runtime configuration;\nfor example you should know whether you're using cgroups v1 or v2 and which Docker cgroup driver you are using (cgroupfs or systemd);\nplease see the [official Docker page](https://docs.docker.com/config/containers/runmetrics/#find-the-cgroup-for-a-given-container)\nfor more details.\nIn the following example a [Redis](https://hub.docker.com/_/redis) Docker container is launched with the name 'userapp' and its\nCPU, memory, network and disk usage are monitored by launching a `cmonitor_collector` instance:\n\n```\ndocker run --name userapp --detach redis:latest\n\nDOCKER_ID=$(docker ps -aq --no-trunc -f \"name=userapp\")\n\nCGROUP_NAME=docker/${DOCKER_ID}                       # when 'cgroupfs' driver is in use\nCGROUP_NAME=system.slice/docker-${DOCKER_ID}.scope    # when 'systemd' driver is in use\n\n# here we exploit the following fact: the cgroup of each Docker container \n# is always named 'docker/container-ID'... 
at least when using Moby engine\ncmonitor_collector \\\n   --num-samples=until-cgroup-alive \\\n   --cgroup-name=${CGROUP_NAME} \\\n   --collect=cgroup_threads,cgroup_cpu,cgroup_memory,cgroup_network --score-threshold=0 \\\n   --custom-metadata=cmonitor_chart_name:userapp \\\n   --sampling-interval=3 \\\n   --output-filename=docker-userapp.json\n```\n\nAlternatively the Redis Docker container (or any other one) can be monitored from `cmonitor_collector` running as a Docker itself:\n\n```\ndocker run -d \\\n    --rm \\\n    --name=cmonitor-collector \\\n    --network=host \\\n    --pid=host \\\n    --volume=/sys:/sys:ro \\\n    --volume=/etc/os-release:/etc/os-release:ro \\\n    --volume=/home:/perf:rw \\\n    f18m/cmonitor:latest \\\n   --num-samples=until-cgroup-alive \\\n   --cgroup-name=${CGROUP_NAME} \\\n   --collect=cgroup_threads,cgroup_cpu,cgroup_memory,cgroup_network --score-threshold=0 \\\n   --custom-metadata=cmonitor_chart_name:userapp \\\n   --sampling-interval=3 \\\n   --output-filename=docker-userapp.json\n```\n\nSee [Docker usage](#docker) paragraph for more info about the \"docker run\" options required.\nSee example #4 below to view the results produced by using the `cmonitor_collector` Docker with the command above.\n\nExample results:\n\n1) [docker_userapp](https://f18m.github.io/cmonitor/examples/docker-userapp.html): example of the chart generated by monitoring\n   from the baremetal a simple Redis docker simulating a simple application, doing some CPU and I/O. In this example the \n   `--collect=cgroup_threads` is further used to show Redis CPU usage by-thread.\n\n2) [docker_stress_test_cpu](https://f18m.github.io/cmonitor/examples/docker-stress-test-cpu.html): example of the chart generated\n   by monitoring from the baremetal a [stress-ng](https://hub.docker.com/r/alexeiled/stress-ng/) Docker with a CPU limit set. 
\n   This example shows how cmonitor will report \"CPU throttling\" and how useful it is to detect cases where a Docker container is trying to\n   use too much CPU.\n\n3) [docker_stress_test_memory](https://f18m.github.io/cmonitor/examples/docker-stress-test-mem.html): example of the chart generated\n   by monitoring from the baremetal a [stress-ng](https://hub.docker.com/r/alexeiled/stress-ng/) Docker with a MEMORY limit set. \n   This example shows how cmonitor will report \"Memory allocation failures\" and how useful it is to detect cases where a Docker container is trying to\n   use too much memory.\n\n4) [docker_collecting_docker_stats](https://f18m.github.io/cmonitor/examples/docker-collecting-docker-stats.html): example of the chart generated\n   by monitoring a Redis Docker container from the `cmonitor_collector` Docker container.\n   This example shows how cmonitor is able to run as a Docker container monitoring other Docker containers.\n\n\n#### Monitoring your Kubernetes POD\n\nIn this case you can simply install cmonitor as an RPM or APT package following the instructions in [How to install](#how-to-install)\non all the worker nodes where Kubernetes might be scheduling the POD you want to monitor.\nThen, you can launch the `cmonitor_collector` utility as any other Linux daemon, specifying the name of the cgroup associated\nwith the Kubernetes POD (or more precisely, associated with one of the containers inside the Kubernetes POD, in case it also contains\nsidecar containers).\nFinding the name of the cgroup associated with your POD is tricky. 
It depends on the specific Kubernetes Container Runtime Interface\n(CRI), but the following example shows a generic-enough procedure:\n\n```\nPODNAME=\u003cyour-pod-name\u003e\nCONTAINERNAME=\u003cmain-container-name\u003e\nCONTAINERID=$(kubectl get pod ${PODNAME} -o json | jq -r \".status.containerStatuses[] | select(.name==\\\"${CONTAINERNAME}\\\") | .containerID\" | sed  's@containerd://@@')\nFULL_CGROUP_NAME=$(find /sys/fs/cgroup -name ${CONTAINERID} | head -1 |sed 's@/sys/fs/cgroup/memory/@@')\n\ncmonitor_collector \\\n   --num-samples=until-cgroup-alive \\\n   --cgroup-name=${FULL_CGROUP_NAME} \\\n   --collect=cgroup_threads,cgroup_cpu,cgroup_memory --score-threshold=0 \\\n   --custom-metadata=cmonitor_chart_name:${PODNAME} \\\n   --sampling-interval=3 \\\n   --output-filename=pod-performances.json\n```\n\n\n### Connecting with InfluxDB and Grafana\n\nThe `cmonitor_collector` can be connected to an [InfluxDB](https://www.influxdata.com/) deployment to store collected data (this can happen\nin parallel to the JSON default storage). 
This can be done by simply providing the IP and port of the InfluxDB instance when launching\nthe collector:\n\n```\ncmonitor_collector \\\n   --remote-ip 1.2.3.4 --remote-port 8086 --remote influxdb\n```\n\nThe InfluxDB instance can then be used as a data source for graphing tools like [Grafana](https://grafana.com/)\nwhich allow you to create nice interactive dashboards like the following one:\n\n![Basic Dashboard Example](examples/grafana-dashboards/BasicDashboardExample.png)\n\nYou can also play with the [live dashboard example](https://snapshot.raintank.io/dashboard/snapshot/JdX4hDukUCGuJHsXymM86KbFO5LC9GrY?orgId=2\u0026from=1558478922136\u0026to=1558479706448)\n\nTo quickly and easily set up the \"cmonitor_collector-InfluxDB-Grafana\" chain you can check out the repo of this project and run:\n\n```\nmake -C examples regen_grafana_screenshots\n```\n\nwhich uses Docker files to deploy a temporary setup and fill the InfluxDB with 10 minutes of data collected from the baremetal.\n\n\n### Connecting with Prometheus and Grafana\n\nThe `cmonitor_collector` can be connected to a [Prometheus](https://prometheus.io/) instance to store collected data (this can happen\nin parallel with the default JSON storage). 
This can be done by simply providing, when launching the collector, the IP and port on which it should listen for Prometheus scrape requests:

```
cmonitor_collector \
   --remote-ip 10.1.2.3 --remote-port 9092 --remote prometheus
```

The Prometheus instance can then be used as a data source for graphing tools like [Grafana](https://grafana.com/),
which allow you to create nice interactive dashboards (see the examples in the InfluxDB section).


### Reference Manual

The most detailed documentation on how to use the cmonitor tools is available via the `--help` option:

```
cmonitor_collector: Performance statistics collector.
List of arguments that can be provided follows:

Data sampling options
  -s, --sampling-interval=<REQ ARG>     Seconds between samples of data (default is 60 seconds). Minimum value is 0.01sec, i.e. 10msecs.
  -c, --num-samples=<REQ ARG>           Number of samples to collect; special values are:
                                           '0': means forever (default value)
                                           'until-cgroup-alive': until the cgroup selected by --cgroup-name is alive
  -k, --allow-multiple-instances        Allow multiple simultaneously-running instances of cmonitor_collector on this system.
                                        Default is to block attempts to start more than one background instance.
  -F, --foreground                      Stay in foreground.
  -C, --collect=<REQ ARG>               Collect specified list of performance stats. Available performance stats are:
                                          'cpu': collect per-core CPU stats from /proc/stat
                                          'memory': collect memory stats from /proc/meminfo, /proc/vmstat
                                          'disk': collect disk stats from /proc/diskstats
                                          'network': collect network stats from /proc/net/dev
                                          'load': collect system load stats from /proc/loadavg
                                          'cgroup_cpu': collect CPU stats from the 'cpuacct' cgroup
                                          'cgroup_memory': collect memory stats from 'memory' cgroup
                                          'cgroup_network': collect network statistics by interface for the network namespace of the cgroup
                                          'cgroup_processes': collect stats for each process inside the 'cpuacct' cgroup
                                          'cgroup_threads': collect stats for each thread inside the 'cpuacct' cgroup
                                          'all_baremetal': the combination of 'cpu', 'memory', 'disk', 'network'
                                          'all_cgroup': the combination of 'cgroup_cpu', 'cgroup_memory', 'cgroup_processes'
                                          'all': the combination of all previous stats (this is the default)
                                        Note that a comma-separated list of above stats can be provided.
  -e, --deep-collect                    Collect all available details for the performance statistics enabled by --collect.
                                        By default, for each category, only the stats that are used by the 'cmonitor_chart' companion utility
                                        are collected. With this option a more detailed but larger JSON / InfluxDB data stream is produced.
  -g, --cgroup-name=<REQ ARG>           If cgroup sampling is active (--collect=cgroups*), this option allows to provide explicitly the name of
                                        the cgroup to monitor. If 'self' value is passed (the default), the statistics of the cgroups where
                                        cmonitor_collector runs will be collected. Note that this option is mostly useful when running
                                        cmonitor_collector directly on the baremetal since a process running inside a container cannot monitor
                                        the performances of other containers.
  -t, --score-threshold=<REQ ARG>       If cgroup process/thread sampling is active (--collect=cgroup_processes/cgroup_threads) use the provided
                                        score threshold to filter out non-interesting processes/threads. The 'score' is a number that is linearly
                                        increasing with the CPU usage. Defaults to '1' to filter out all processes/threads having zero CPU usage.
                                        Use '0' to turn off filtering by score.
  -M, --custom-metadata=<REQ ARG>       Allows to specify custom metadata key:value pairs that will be saved into the JSON output (if saving data
                                        locally) under the 'header.custom_metadata' path. Can be used multiple times. See usage examples below.

Options to save data locally
  -m, --output-directory=<REQ ARG>      Write output JSON and .err files to provided directory (defaults to current working directory).
  -f, --output-filename=<REQ ARG>       Name the output files using provided prefix instead of defaulting to the filenames:
                                                hostname_<year><month><day>_<hour><minutes>.json  (for JSON data)
                                                hostname_<year><month><day>_<hour><minutes>.err   (for error log)
                                        Special argument 'stdout' means JSON output should be printed on stdout and errors/warnings on stderr.
                                        Special argument 'none' means that JSON output must be disabled.
  -P, --output-pretty                   Generate a pretty-printed JSON file instead of a machine-friendly JSON (the default).

Options to stream data remotely
  -r, --remote=<REQ ARG>                Set the type of remote target: 'none' (default), 'influxdb' or 'prometheus'.
  -i, --remote-ip=<REQ ARG>             When remote is InfluxDB: IP address or hostname of the InfluxDB instance to send measurements to;
                                        When remote is Prometheus: listen address, defaults to 0.0.0.0 (to accept connections from all).
  -p, --remote-port=<REQ ARG>           When remote is InfluxDB: port of server;
                                        When remote is Prometheus: listen port, defaults to 8080.
  -X, --remote-secret=<REQ ARG>         InfluxDB only: set the collector secret (by default use environment variable CMONITOR_SECRET).
  -D, --remote-dbname=<REQ ARG>         InfluxDB only: set the InfluxDB database name (default is 'cmonitor').

Other options
  -v, --version                         Show version and exit
  -d, --debug                           Enable debug mode; automatically activates --foreground mode
  -h, --help                            Show this help

Examples:
    1) Collect data from OS every 5 mins all day:
        cmonitor_collector -s 300 -c 288 -m /home/perf
    2) Use the defaults (-s 60, collect forever), saving to custom file in background:
        cmonitor_collector --output-filename=my_server_today
    3) Collect data from a docker container:
        DOCKER_NAME=your_docker_name
        DOCKER_ID=$(docker ps -aq --no-trunc -f "name=$DOCKER_NAME")
        cmonitor_collector --allow-multiple-instances --num-samples=until-cgroup-alive
                        --cgroup-name=docker/$DOCKER_ID --custom-metadata='cmonitor_chart_name:$DOCKER_NAME'
                        --custom-metadata='additional_metadata:some-data'
    4) Monitor a docker container sending data to an InfluxDB (only, no JSON output):
        cmonitor_collector --num-samples=until-cgroup-alive --cgroup-name=docker/$DOCKER_ID
                        --output-filename=none --remote=influxdb --remote-ip myinfluxdb.foobar.com --remote-port 8086
    5) Monitor a docker container and expose HTTP endpoint for Prometheus scraping (no JSON output):
        cmonitor_collector -s 5 --num-samples=until-cgroup-alive --cgroup-name=docker/$DOCKER_ID
                        --output-filename=none --remote=prometheus  --collect=all_cgroup --score-threshold=0
        curl http://localhost:8080/metrics # test scraping
    6) Pipe into 'myprog' half-a-day of sampled performance data:
        cmonitor_collector --sampling-interval=30 --num-samples=1440 --output-filename=stdout --foreground | myprog

NOTE: this is the cgroup-aware fork of original njmon software (see https://github.com/f18m/cmonitor)
```


## Project History

This project started as a fork of [Nigel's performance Monitor for Linux](http://nmon.sourceforge.net), adding
cgroup-awareness, but it has quickly evolved to the point where it shares very little code with the original `njmon` tool.

Some key differences now include:
 - cgroup-awareness: several per-cgroup performance stats are collected by `cmonitor_collector` and plotted by `cmonitor_chart`;
 - more command-line options for `cmonitor_collector`;
 - the HTML page generated by `cmonitor_chart` is organized differently;
 - `cmonitor_collector` can connect to InfluxDB directly and does not need intermediate Python scripts to transform
   the streamed JSON data into an InfluxDB-compatible stream;
 - Prometheus support.

This fork supports only Linux x86_64 architectures; support for AIX/PowerPC (present in the original `nmon`) has been dropped.

- Original project: [http://nmon.sourceforge.net](http://nmon.sourceforge.net)
- Other forks: [https://github.com/axibase/nmon](https://github.com/axibase/nmon)


## License

Just like the [original project](http://nmon.sourceforge.net), this project is licensed under [GNU GPL 2.0](LICENSE)
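
## Appendix: Reading Back Custom Metadata

As a companion to example 3 in the reference manual above: once `cmonitor_collector` has saved its JSON output locally, the key:value pairs passed via `--custom-metadata` can be read back under the `header.custom_metadata` path documented by `--help`. The following is a minimal consumer-side sketch (not part of cmonitor itself); the file name is a placeholder, and the exact surrounding JSON layout beyond that documented path is an assumption:

```python
import json

def read_custom_metadata(path):
    """Return the 'header.custom_metadata' dict from a cmonitor JSON file.

    Per `cmonitor_collector --help`, pairs given via --custom-metadata are
    stored under the 'header.custom_metadata' path of the output document.
    """
    with open(path) as f:
        doc = json.load(f)
    # .get() keeps this robust when no --custom-metadata option was used
    return doc.get("header", {}).get("custom_metadata", {})
```

For instance, after running example 3, `read_custom_metadata("your_docker_name_20230101_1200.json")` would be expected to return a dict containing the `cmonitor_chart_name` and `additional_metadata` keys (the exact output file name depends on the `--output-filename`/hostname defaults described above).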