# C++ HTTP/2 Mock Service

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Documentation](https://codedocs.xyz/testillano/h2agent.svg)](https://codedocs.xyz/testillano/h2agent/index.html)
[![API](https://img.shields.io/badge/API-OpenAPI%203.1-6ba539.svg)](https://testillano.github.io/h2agent/api/)
[![Coverage Status](https://img.shields.io/endpoint?url=https://testillano.github.io/h2agent/coverage/badge.json)](https://testillano.github.io/h2agent/coverage)
[![Ask Me Anything !](https://img.shields.io/badge/Ask%20me-anything-1abc9c.svg)](https://github.com/testillano)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/testillano/h2agent/graphs/commit-activity)
[![CI](https://github.com/testillano/h2agent/actions/workflows/ci.yml/badge.svg)](https://github.com/testillano/h2agent/actions/workflows/ci.yml)
[![Docker Pulls](https://img.shields.io/docker/pulls/testillano/h2agent.svg)](https://github.com/testillano/h2agent/pkgs/container/h2agent)

`H2agent` is a network service agent that enables **mocking HTTP/2 applications** (HTTP/1.x is also supported via an `nghttpx` reverse proxy).
It is mainly designed for testing, but it could even simulate or complement advanced services.

It is used intensively at E/// as part of the testing environment for some telecommunication products.

As a brief **summary**, we could <u>highlight the following features</u>:

* Mock types:

  * Server (unique).
  * Client (multiple clients may be provisioned).

* Testing:

  * Functional/component tests.
  * System tests (KPI, high load, robustness).
  * Congestion control.
  * Validations:
    * Optionally tied to provisions with machine states.
    * Sequence validation can be decoupled (ordered by user preference).
    * Traffic history inspection is available (REST API).

* Traffic protocols:

  * HTTP/1 and HTTP/2.
  * UDP.

* TLS/SSL support:

  * Server.
  * Client.

* Interfaces:

  * Administrative interface (REST API with JSON) to update, create and delete items:
    * Vault.
    * File manager configuration.
    * UDP events.
    * Schemas (Rx/Tx).
    * Logging system.
    * Traffic classification and provisioning configuration.
    * Client endpoints configuration.
    * Events data (server and clients) summary and inspection.
    * Events data configuration (global storage, history).
  * [Gherkin (BDD)](tools/gherkin-h2agent): Given/When/Then driver for the REST API.
    * Write mock scenarios in plain English.
    * Dump mode: generate JSON provisions from `.feature` files without a running instance.
  * Prometheus metrics (HTTP/1):
    * Counters by method and result.
    * Gauges and histograms (response delays, message size for Rx/Tx).
  * Log system (POSIX levels).
  * Command line:
    * Administrative & traffic interfaces (addresses, ports, security certificates).
    * Congestion control parameters (workers, maximum workers and maximum queue size).
    * Schema configuration JSON documents to be referenced (traffic validation).
    * Vault JSON document.
    * File manager configuration.
    * Traffic server configuration (classification and provision).
    * Metrics interface: histogram buckets for server and client and for every histogram type.
    * Clients connection behaviour (lazy/active).

* Schema validation:

  * Administrative interface.
  * Mock system (Tx/Rx requests).

* Traffic classification:

  * Full matching.
  * Regular expression matching.
  * Priority regular expression matching.
  * Query parameters filtering (sort/pass by/ignore).
  * Query parameters delimiters (ampersand/semicolon).

* Programming:

  * User-defined machine states (FSM).
  * Internal events system (data extraction from past events).
  * Vault.
  * File system operations.
  * UDP writing operations.
  * Response building (headers, body, status code, delay).
  * Transformation algorithms, with thousands of possible combinations:
    * Sources: URI, URI path, query parameters, request/response bodies and body paths, headers, eraser, math expressions, shell commands, random generation (ranges, sets), unix timestamps, strftime formats, sequences, dynamic variables, vaults, constant values, input (working) state, events, files (read).
    * Filters: regular expression captures and regex/replace, append, prepend, basic arithmetic (sum, multiply), equality, condition variables, differences, JSON constraints, schema id, date/time parsing (strptime) and formatting (strftime), regex key search.
    * Targets: dynamic variables, vaults (plain, typed object, JSON-string parsing), files (write), response body (as string, integer, unsigned, float, boolean, object, or object from JSON string), UDP through unix socket (write), response body path (same types as response body), headers, status code, response delay, output state, events, break conditions.
  * Multipart support.
  * Pseudo-notification mechanism (response delayed by vault condition).

* Training:

  * Questions and answers for project documentation using **openai** (ChatGPT-based).
  * Playground.
  * Demo.
  * Tester.
  * Kata exercises.

* Tools programs:

  * Matching helper.
  * Arash Partow helper (math expressions).
  * HTTP/2 client.
  * UDP server.
  * UDP server to trigger active HTTP/2 client requests.
  * UDP client.

---
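To make the sources/filters/targets model above concrete, a provision's transformation list chains items like the following. This is a hedged sketch: the exact field names and source/target tokens should be checked against the provision schema, and the captured `user-id` variable is purely illustrative.

```json
{
  "transform": [
    {
      "source": "request.uri.path",
      "filter": { "RegexCapture": "/app/v1/users/([0-9]+)" },
      "target": "var.user-id"
    },
    {
      "source": "var.user-id",
      "target": "response.body.json.string./id"
    }
  ]
}
```

Each item reads a source, optionally passes it through a filter, and writes the result into a target; here a hypothetical user identifier is captured from the URI path into a variable and then echoed into the response body.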
<details>
<summary>Table of Contents</summary>

- [Quick start](#quick-start)
- [Scope](#scope)
- [How can you use it ?](#how-can-you-use-it-)
- [Static linking](#static-linking)
- [Project image](#project-image)
- [Build project with docker](#build-project-with-docker)
- [Build project natively](#build-project-natively)
- [Testing](#testing)
- [Execution of main agent](#execution-of-main-agent)
- [Execution of matching helper utility](#execution-of-matching-helper-utility)
- [Execution of Arash Partow's helper utility](#execution-of-arash-partows-helper-utility)
- [Execution of h2client utility](#execution-of-h2client-utility)
- [UDP utilities](#udp-utilities)
  - [Execution of udp-server utility](#execution-of-udp-server-utility)
  - [Execution of udp-server-h2client utility](#execution-of-udp-server-h2client-utility)
  - [Execution of udp-client utility](#execution-of-udp-client-utility)
- [Working with unix sockets and docker containers](#working-with-unix-sockets-and-docker-containers)
- [Execution with TLS support](#execution-with-tls-support)
- [Metrics](#metrics)
- [Traces and printouts](#traces-and-printouts)
- [Training](#training)
- [Management interface](#management-interface)
- [Dynamic response delays](#dynamic-response-delays)
- [Reserved Vault Entries](#reserved-vaults)
- [How it is delivered](#how-it-is-delivered)
- [How it integrates in a service](#how-it-integrates-in-a-service)
- [Troubleshooting](#troubleshooting)
- [Contributing](#contributing)

</details>

---

## Quick start

**Theory**

* A ***[prezi](https://prezi.com/view/RFaiKzv6K6GGoFq3tpui/)*** presentation showing a complete and useful overview of the `h2agent` component architecture.
* A conversational bot for [***questions & answers***](./README.md#questions-and-answers) based on *Open AI*.

**Practice**

* Brief exercises to ***[play](./README.md#Play)*** with, showing basic configuration "games" to get a quick overview of the project possibilities.
* A tester GUI tool to ***[test](./README.md#Test)*** with, allowing quick interactions through the traffic and administrative interfaces.
* A ***[demo](./README.md#Demo)*** exercise which presents a basic use case to better understand the project essentials.
* And finally, a ***[kata](./README.md#Kata)*** training to acquire deeper knowledge of the project capabilities.

The exercises above demand a growing amount of attention and dedicated time. For that reason, they are presented in the indicated order, prioritizing simplicity for the user along the training process.

## Scope

When developing a network service, one often needs to integrate it with other services. However, integrating full-blown versions of such services in a development setup is not always suitable, for instance when they are either heavyweight or not fully developed.

`H2agent` can be used to replace one (or many) of those services, which allows development to progress and testing to be conducted in isolation against such a service.

`H2agent` supports HTTP/2 as a network protocol (also HTTP/1.x via proxy) and JSON as a data interchange language.

So, `h2agent` could be used as:

* **Server** mock: fully implemented.
* **Client** mock: fully implemented.

Also, `h2agent` can be configured through the **command line** but also dynamically through an **administrative HTTP/2 interface** (`REST API`). This last feature makes the process a key element within an ecosystem of remotely controlled agents, enabling a reliable and powerful orchestration system to develop all kinds of functional, load and integration tests. So, in summary, `h2agent` offers two execution planes:

* **Traffic plane**: application flows.
* **Control plane**: traffic flow orchestration, mock behavior control, and SUT surroundings monitoring and inspection.

Check the [releases](https://github.com/testillano/h2agent/releases) to get the latest packages, or read the following sections to build all the artifacts needed to start playing.
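As a taste of the control plane, a server expectation can be provisioned through the administrative REST API and then exercised through the traffic port. The admin endpoint path (`/admin/v1/server-provision`) and the provision fields shown here are assumptions for illustration; treat this as a hedged sketch rather than a verbatim recipe:

```shell
# Hedged sketch: build a minimal provision document for the server mock.
# Assumes default admin (8074) and traffic (8000) ports, and assumes the
# provision endpoint path /admin/v1/server-provision.
cat > provision.json <<'EOF'
{
  "requestMethod": "GET",
  "requestUri": "/app/v1/users/23",
  "responseCode": 200,
  "responseBody": { "id": 23, "name": "Ada" }
}
EOF

python3 -m json.tool provision.json   # sanity-check the document before posting it

# With a running h2agent (not executed here):
#   curl -s -XPOST --http2-prior-knowledge -d @provision.json http://localhost:8074/admin/v1/server-provision
#   curl -s --http2-prior-knowledge http://localhost:8000/app/v1/users/23
```

Once provisioned, the traffic request above would answer with the configured body, while unmatched URIs keep answering `501 Not Implemented` as shown in the proxy examples of this document.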
## How can you use it ?

The `h2agent` process (as well as the other project binaries) may be used natively, as a `docker` container, or as part of a `kubernetes` deployment.

The easiest way to build the project is using [containers](https://en.wikipedia.org/wiki/LXC) technology (this project uses `docker`). **To generate all the artifacts**, just type the following:

```bash
$ ./build.sh --auto
```

The option `--auto` builds the <u>builder image</u> (`--builder-image`), then the <u>project image</u> (`--project-image`) and finally the <u>project executables</u> (`--project`). Then you will have everything available to run the binaries in different modes:

* Run the <u>h2agent project image</u> with docker (the `./run.sh` script at the root directory can also be used):

  ```bash
  $ docker run --rm -it -p 8000:8000 -p 8074:8074 -p 8080:8080 ghcr.io/testillano/h2agent:latest # default entrypoint is the h2agent process
  ```

  The exported ports correspond to the server defaults: traffic (8000), administrative (8074) and metrics (8080), but of course you could configure your own external ones.
  You may override the default entrypoint (`/opt/h2agent`) to run another packaged binary (check the project `Dockerfile`), for example the simple client utility:

  ```bash
  $ docker run --rm -it --network=host --entrypoint "/opt/h2client" ghcr.io/testillano/h2agent:latest --uri http://localhost:8000/unprovisioned # run in another shell to get a response from the h2agent server launched above
  ```

  Or any other packaged utility (if you want to lighten the image size, write your own Dockerfile and take just what you need):

  ```bash
  $ docker run --rm -it --network=host --entrypoint "/opt/matching-helper" ghcr.io/testillano/h2agent:latest --help
  -or-
  $ docker run --rm -it --network=host --entrypoint "/opt/arashpartow-helper" ghcr.io/testillano/h2agent:latest --help
  -or-
  $ docker run --rm -it --network=host --entrypoint "/opt/h2client" ghcr.io/testillano/h2agent:latest --help
  -or-
  $ docker run --rm -it --network=host --entrypoint "/opt/udp-server" ghcr.io/testillano/h2agent:latest --help
  -or-
  $ docker run --rm -it --network=host --entrypoint "/opt/udp-server-h2client" ghcr.io/testillano/h2agent:latest --help
  -or-
  $ docker run --rm -it --network=host --entrypoint "/opt/udp-client" ghcr.io/testillano/h2agent:latest --help
  ```
* An HTTP/x proxy supporting HTTP/1.0, HTTP/1.1, HTTP/2 and HTTP/2 without HTTP/1.1 Upgrade (prior knowledge, as `h2agent` provides), with docker (the `./run.sh` script at the root directory can also be used with some prepends):

  This proxy, encapsulated within the Docker image, stays latent until activated by configuration.
  The proxy front-end ports are configured through environment variables: `H2AGENT_TRAFFIC_PROXY_PORT` (for traffic) and `H2AGENT_ADMIN_PROXY_PORT` (for administration). The proxy back-end ports are also configured, for the traffic and administrative interfaces, by means of `H2AGENT_TRAFFIC_SERVER_PORT` (8000 by default) and `H2AGENT_ADMIN_SERVER_PORT` (8074 by default). The traffic front-end forwards requests to the back-end port exposed by the `h2agent` traffic server. Likewise, administrative requests are forwarded to the back-end port of the `h2agent` administrative server.

  The proxy feature complements Tatsuhiro's `nghttp2` library, which only provides the HTTP/2 protocol without upgrade support from HTTP/1.

  Note that:
  `H2AGENT_TRAFFIC_SERVER_PORT` **must be aligned with the `--traffic-server-port` h2agent parameter**, or a `502 Bad Gateway` error will be obtained.
  `H2AGENT_ADMIN_SERVER_PORT` **must be aligned with the `--admin-server-port` h2agent parameter**, or a `502 Bad Gateway` error will be obtained.

  Example with the proxy enabled for the traffic interface (the same applies to enabling the administrative interface, or both):

  ```bash
  $ docker run --rm -it -p 8555:8001 -p 8000:8000 -p 8074:8074 -p 8080:8080 -e H2AGENT_TRAFFIC_PROXY_PORT=8001 ghcr.io/testillano/h2agent:latest &

  $ curl -i http://localhost:8555/arbitrary/path # through proxy (same using --http1.0 or --http1.1)
  HTTP/1.1 501 Not Implemented
  Date: <date>
  Transfer-Encoding: chunked
  Via: 2 nghttpx

  $ curl -i --http2 http://localhost:8555/arbitrary/path # through proxy
  HTTP/1.1 101 Switching Protocols
  Connection: Upgrade
  Upgrade: h2c

  HTTP/2 501
  date: <date>
  via: 2 nghttpx

  $ curl -i --http2-prior-knowledge http://localhost:8555/arbitrary/path # through proxy
  HTTP/2 501
  date: <date>
  via: 2 nghttpx

  $ curl -i --http2-prior-knowledge http://localhost:8000/arbitrary/path # directly to h2agent
  HTTP/2 501
  date: <date>
  ```

  This mode is also useful to play with `nginx` balancing capabilities (check this [gist](https://gist.github.com/testillano/3f7ff732850f42a6e7ee625aa182e617)).

* Run within a `kubernetes` deployment: the corresponding `helm charts` are normally packaged into releases. This is described in the ["how it is delivered"](#How-it-is-delivered) section, but in summary, you could do the following:

  ```bash
  $ # helm dependency update helm/h2agent # no dependencies at the moment
  $ helm install h2agent-example helm/h2agent --wait
  $ pod=$(kubectl get pod -l app.kubernetes.io/name=h2agent --no-headers -o name)
  $ kubectl exec ${pod} -c h2agent -- /opt/h2agent --help # run, for example, h2agent help
  ```

  You may enter the pod and play with the helper functions and examples (deployed with the chart under `/opt/utils`), which are automatically sourced on the `bash` shell anyway:

  ```bash
  $ kubectl exec -it ${pod} -- bash
  ```
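Regarding lightening the image, a minimal derived image can keep a single utility. This Dockerfile is only a sketch: the donor-stage pattern, the base image choice, and the assumption that `/opt/h2client` (path taken from the examples above) runs standalone are all hypothetical.

```dockerfile
# Hypothetical trimmed image: reuse the official image as a donor stage and
# keep only the h2client binary.
FROM ghcr.io/testillano/h2agent:latest AS donor

FROM ubuntu:22.04
COPY --from=donor /opt/h2client /opt/h2client
ENTRYPOINT ["/opt/h2client"]
```

Note that dynamically linked binaries would also need their shared libraries copied over, or a statically linked build (see the static linking section of this document).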
It is also possible to build the project natively (without containers) by installing all the dependencies on the local host:

```bash
$ ./build-native.sh # you may prepend a non-empty DEBUG variable value in order to troubleshoot the build procedure
```

So, you could run `h2agent` (or any other binary available under `build/<build type>/bin`) directly:

* Run a <u>project executable</u> natively (standalone):

  ```bash
  $ build/Release/bin/h2agent & # default server at 0.0.0.0 with traffic/admin/prometheus ports: 8000/8074/8080
  ```

  Provide `-h` or `--help` to get the **process help** (more information [here](#Execution-of-main-agent)) or execute any other project executable.

  You may also play with the project helper functions and examples:

  ```bash
  $ source tools/helpers.bash # type help at any moment after sourcing
  $ server_example # follow instructions or just source it: source <(server_example)
  $ client_example # follow instructions or just source it: source <(client_example)
  ```

## Static linking

Both build helpers (the `build.sh` and `build-native.sh` scripts) allow forcing a static link of the project, although this is [not recommended](https://stackoverflow.com/questions/57476533/why-is-statically-linking-glibc-discouraged):

```bash
$ STATIC_LINKING=TRUE ./build.sh --auto
- or -
$ STATIC_LINKING=TRUE ./build-native.sh
```

So, you could run the binaries regardless of whether the needed libraries are available or not (including `glibc`, with all its drawbacks).

The next sections describe in detail how to build the [project image](#Project-image) and the project executables ([using docker](#Build-project-with-docker) or [natively](#Build-project-natively)).

## Project image

This image is already available at the `github container registry` and `docker hub` for every repository `tag`, and also for master as `latest`:

```bash
$ docker pull ghcr.io/testillano/h2agent:<tag>
```

You could also build it using the script `./build.sh` located at the project root:

```bash
$ ./build.sh --project-image
```

This image is built with `./Dockerfile`.
Both `ubuntu` and `alpine` base images are supported, but the official image uploaded is the one based on `ubuntu`.
If you want to work with alpine-based images, you may build everything from scratch, including all the docker base images which are project dependencies.

## Build project with docker

### Builder image

This image is already available at the `github container registry` and `docker hub` for every repository `tag`, and also for master as `latest`:

```bash
$ docker pull ghcr.io/testillano/h2agent_builder:<tag>
```

You could also build it using the script `./build.sh` located at the project root:

```bash
$ ./build.sh --builder-image
```

This image is built with `./Dockerfile.build`.
Both `ubuntu` and `alpine` base images are supported, but the official image uploaded is the one based on `ubuntu`.
If you want to work with alpine-based images, you may build everything from scratch, including all the docker base images which are project dependencies.

### Usage

The builder image is used to build the project. To run the compilation over this image, again, just run with `docker`:

```bash
$ envs="-e MAKE_PROCS=$(grep processor /proc/cpuinfo -c) -e BUILD_TYPE=Release"
$ docker run --rm -it -u $(id -u):$(id -g) ${envs} -v ${PWD}:/code -w /code \
          ghcr.io/testillano/h2agent_builder:<tag>
```

You could generate documentation passing extra arguments to the [entry point](https://github.com/testillano/nghttp2/blob/master/deps/build.sh) behind:

```bash
$ docker run --rm -it -u $(id -u):$(id -g) ${envs} -v ${PWD}:/code -w /code \
          ghcr.io/testillano/h2agent_builder:<tag> "" doc
```

You could also build the project using the script `./build.sh` located at the project root:

```bash
$ ./build.sh --project
```
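The `MAKE_PROCS` computation above counts processors by grepping `/proc/cpuinfo`. Where GNU coreutils are available, `nproc` gives the same number; a small fallback sketch (not part of the project scripts) could be:

```shell
# Parallel build jobs: prefer nproc, fall back to counting /proc/cpuinfo entries.
procs="$(nproc 2>/dev/null || grep -c processor /proc/cpuinfo)"
echo "MAKE_PROCS=${procs}"
```

Either value can then be exported through `-e MAKE_PROCS=...` exactly as in the docker invocation above.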
## Build project natively

It may be hard to collect every dependency, so there is a native build **automation script**:

```bash
$ ./build-native.sh
```

Note 1: this script is tested on `ubuntu bionic`, so some requirements might not be fulfilled in other distributions.

Note 2: once the dependencies have been installed, you may just type `cmake . && make` to get incremental native builds.

Note 3: unless stated otherwise, this document assumes that the binaries (used in the examples) are natively built.

Anyway, we will describe the common steps for a `cmake-based` building project like this one. Firstly, you may install `cmake`:

```bash
$ sudo apt-get install cmake
```

And then generate the makefiles from the project root directory:

```bash
$ cmake .
```

You could specify the type of build, 'Debug' or 'Release', for example:

```bash
$ cmake -DCMAKE_BUILD_TYPE=Debug .
$ cmake -DCMAKE_BUILD_TYPE=Release .
```

You could also change the compilers used:

```bash
$ cmake -DCMAKE_CXX_COMPILER=/usr/bin/g++     -DCMAKE_C_COMPILER=/usr/bin/gcc
```

or

```bash
$ cmake -DCMAKE_CXX_COMPILER=/usr/bin/clang++ -DCMAKE_C_COMPILER=/usr/bin/clang
```

### Requirements

Check the requirements described in the building `dockerfile` (`./Dockerfile.build`) as well as all the ascendant docker images it inherits from:

```
h2agent builder (./Dockerfile.build)
   |
http2comm (https://github.com/testillano/http2comm)
   |
nghttp2 (https://github.com/testillano/nghttp2)
```

### Build

```bash
$ make
```

### Clean

```bash
$ make clean
```

### Documentation

```bash
$ make doc
```

```bash
$ cd docs/doxygen
$ tree -L 1
     .
     ├── Doxyfile
     ├── html
     ├── latex
     └── man
```

### Install

```bash
$ sudo make install
```

Optionally, you could specify another prefix for the installation:

```bash
$ cmake -DMY_OWN_INSTALL_PREFIX=$HOME/applications/http2
$ make install
```

### Uninstall

```bash
$ cat install_manifest.txt | sudo xargs rm
```

## Testing

### Unit test

Check the badge above to know the current coverage level.
You can execute the unit tests after building the project, for example for the `Release` target:

```bash
$ build/Release/bin/unit-test # native executable
- or -
$ docker run -it --rm -v ${PWD}/build/Release/bin/unit-test:/ut --entrypoint "/ut" ghcr.io/testillano/h2agent:latest # docker
```

To shortcut the docker run execution, the `./ut.sh` script at the root directory can also be used.
You may provide extra arguments to the Google Test executable, for example:

```bash
$ ./ut.sh --gtest_list_tests # to list the available tests
$ ./ut.sh --gtest_filter=Transform_test.ProvisionWithResponseBodyAsString # to filter and run 1 specific test
$ ./ut.sh --gtest_filter=Transform_test.* # to filter and run 1 specific suite
etc.
```

#### Coverage

Coverage reports can be generated using `./tools/coverage.sh`:

```bash
./tools/coverage.sh [ut|ct|all]
```

- `ut`: unit test coverage only (default for CI).
- `ct`: component test coverage only (requires a Kubernetes cluster).
- `all`: combined UT + CT coverage (default).

Reports are generated in:

- `coverage/ut/`: unit test coverage.
- `coverage/ct/`: component test coverage.
- `coverage/combined/`: combined coverage.

The script builds Docker images from `Dockerfile.coverage.ut` and `Dockerfile.coverage.ct`, using `lcov` for instrumentation. A `firefox` instance is launched to display the report.

Both `ubuntu` and `alpine` base images are supported.

### Component test

Component testing is based on the `pytest` framework. Just execute `ct/test.sh` to deploy the component test chart. Some cloud-native technologies are required: `docker`, `kubectl`, `minikube` and `helm`, for example:
```bash
$ docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:02:56 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:01:02 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.6
  GitCommit:        10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc:
  Version:          1.1.2
  GitCommit:        v1.1.2-0-ga916309
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

$ minikube version
minikube version: v1.23.2
commit: 0a0ad764652082477c00d51d2475284b5d39ceed

$ helm version
version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.16.9"}
```

### Benchmarking test

This test is useful to identify possible memory leaks, process crashes or performance degradation introduced by new fixes or features.

Reference:

* VirtualBox VM with Linux Bionic (Ubuntu 18.04.3 LTS).
* Running on an Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz.
* Memory size: 15 GiB.

Load testing is done with [h2load](https://nghttp2.org/documentation/h2load-howto.html) using the helper script `benchmark/start.sh` (check `-h|--help` for more information). Test scenarios are defined as profiles under `benchmark/tests/`, organized by mode:

- `tests/server/<name>/`: benchmarks the h2agent **server**: h2load sends traffic, and the monitor tracks the h2agent process.
- `tests/client/<name>/`: benchmarks the h2agent **client**: the h2agent under test sends traffic through client provisions against its own server mock (fixture), and the monitor tracks the same process.

The mode is auto-detected from the profile directory. Use `--list` to see the available profiles and `--test <name>` to select one (defaults to `default`).

Also, reports are generated as markdown files under the profile's `reports/` subdirectory, including test metadata, resource usage (CPU, RSS) and prometheus counter deltas.

#### Considerations

* `h2agent` could, for example, be started with 5 worker threads to discard application bottlenecks.
* Add histogram boundaries to better classify internal answer latencies for [metrics](#OAM).
* Data storage is disabled in the script by default to prevent memory from growing and to improve server response times (remember that storage shall be kept when provisions require data persistence).
* In general, even with high traffic rates, you could get sneaky snapshots by just enabling and then quickly disabling data storage, for example using the [function helpers](#Helper-functions): `server_data_configuration --keep-all && server_data_configuration --discard-all`
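`benchmark/start.sh` asks its configuration questions interactively, but every prompt can be pre-answered through an environment variable (the variable names are printed by the script itself next to each prompt), so a fully non-interactive run can be prepared like this:

```shell
# Pre-answer the benchmark/start.sh prompts (variable names as printed by the script).
export H2AGENT__FILE_MANAGER_ENABLE_READ_CACHE_CONFIGURATION=true
export H2AGENT__SERVER_TRAFFIC_IGNORE_REQUEST_BODY_CONFIGURATION=false
export H2AGENT__SERVER_TRAFFIC_DYNAMIC_REQUEST_BODY_ALLOCATION_CONFIGURATION=false
export H2AGENT__DATA_STORAGE_CONFIGURATION=discard-all
export H2AGENT__DATA_PURGE_CONFIGURATION=disable-purge
export H2AGENT__BIND_ADDRESS=0.0.0.0
export H2AGENT__RESPONSE_DELAY_MS=0

# benchmark/start.sh -y   # requires a running h2agent (not executed here)
```

This is handy for CI pipelines, where no terminal is attached to answer the prompts.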
\"${OPTS[@]}\" # native executable\n- or -\n$ docker run --rm -it --network=host -v $(pwd -P):$(pwd -P) ghcr.io/testillano/h2agent:latest \"${OPTS[@]}\" # docker\n- or -\n$ XTRA_ARGS=\"-v $(pwd -P):$(pwd -P)\" ./run.sh # benchmark options are provided within run.sh script\n```\n\nIn other shell we launch the benchmark test:\n\n```bash\n$ benchmark/start.sh --list\n\nAvailable test profiles:\n\n  [server]\n    default                      Representative mix: vault, RegexReplace, timestamp, variable chaining and math.\n    ipv4-with-regexreplace       IPv4 construction from phone number using RegexReplace filter.\n    ipv4-with-split              IPv4 construction from phone number using Split filter.\n    legacy                       Original benchmark provision: large JSON response body with file I/O.\n    light                        Minimal echo: server returns a static JSON body with no transforms.\n\n  [client]\n    default                      3-step session flow (create, update, delete) against a server mock.\n\n$ benchmark/start.sh -y\n\n\nTest profile: default [server]\nDescription:  Representative mix: JSON response with vault, RegexReplace URI extraction, timestamp, variable chaining and math expression. 
No file I/O.\nMethod:       POST\nURI:          /app/v1/load-test/id-21\nBody:         {\"id\":\"1a8b8863\",\"name\":\"Ada Lovelace\",\"age\":198}\nHeaders:      {\"content-type\":\"application/json\"}\n\nInput File manager configuration to enable read cache (true|false)\n (or set 'H2AGENT__FILE_MANAGER_ENABLE_READ_CACHE_CONFIGURATION' to be non-interactive) [true]:\ntrue\n\nInput Server configuration to ignore request body (true|false)\n (or set 'H2AGENT__SERVER_TRAFFIC_IGNORE_REQUEST_BODY_CONFIGURATION' to be non-interactive) [false]:\nfalse\n\nInput Server configuration to perform dynamic request body allocation (true|false)\n (or set 'H2AGENT__SERVER_TRAFFIC_DYNAMIC_REQUEST_BODY_ALLOCATION_CONFIGURATION' to be non-interactive) [false]:\nfalse\n\nInput Server data storage configuration (discard-all|discard-history|keep-all)\n (or set 'H2AGENT__DATA_STORAGE_CONFIGURATION' to be non-interactive) [discard-all]:\ndiscard-all\n\nInput Server data purge configuration (enable-purge|disable-purge)\n (or set 'H2AGENT__DATA_PURGE_CONFIGURATION' to be non-interactive) [disable-purge]:\ndisable-purge\n\nInput H2agent endpoint address\n (or set 'H2AGENT__BIND_ADDRESS' to be non-interactive) [0.0.0.0]:\n0.0.0.0\n\nInput H2agent response delay in milliseconds\n (or set 'H2AGENT__RESPONSE_DELAY_MS' to be non-interactive) [0]:\n0\nstarting benchmark...\nspawning thread #0: 1 total client(s). 
100000 total requests\nApplication protocol: h2c\nprogress: 10% done\nprogress: 20% done\nprogress: 30% done\nprogress: 40% done\nprogress: 50% done\nprogress: 60% done\nprogress: 70% done\nprogress: 80% done\nprogress: 90% done\nprogress: 100% done\n\nfinished in 784.31ms, 127501.09 req/s, 133.03MB/s\nrequests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout\nstatus codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx\ntraffic: 104.34MB (109407063) total, 293.03KB (300058) headers (space savings 95.77%), 102.33MB (107300000) data\n                     min         max         mean         sd        +/- sd\ntime for request:      230us     11.70ms       757us       215us    91.98%\ntime for connect:      136us       136us       136us         0us   100.00%\ntime to 1st byte:     1.02ms      1.02ms      1.02ms         0us   100.00%\nreq/s           :  127529.30   127529.30   127529.30        0.00   100.00%\n\nreal    0m0,790s\nuser    0m0,217s\nsys     0m0,073s\n+ set +x\n\nReport generated: benchmark/tests/default/reports/20260314_001500_h2load.md\n```\n\n## Execution of main agent\n\n### Command line\n\nYou may take a look at the `h2agent` command line by just typing the build path, for example for `Release` target using native executable:\n\n\u003cdetails\u003e\n\u003csummary\u003eh2agent --help\u003c/summary\u003e\n\n```bash\n$ build/Release/bin/h2agent --help\nh2agent - HTTP/2 Agent service\n\nUsage: h2agent [options]\n\nOptions:\n\n[--name \u003cname\u003e]\n  Application/process name. Used in prometheus metrics 'source' label. Defaults to 'h2agent'.\n\n[-l|--log-level \u003cDebug|Informational|Notice|Warning|Error|Critical|Alert|Emergency\u003e]\n  Set the logging level; defaults to warning.\n\n[-v|--verbose]\n  Output log traces on console.\n\n[--ipv6]\n  IP stack configured for IPv6. 
Defaults to IPv4.\n\n[-b|--bind-address \u003caddress\u003e]\n  Servers local bind \u003caddress\u003e (admin/traffic/prometheus); defaults to '0.0.0.0' (ipv4) or '::' (ipv6).\n\n[-a|--admin-port \u003cport\u003e]\n  Admin local \u003cport\u003e; defaults to 8074.\n\n[-p|--traffic-server-port \u003cport\u003e]\n  Traffic server local \u003cport\u003e; defaults to 8000. Set '0' (or negative) to\n  disable (mock server service is enabled by default).\n\n[-m|--traffic-server-api-name \u003cname\u003e]\n  Traffic server API name; defaults to empty.\n\n[-n|--traffic-server-api-version \u003cversion\u003e]\n  Traffic server API version; defaults to empty.\n\n[-w|--traffic-server-worker-threads \u003cthreads\u003e]\n  Number of traffic server worker threads; defaults to 1 (inline processing,\n  no queue dispatcher). When set to 1, requests are processed directly within\n  the nghttp2 I/O thread, which is optimal for fast provisions (e.g. regex\n  matching with static responses). When set above 1, a queue dispatcher model\n  is activated: I/O threads enqueue requests and worker threads process them\n  asynchronously. This helps when provision logic is slow (response delays,\n  file I/O, etc.) as it keeps I/O threads responsive. For trivial logic, the\n  dispatch overhead may negate any benefit.\n\n[--admin-server-worker-threads \u003cthreads\u003e]\n  Number of admin server worker threads; defaults to 33 (max waiters + 1).\n  Higher values allow more concurrent blocking wait requests on global\n  variables. Blocked threads consume no CPU (condition variable sleep).\n\n[--traffic-server-max-worker-threads \u003cthreads\u003e]\n  Maximum number of worker threads; defaults to '--traffic-server-worker-threads'.\n  When set higher, additional threads are created on demand to handle traffic\n  spikes. 
Only effective when queue dispatcher is active (workers \u003e 1).\n\n[-t|--traffic-server-io-threads \u003cthreads\u003e]\n  Number of nghttp2 traffic server I/O threads; defaults to 1.\n  Connections are assigned to threads round-robin, so multiple\n  threads only help when multiple client connections are used.\n  Admin server hardcodes 1 nghttp2 thread(s).\n\n[--traffic-server-queue-dispatcher-max-size \u003csize\u003e]\n  The queue dispatcher model (which is activated for more than 1 server worker)\n  schedules a initial number of threads which could grow up to a maximum value\n  (given by '--traffic-server-max-worker-threads').\n  Optionally, a basic congestion control algorithm can be enabled by mean providing\n  a non-negative value to this parameter. When the queue size grows due to lack of\n  consumption capacity, a service unavailable error (503) will be answered skipping\n  context processing when the queue size reaches the value provided; defaults to -1,\n  which means that congestion control is disabled.\n\n[-k|--traffic-server-key \u003cpath file\u003e]\n  Path file for traffic server key to enable SSL/TLS; insecured by default.\n\n[-d|--traffic-server-key-password \u003cpassword\u003e]\n  When using SSL/TLS this may provided to avoid 'PEM pass phrase' prompt at process\n  start.\n\n[-c|--traffic-server-crt \u003cpath file\u003e]\n  Path file for traffic server crt to enable SSL/TLS; insecured by default.\n\n[-s|--secure-admin]\n  When key (-k|--traffic-server-key) and crt (-c|--traffic-server-crt) are provided,\n  only traffic interface is secured by default. 
This option secures admin interface\n  reusing traffic configuration (key/crt/password).\n\n[--schema \u003cpath file\u003e]\n  Path file for optional startup schema configuration.\n\n[--vault \u003cpath file\u003e]\n  Path file for optional startup vault(s) configuration.\n\n[--traffic-server-matching \u003cpath file\u003e]\n  Path file for optional startup traffic server matching configuration.\n\n[--traffic-server-provision \u003cpath file\u003e]\n  Path file for optional startup traffic server provision configuration.\n\n[--traffic-client-endpoint \u003cpath file\u003e]\n  Path file for optional startup traffic client endpoint configuration.\n\n[--traffic-client-provision \u003cpath file\u003e]\n  Path file for optional startup traffic client provision configuration.\n\n[--traffic-server-ignore-request-body]\n  Ignores traffic server request body reception processing as optimization in\n  case that its content is not required by planned provisions (enabled by default).\n\n[--traffic-server-dynamic-request-body-allocation]\n  When data chunks are received, the server appends them into the final request body.\n  In order to minimize reallocations over internal container, a pre reserve could be\n  executed (by design, the maximum received request body size is allocated).\n  Depending on your traffic profile this could be counterproductive, so this option\n  disables the default behavior to do a dynamic reservation of the memory.\n\n[--discard-data]\n  Disables data storage for events processed (enabled by default).\n  This invalidates some features like FSM related ones (in-state, out-state)\n  or event-source transformations.\n  This affects to both mock server-data and client-data storages,\n  but normally both containers will not be used together in the same process instance.\n\n[--discard-data-key-history]\n  Disables data key history storage (enabled by default).\n  Only latest event (for each key '[client endpoint/]method/uri')\n  will be stored and will be 
accessible for further analysis.\n  This limits some features like FSM related ones (in-state, out-state)\n  or event-source transformations or client triggers.\n  Implicitly disabled by option '--discard-data'.\n  Ignored for server-unprovisioned events (for troubleshooting purposes).\n  This affects to both mock server-data and client-data storages,\n  but normally both containers will not be used together in the same process instance.\n\n[--disable-purge]\n  Skips events post-removal when a provision on 'purge' state is reached (enabled by default).\n\n  This affects to both mock server-data and client-data purge procedures,\n  but normally both flows will not be used together in the same process instance.\n\n  In server mode, purge clears events for the current data key (method + URI).\n  In client mode, purge clears all events accumulated during the entire state chain,\n  where each step typically targets a different method+URI (see docs/api/README.md).\n\n[--prometheus-port \u003cport\u003e]\n  Prometheus local \u003cport\u003e; defaults to 8080.\n\n[--prometheus-response-delay-seconds-histogram-boundaries \u003ccomma-separated list of doubles\u003e]\n  Bucket boundaries for response delay seconds histogram; no boundaries are defined by default.\n  Scientific notation is allowed, i.e.: \"100e-6,200e-6,300e-6,400e-6,1e-3,5e-3,10e-3,20e-3\".\n  This affects to both mock server and client processing time values, but normally both flows\n  will not be used together in the same process instance. 
On the server, it's primarily aimed\n  at controlling local bottlenecks, so it makes more sense to use it on the client endpoint.\n\n[--prometheus-message-size-bytes-histogram-boundaries \u003ccomma-separated list of doubles\u003e]\n  Bucket boundaries for Rx/Tx message size bytes histogram; no boundaries are defined by default.\n  This affects to both mock 'server internal/client external' message size values,\n  but normally both flows will not be used together in the same process instance.\n\n[--disable-metrics]\n  Disables prometheus scrape port (enabled by default).\n\n[--long-term-files-close-delay-usecs \u003cmicroseconds\u003e]\n  Close delay after write operation for those target files with constant paths provided.\n  Normally used for logging files: we should have few of them. By default, 1000000\n  usecs are configured. Delay is useful to avoid I/O overhead under normal conditions.\n  Zero value means that close operation is done just after writting the file.\n\n[--short-term-files-close-delay-usecs \u003cmicroseconds\u003e]\n  Close delay after write operation for those target files with variable paths provided.\n  Normally used for provision debugging: we could have multiple of them. Traffic rate\n  could constraint the final delay configured to avoid reach the maximum opened files\n  limit allowed. By default, it is configured to 0 usecs.\n  Zero value means that close operation is done just after writting the file.\n\n[--remote-servers-lazy-connection]\n  By default connections are performed when adding client endpoints.\n  This option configures remote addresses to be connected on demand.\n\n[--traffic-client-worker-threads \u003cthreads\u003e]\n  Number of traffic client worker threads per endpoint; defaults to 1.\n  Each worker creates its own HTTP/2 connection to the endpoint.\n  Requests are dispatched round-robin (sequence % threads) across\n  workers, parallelizing response processing (transforms, JsonConstraint,\n  etc.) 
while a single timer drives the configured provision rate (cps).\n\n[-V|--version]\n  Program version.\n\n[-h|--help]\n  This help.\n\nTypical use cases:\n\n  Mock server (fast static responses):\n    h2agent [--traffic-server-io-threads \u003cN\u003e]\n    Default settings are optimal. Use '-t \u003cN\u003e' with N matching the number\n    of client connections to distribute I/O load across threads.\n\n  Mock server (simulated latency or heavy transforms):\n    h2agent -w \u003cN\u003e [--traffic-server-max-worker-threads \u003cM\u003e]\n    Workers handle slow provisions without blocking I/O threads.\n    Add '--traffic-server-queue-dispatcher-max-size \u003cS\u003e' to enable\n    congestion control (503 responses when queue exceeds S).\n\n  Traffic client (load generator):\n    h2agent --traffic-server-port 0 --traffic-client-worker-threads \u003cN\u003e\n    Disable server with port 0. Each worker opens its own connection,\n    multiplying effective throughput.\n\n  Benchmark:\n    h2agent --verbose --prometheus-response-delay-seconds-histogram-boundaries\n      \"100e-6,200e-6,300e-6,400e-6,1e-3,5e-3,10e-3,20e-3\"\n    Enables detailed latency histograms for performance analysis.\n```\n\n\u003c/details\u003e\n\n## Execution of matching helper utility\n\nThis utility could be useful to test regular expressions before using them in provision objects (`requestUri` or transformation filters which use regular expressions).\n\n### Command line\n\nYou may take a look at the `matching-helper` command line by just typing the build path, for example for `Release` target using native executable:\n\n\u003cdetails\u003e\n\u003csummary\u003ematching-helper --help\u003c/summary\u003e\n\n```bash\n$ build/Release/bin/matching-helper --help\nUsage: matching-helper [options]\n\nOptions:\n\n-r|--regex \u003cvalue\u003e\n  Regex pattern value to match against.\n\n-t|--test \u003cvalue\u003e\n  Test string value to be matched.\n\n[-f|--fmt \u003cvalue\u003e]\n  Optional 
regex-replace output format.\n\n[-h|--help]\n  This help.\n\nExamples:\n   matching-helper --regex \"https://(\\w+).(com|es)/(\\w+)/(\\w+)\" \\\n                   --test \"https://github.com/testillano/h2agent\" --fmt 'User: $3; Project: $4'\n   matching-helper --regex \"(a\\|b\\|)([0-9]{10})\" --test \"a|b|0123456789\" --fmt '$2'\n   matching-helper --regex \"1|3|5|9\" --test 2\n```\n\n\u003c/details\u003e\n\nExecution example:\n\n```bash\n$ build/Release/bin/matching-helper --regex \"(a\\|b\\|)([0-9]{10})\" --test \"a|b|0123456789\" --fmt '$2'\n\nRegex: (a\\|b\\|)([0-9]{10})\nTest:  a|b|0123456789\nFmt:   $2\n\nMatch result: true\nFmt result  : 0123456789\n```\n\n## Execution of Arash Partow's helper utility\n\nThis utility could be useful to test [Arash Partow's](https://github.com/ArashPartow/exprtk) mathematical expressions before using them in provision objects (`math.*` source).\n\n### Command line\n\nYou may take a look at the `arashpartow-helper` command line by just typing the build path, for example for `Release` target using native executable:\n\n\u003cdetails\u003e\n\u003csummary\u003earashpartow-helper --help\u003c/summary\u003e\n\n```bash\n$ build/Release/bin/arashpartow-helper --help\nUsage: arashpartow-helper [options]\n\nOptions:\n\n-e|--expression \u003cvalue\u003e\n  Expression to be calculated.\n\n[-h|--help]\n  This help.\n\nExamples:\n   arashpartow-helper --expression \"(1+sqrt(5))/2\"\n   arashpartow-helper --expression \"404 == 404\"\n   arashpartow-helper --expression \"cos(3.141592)\"\n\nArash Partow help: https://raw.githubusercontent.com/ArashPartow/exprtk/master/readme.txt\n```\n\n\u003c/details\u003e\n\nExecution example:\n\n```bash\n$ build/Release/bin/arashpartow-helper --expression \"404 == 404\"\n\nExpression: 404 == 404\n\nResult: 1\n```\n\n## Execution of h2client utility\n\nThis utility could be useful to test simple HTTP/2 requests.\n\n### Command line\n\nYou may take a look at the `h2client` command line by just typing the build 
path, for example for `Release` target using native executable:\n\n\u003cdetails\u003e\n\u003csummary\u003eh2client --help\u003c/summary\u003e\n\n```bash\n$ build/Release/bin/h2client --help\nUsage: h2client [options]\n\nOptions:\n\n-u|--uri \u003cvalue\u003e\n URI to access.\n\n[-l|--log-level \u003cDebug|Informational|Notice|Warning|Error|Critical|Alert|Emergency\u003e]\n  Set the logging level; defaults to warning.\n\n[-v|--verbose]\n  Output log traces on console.\n\n[-t|--timeout-milliseconds \u003cvalue\u003e]\n  Time in milliseconds to wait for requests response. Defaults to 5000.\n\n[-m|--method \u003cPOST|GET|PUT|DELETE|HEAD\u003e]\n  Request method. Defaults to 'GET'.\n\n[--header \u003cvalue\u003e]\n  Header in the form 'name:value'. This parameter can occur multiple times.\n\n[-b|--body \u003cvalue\u003e]\n  Plain text for request body content.\n\n[--secure]\n Use secure connection.\n\n[--rc-probe]\n  Forwards HTTP status code into equivalent program return code.\n  So, any code greater than or equal to 200 and less than 400\n  indicates success and will return 0 (1 in other case).\n  This allows to use the client as HTTP/2 command probe in\n  kubernetes where native probe is only supported for HTTP/1.\n\n[-h|--help]\n  This help.\n\nExamples:\n   h2client --timeout 1 --uri http://localhost:8000/book/8472098362\n   h2client --method POST --header \"content-type:application/json\" --body '{\"foo\":\"bar\"}' --uri http://localhost:8000/data\n```\n\n\u003c/details\u003e\n\nExecution example:\n\n```bash\n$ build/Release/bin/h2client --timeout 1 --uri http://localhost:8000/book/8472098362\n\nClient endpoint:\n   Secure connection: false\n   Host:   localhost\n   Port:   8000\n   Method: GET\n   Uri: http://localhost:8000/book/8472098362\n   Path:   book/8472098362\n   Timeout for responses (ms): 5000\n\n\n Response status code: 200\n Response body: {\"author\":\"Ludwig von Mises\"}\n Response headers: [date: Sun, 27 Nov 2022 18:58:32 GMT]\n```\n\n## UDP 
utilities\n\n\u003e **Note:** Since `h2agent` now supports native HTTP/2 client capabilities and the `clientProvision` target (which triggers outgoing HTTP/2 flows directly from server transformations), most functional testing scenarios that previously required the UDP channel can be solved entirely within `h2agent`. The UDP tools below are primarily useful for **benchmarking** (controlled-rate load generation via `udp-client` + `udp-server-h2client`) and for **integration with external non-HTTP systems** that need to react to `h2agent` events through the `udpSocket.*` target. Among them, `udp-server-h2client` remains the most versatile, as it bridges UDP events to HTTP/2 requests towards isolated services.\n\n## Execution of udp-server utility\n\nThis utility could be useful to test UDP messages sent by `h2agent` (`udpSocket.*` target).\nYou can also use netcat in bash to generate messages easily:\n\n```bash\necho -n \"\u003cmessage here\u003e\" | nc -u -q0 -w1 -U /tmp/udp.sock\n```\n\n### Command line\n\nYou may take a look at the `udp-server` command line by just typing the build path, for example for `Release` target using native executable:\n\n\u003cdetails\u003e\n\u003csummary\u003eudp-server --help\u003c/summary\u003e\n\n```bash\n$ build/Release/bin/udp-server --help\nUsage: udp-server [options]\n\nOptions:\n\n-k|--udp-socket-path \u003cvalue\u003e\n  UDP unix socket path.\n\n[-e|--print-each \u003cvalue\u003e]\n  Print messages each specific amount (must be positive). 
Defaults to 1.\n  Setting datagrams estimated rate should take 1 second/printout and output\n  frequency gives an idea about UDP receptions rhythm.\n\n[-h|--help]\n  This help.\n\nExamples:\n   udp-server --udp-socket-path /tmp/udp.sock\n\nTo stop the process you can send UDP message 'EOF':\n   echo -n EOF | nc -u -q0 -w1 -U /tmp/udp.sock\n```\n\n\u003c/details\u003e\n\nExecution example:\n\n```bash\n$ build/Release/bin/udp-server --udp-socket-path /tmp/udp.sock\n\nPath: /tmp/udp.sock\nPrint each: 1 message(s)\n\nRemember:\n To stop process: echo -n EOF | nc -u -q0 -w1 -U /tmp/udp.sock\n\n\nWaiting for UDP messages...\n\n\u003ctimestamp\u003e                         \u003csequence\u003e      \u003cudp datagram\u003e\n___________________________________ _______________ _______________________________\n2023-08-02 19:16:36.340339 GMT      0               555000000\n2023-08-02 19:16:37.340441 GMT      1               555000001\n2023-08-02 19:16:38.340656 GMT      2               555000002\n\nExiting (EOF received) !\n```\n\n## Execution of udp-server-h2client utility\n\nThis utility could be useful to test UDP messages sent by `h2agent` (`udpSocket.*` target).\nYou can also use netcat in bash to generate messages easily:\n\n```bash\necho -n \"\u003cmessage here\u003e\" | nc -u -q0 -w1 -U /tmp/udp.sock\n```\n\nThe difference from the previous `udp-server` utility is that this one can actively trigger HTTP/2 requests for every UDP reception.\nThis makes it possible to coordinate actions with `h2agent` acting as a server, creating outgoing requests linked to its receptions through the UDP channel served by this external tool.\nPowerful parsing capabilities allow creating any kind of request dynamically, using `@{udp[.n]}` patterns for the configured URI, headers and body.\nPrometheus metrics are also available to measure the HTTP/2 performance towards the remote server (check them, for example, with `curl http://0.0.0.0:8081/metrics`).\n\n### Command line\n\nYou may take a look at the 
`udp-server-h2client` command line by just typing the build path, for example for `Release` target using native executable:\n\n\u003cdetails\u003e\n\u003csummary\u003eudp-server-h2client --help\u003c/summary\u003e\n\n```bash\n$ build/Release/bin/udp-server-h2client --help\nUsage: udp-server-h2client [options]\n\nOptions:\n\nUDP server will trigger one HTTP/2 request for every reception, replacing optionally\ncertain patterns on method, uri, headers and/or body provided. Implemented patterns:\nfollowing:\n\n   @{udp}:      replaced by the whole UDP datagram received.\n   @{udp8}:     selects the 8 least significant digits in the UDP datagram, and may\n                be used to build valid IPv4 addresses for a given sequence.\n   @{udp.\u003cn\u003e}:  UDP datagram received may contain a pipe-separated list of tokens\n                and this pattern will be replaced by the nth one.\n   @{udp8.\u003cn\u003e}: selects the 8 least significant digits in each part if exists.\n\nTo stop the process you can send UDP message 'EOF'.\nTo print accumulated statistics you can send UDP message 'STATS' or stop/interrupt the process.\n\n[--name \u003cname\u003e]\n  Application/process name. Used in prometheus metrics 'source' label. Defaults to 'udp-server-h2client'.\n\n-k|--udp-socket-path \u003cvalue\u003e\n  UDP unix socket path.\n\n[-o|--udp-output-socket-path \u003cvalue\u003e]\n  UDP unix output socket path. Written for every response received. This socket must be previously created by UDP server (bind()).\n  Try this bash recipe to create an UDP server socket (or use another udp-server-h2client instance for that):\n     $ path=\"/tmp/udp2.sock\"\n     $ rm -f ${path}\n     $ socat -lm -ly UNIX-RECV:\"${path}\" STDOUT\n\n[--udp-output-value \u003cvalue\u003e]\n  UDP datagram to be written on output socket, for every response received. By default,\n  original received datagram is used (@{udp}). 
Same patterns described above are valid for this parameter.\n\n[-w|--workers \u003cvalue\u003e]\n  Number of worker threads to post outgoing requests and manage asynchronous timers (timeout, pre-delay).\n  Defaults to system hardware concurrency (8), however 2 could be enough.\n\n[-e|--print-each \u003cvalue\u003e]\n  Print UDP receptions each specific amount (must be positive). Defaults to 1.\n  Setting datagrams estimated rate should take 1 second/printout and output\n  frequency gives an idea about UDP receptions rhythm.\n\n[-l|--log-level \u003cDebug|Informational|Notice|Warning|Error|Critical|Alert|Emergency\u003e]\n  Set the logging level; defaults to warning.\n\n[-v|--verbose]\n  Output log traces on console.\n\n[-t|--timeout-milliseconds \u003cvalue\u003e]\n  Time in milliseconds to wait for requests response. Defaults to 5000.\n\n[-d|--send-delay-milliseconds \u003cvalue\u003e]\n  Time in seconds to delay before sending the request. Defaults to 0.\n  It also supports negative values which turns into random number in\n  the range [0,abs(value)].\n\n[-m|--method \u003cvalue\u003e]\n  Request method. Defaults to 'GET'. After optional parsing, should be one of:\n  POST|GET|PUT|DELETE|HEAD.\n\n-u|--uri \u003cvalue\u003e\n URI to access.\n\n[--header \u003cvalue\u003e]\n  Header in the form 'name:value'. This parameter can occur multiple times.\n\n[-b|--body \u003cvalue\u003e]\n  Plain text for request body content.\n\n[--secure]\n Use secure connection.\n\n[--prometheus-bind-address \u003caddress\u003e]\n  Prometheus local bind \u003caddress\u003e; defaults to 0.0.0.0.\n\n[--prometheus-port \u003cport\u003e]\n  Prometheus local \u003cport\u003e; defaults to 8081. 
Value of -1 disables metrics.\n\n[--prometheus-response-delay-seconds-histogram-boundaries \u003ccomma-separated list of doubles\u003e]\n  Bucket boundaries for response delay seconds histogram; no boundaries are defined by default.\n  Scientific notation is allowed, i.e.: \"100e-6,200e-6,300e-6,400e-6,1e-3,5e-3,10e-3,20e-3\".\n\n[--prometheus-message-size-bytes-histogram-boundaries \u003ccomma-separated list of doubles\u003e]\n  Bucket boundaries for Tx/Rx message size bytes histogram; no boundaries are defined by default.\n\n[-h|--help]\n  This help.\n\nExamples:\n   udp-server-h2client --udp-socket-path /tmp/udp.sock --print-each 1000 --timeout-milliseconds 1000 --uri http://0.0.0.0:8000/book/@{udp} --body \"ipv4 is @{udp8}\"\n   udp-server-h2client --udp-socket-path /tmp/udp.sock --print-each 1000 --method POST --uri http://0.0.0.0:8000/data --header \"content-type:application/json\" --body '{\"book\":\"@{udp}\"}'\n\n   To provide body from file, use this trick: --body \"$(jq -c '.' long-body.json)\"\n```\n\n\u003c/details\u003e\n\nExecution example:\n\n```bash\n$ build/Release/bin/udp-server-h2client -k /tmp/udp.sock -t 3000 -d -300 -u http://0.0.0.0:8000/data --header \"content-type:application/json\" -b '{\"foo\":\"@{udp}\"}'\n\nApplication/process name: udp-server-h2client\nUDP socket path: /tmp/udp.sock\nWorkers: 80\nPrint each: 1 message(s)\nLog level: Warning\nVerbose (stdout): false\nWorkers: 10\nMaximum workers: 40\nCongestion control is disabled\nPrometheus local bind address: 0.0.0.0\nPrometheus local port: 8081\nClient endpoint:\n   Secure connection: false\n   Host:   0.0.0.0\n   Port:   8000\n   Method: GET\n   Uri: http://0.0.0.0:8000/data\n   Path:   data\n   Headers: [content-type: application/json]\n   Body: {\"foo\":\"@{udp}\"}\n   Timeout for responses (ms): 3000\n   Send delay for requests (ms): random in [0,300]\n   Builtin patterns used: @{udp}\n\nRemember:\n To get prometheus metrics:       curl http://localhost:8081/metrics\n To send 
ad-hoc UDP message:      echo -n \u003cdata\u003e | nc -u -q0 -w1 -U /tmp/udp.sock\n To print accumulated statistics: echo -n STATS  | nc -u -q0 -w1 -U /tmp/udp.sock\n To stop process:                 echo -n EOF    | nc -u -q0 -w1 -U /tmp/udp.sock\n\n\nWaiting for UDP messages...\n\n\u003ctimestamp\u003e                         \u003csequence\u003e      \u003cudp datagram\u003e                  \u003caccumulated status codes\u003e\n___________________________________ _______________ _______________________________ ___________________________________________________________\n2023-08-02 19:16:36.340339 GMT      0               555000000                       0 2xx, 0 3xx, 0 4xx, 0 5xx, 0 timeouts, 0 connection errors\n2023-08-02 19:16:37.340441 GMT      1               555000001                       1 2xx, 0 3xx, 0 4xx, 0 5xx, 0 timeouts, 0 connection errors\n2023-08-02 19:16:38.340656 GMT      2               555000002                       2 2xx, 0 3xx, 0 4xx, 0 5xx, 0 timeouts, 0 connection errors\n\nExiting (EOF received) !\n\nstatus codes: 3 2xx, 0 3xx, 0 4xx, 0 5xx, 0 timeouts, 0 connection errors\n```\n\n## Execution of udp-client utility\n\nThis utility could be useful to test `udp-server` and, especially, the `udp-server-h2client` tool.\nYou can also use netcat in bash to generate messages easily, but this tool provides high load. It manages a monotonically increasing sequence within a given range, and allows parsing it over a pattern to build the generated datagram. 
We can even provide a list of patterns, from which one will be picked at random.\nAlthough we could launch multiple UDP clients towards the UDP server (such server must be unique due to the connectionless nature of the UDP protocol), it is probably unnecessary: this client is fast enough to generate the required load.\n\n### Command line\n\nYou may take a look at the `udp-client` command line by just typing the build path, for example for `Release` target using native executable:\n\n\u003cdetails\u003e\n\u003csummary\u003eudp-client --help\u003c/summary\u003e\n\n```bash\n$ build/Release/bin/udp-client --help\nUsage: udp-client [options]\n\nOptions:\n\n-k|--udp-socket-path \u003cvalue\u003e\n  UDP unix socket path.\n\n[--eps \u003cvalue\u003e]\n  Events per second. Floats are allowed (0.016667 would mean 1 tick per minute),\n  negative number means unlimited (depends on your hardware) and 0 is prohibited.\n  Defaults to 1.\n\n[-r|--rampup-seconds \u003cvalue\u003e]\n  Rampup seconds to reach 'eps' linearly. Defaults to 0.\n  Only available for speeds over 1 event per second.\n\n[-i|--initial \u003cvalue\u003e]\n  Initial value for datagram. Defaults to 0.\n\n[-f|--final \u003cvalue\u003e]\n  Final value for datagram. Defaults to unlimited.\n\n[--template \u003cvalue\u003e]\n  Template to build UDP datagram (patterns '@{seq}' and '@{seq[\u003c+|-\u003e\u003cinteger\u003e]}'\n  will be replaced by sequence number and shifted sequences respectively).\n  Defaults to '@{seq}'.\n  This parameter can occur multiple times to create a random set. For example,\n  passing '--template foo --template foo --template bar', there is a probability\n  of 2/3 to select 'foo' and 1/3 to select 'bar'.\n\n[-e|--print-each \u003cvalue\u003e]\n  Print messages each specific amount (must be positive). 
Defaults to 1.\n\n[-h|--help]\n  This help.\n\nExamples:\n   udp-client --udp-socket-path /tmp/udp.sock --eps 3500 --initial 555000000 --final 555999999 --template \"foo/bar/@{seq}\"\n   udp-client --udp-socket-path /tmp/udp.sock --eps 3500 --initial 555000000 --final 555999999 --template \"@{seq}|@{seq-8000}\"\n   udp-client --udp-socket-path /tmp/udp.sock --final 0 --template STATS # sends 1 single datagram 'STATS' to the server\n\nTo stop the process, just interrupt it.\n```\n\n\u003c/details\u003e\n\nExecution example:\n\n```bash\n$ build/Release/bin/udp-client --udp-socket-path /tmp/udp.sock --eps 1000 --initial 555000000 --print-each 1000\n\nPath: /tmp/udp.sock\nPrint each: 1 message(s)\nRange: [0, 18446744073709551615]\nPattern: @{seq}\nEvents per second: 1000\nRampup (s): 0\n\n\nGenerating UDP messages...\n\n\u003ctimestamp\u003e                         \u003ctime(s)\u003e \u003csequence\u003e      \u003cudp datagram\u003e\n___________________________________ _________ _______________ _______________________________\n2023-08-02 19:16:36.340339 GMT      0         0               555000000\n2023-08-02 19:16:37.340441 GMT      1         1000            555000999\n2023-08-02 19:16:38.340656 GMT      2         2000            555001999\n...\n\n```\n\n## Working with unix sockets and docker containers\n\nIn former sections we described the UDP utilities available in the `h2agent` project, but we ran them natively. As they are packaged into the `h2agent` docker image, they can also be launched as docker containers by selecting the appropriate entry point. The only thing to take into account is that the unix socket between the UDP server (`udp-server` or `udp-server-h2client`) and the client (`udp-client`) must be shared. 
This can be done in two alternative ways:

* Executing client and server within the same container.
* Executing them in separate containers (recommended, following the docker best practice "one container - one process").

Taking `udp-server` and `udp-client` as an example:

In the **first case**, we will launch the second one (the client) in foreground using `docker exec`:

```bash
$ docker run -d --rm -it --name udp --entrypoint /opt/udp-server ghcr.io/testillano/h2agent:latest -k /tmp/udp.sock
$ docker exec -it udp /opt/udp-client -k /tmp/udp.sock # in foreground, client output is shown
```

If the client is launched in background (-d) you won't be able to follow the process output (`docker logs -f udp` shows the server output because it was launched first).

In the **second case**, which is the recommended one, we need to create an external volume:

```bash
$ docker volume create --name=socketVolume
```

Then we can run the containers in separate shells (or both in background with '-d', because now they have independent docker logs):

```bash
$ docker run --rm -it -v socketVolume:/tmp --entrypoint /opt/udp-server ghcr.io/testillano/h2agent:latest -k /tmp/udp.sock
```

```bash
$ docker run --rm -it -v socketVolume:/tmp --entrypoint /opt/udp-client ghcr.io/testillano/h2agent:latest -k /tmp/udp.sock
```

This can also be done with `docker-compose`:

```yaml
version: '3.3'

volumes:
  socketVolume:
    external: true

services:
  udpServer:
    image: ghcr.io/testillano/h2agent:latest
    volumes:
      - socketVolume:/tmp
    entrypoint: ["/opt/udp-server"]
    command: ["-k", "/tmp/udp.sock"]

  udpClient:
    image: ghcr.io/testillano/h2agent:latest
    depends_on:
      - udpServer
    volumes:
      - socketVolume:/tmp
    entrypoint: ["/bin/bash", "-c"] # we can also use a bash entrypoint to ease the command:
    command: >
      "/opt/udp-client -k /tmp/udp.sock"
```

## Execution with TLS support

`H2agent` server mock supports `SSL/TLS`. You may use the helpers located under `tools/ssl` to create key and certificate files for client and server:

```bash
$ ls tools/ssl/
create_ca-signed-certificates.sh  create_self-signed_certificates.sh
```

Once executed, a hint will show how to proceed, mainly adding these parameters to the `h2agent`:

```bash
--traffic-server-key <server key file> --traffic-server-crt <server certificate file> --traffic-server-key-password <key password to avoid PEM Phrase prompt on startup>
```

As well as some `curl` hints (secure and insecure examples).

## Metrics

Based on the [prometheus data model](https://prometheus.io/docs/concepts/data_model/) and implemented with the [prometheus-cpp library](https://github.com/jupp0r/prometheus-cpp), these metrics are collected and exposed through the server scraping port (`8080` by default, but configurable on the [command line](#Command-line) by means of the `--prometheus-port` option) and can be retrieved using Prometheus or compatible visualization software like [Grafana](https://prometheus.io/docs/visualization/grafana/), or just by browsing `http://localhost:8080/metrics`.

More information about the implemented metrics is available [here](#OAM).
To play with grafana automation in the `h2agent` project, go to the `./tools/grafana` directory and check its [PLAY_GRAFANA.md](./tools/grafana/PLAY_GRAFANA.md) file to learn more.

## Traces and printouts

Traces are managed by `syslog` by default, but can be shown verbosely on standard output (`--verbose`) depending on the design level of each trace and the current level assigned.
For example:\n\n```bash\n$ ./h2agent --verbose \u0026\n[1] 27407\n\n\n88            ad888888b,\n88           d8\"     \"88                                                     ,d\n88                   a8P                                                     88\n88,dPPYba,        ,d8P\"   ,adPPYYba,   ,adPPYb,d8   ,adPPYba,  8b,dPPYba,  MM88MMM\n88P'    \"8a     a8P\"      \"\"     `Y8  a8\"    `Y88  a8P_____88  88P'   `\"8a   88\n88       88   a8P'        ,adPPPPP88  8b       88  8PP\"\"\"\"\"\"\"  88       88   88\n88       88  d8\"          88,    ,88  \"8a,   ,d88  \"8b,   ,aa  88       88   88,\n88       88  88888888888  `\"8bbdP\"Y8   `\"YbbdP\"Y8   `\"Ybbd8\"'  88       88   \"Y888\n                                       aa,    ,88\n                                        \"Y8bbdP\"\n\nhttps://github.com/testillano/h2agent\n\nQuick Start:    https://github.com/testillano/h2agent#quick-start\nPrezi overview: https://prezi.com/view/RFaiKzv6K6GGoFq3tpui/\nChatGPT:        https://github.com/testillano/h2agent/blob/master/README.md#questions-and-answers\n\n\n20/11/22 20:53:33 CET: Starting h2agent\nLog level: Warning\nVerbose (stdout): true\nIP stack: IPv4\nAdmin local port: 8074\nTraffic server (mock server service): enabled\nTraffic server local bind address: 0.0.0.0\nTraffic server local port: 8000\nTraffic server api name: \u003cnone\u003e\nTraffic server api version: \u003cnone\u003e\nTraffic server worker threads: 1\nTraffic server key password: \u003cnot provided\u003e\nTraffic server key file: \u003cnot provided\u003e\nTraffic server crt file: \u003cnot provided\u003e\nSSL/TLS disabled: both key \u0026 certificate must be provided\nTraffic secured: no\nAdmin secured: no\nSchema configuration file: \u003cnot provided\u003e\nVault configuration file: \u003cnot provided\u003e\nTraffic server process request body: true\nTraffic server pre reserve request body: true\nData storage: enabled\nData key history storage: enabled\nPurge execution: enabled\nTraffic 
server matching configuration file: \u003cnot provided\u003e\nTraffic server provision configuration file: \u003cnot provided\u003e\nPrometheus local bind address: 0.0.0.0\nPrometheus local port: 8080\nLong-term files close delay (usecs): 1000000\nShort-term files close delay (usecs): 0\nRemote servers lazy connection: false\n\n$ kill $!\n20/11/22 20:53:37 CET: [Warning]|/code/src/main.cpp:207(sighndl)|Signal received: 15\n20/11/22 20:53:37 CET: [Warning]|/code/src/main.cpp:194(myExit)|Terminating with exit code 1\n20/11/22 20:53:37 CET: [Warning]|/code/src/main.cpp:148(stopAgent)|Stopping h2agent timers service at 20/11/22 20:53:37 CET\n20/11/22 20:53:37 CET: [Warning]|/code/src/main.cpp:154(stopAgent)|Stopping h2agent admin service at 20/11/22 20:53:37 CET\n20/11/22 20:53:37 CET: [Warning]|/code/src/main.cpp:161(stopAgent)|Stopping h2agent traffic service at 20/11/22 20:53:37 CET\n20/11/22 20:53:37 CET: [Warning]|/code/src/main.cpp:198(myExit)|Stopping logger\n\n[1]+  Exit 1                  h2agent --verbose\n```\n\n## Training\n\n### Prepare the environment\n\n#### Working in project checkout\n\n##### Requirements\n\nSome utilities may be required, so please try to install them on your system. 
For example:

```bash
$ sudo apt-get install netcat
$ sudo apt-get install curl
$ sudo apt-get install jq
$ sudo apt-get install dos2unix
```

##### Starting agent

Then you may build the project images and start the `h2agent` with its docker image:

```bash
$ ./build.sh --auto # builds project images
$ ./run.sh --verbose # starts the agent with docker by means of a helper script
```

Or build the native executable and run it from the shell:

```bash
$ ./build-native.sh # builds executable
$ build/Release/bin/h2agent --verbose # starts executable
```

#### Working in training container

The training image is already available at the `github container registry` and `docker hub` for every repository `tag`, and also for master as `latest`:

```bash
$ docker pull ghcr.io/testillano/h2agent_training:<tag>
```

Both `ubuntu` and `alpine` base images are supported, but the official image uploaded is the one based on `ubuntu`.

You may also find it useful to run the training image by means of the helper script `./tools/training.sh`. This script builds and runs an image based on `./Dockerfile.training`, which adds the resources needed for training.
The image working directory is `/home/h2agent`, making the experience similar to working natively over the git checkout, with the main project executables provided through symbolic links.

If you are working in the training container, there is no need to build the project nor to install the requirements mentioned in the previous section; just execute the process in background:

```bash
bash-5.1# ls -lrt
total 12
drwxr-xr-x    5 root     root          4096 Dec 16 20:29 tools
drwxr-xr-x   12 root     root          4096 Dec 16 20:29 kata
drwxr-xr-x    2 root     root          4096 Dec 16 20:29 demo
lrwxrwxrwx    1 root     root            12 Dec 16 20:29 h2agent -> /opt/h2agent
bash-5.1# ./h2agent --verbose &
```

### Training resources

#### Questions and answers

A conversational bot is available in the `./tools/questions-and-answers` directory. It is implemented in python using *langchain* and *OpenAI* (ChatGPT) technology. The *Groq* model can also be used if the proper key is detected. Check its [README.md](./tools/questions-and-answers/README.md) file to learn more.

#### Play

A playground is available at the `./tools/play-h2agent` directory. It is designed to guide through a set of easy examples. Check its [README.md](./tools/play-h2agent/README.md) file to learn more.

#### Test

A GUI tester implemented in python is available at the `./tools/test-h2agent` directory. It is designed to make quick interactions through the traffic and administrative interfaces. Check its [README.md](./tools/test-h2agent/README.md) file to learn more.

#### Demo

A demo is available at the `./demo` directory. It is designed to introduce the `h2agent` in a fun way with an easy use case. Open its [README.md](./demo/README.md) file to learn more.

#### Kata

A kata is available at the `./kata` directory. It is designed to guide through a set of exercises with increasing complexity.
Check its [README.md](./kata/README.md) file to learn more about.\n\n## Management interface\n\n`h2agent` listens on a specific management port (*8074* by default) for incoming requests, implementing a *REST API* to manage the process operation. Through the *API* we could program the agent behavior over *URI* path `/admin/v1/`.\n\nThe full API reference is documented using the [OpenAPI 3.1 specification](./docs/api/openapi.yaml) and rendered interactively at:\n\n**[https://testillano.github.io/h2agent/api/](https://testillano.github.io/h2agent/api/)**\n\nFor detailed conceptual documentation (matching algorithms, state machines, transformation pipeline with sources/targets/filters, triggering, data querying), see the [API User Guide](./docs/api/README.md).\n\nThe API is organized in the following groups:\n\n| Group | Endpoints | Description |\n|-------|-----------|-------------|\n| **schema** | `POST` `GET` `DELETE` `/admin/v1/schema` | Validation schemas for traffic checking |\n| **vault** | `POST` `GET` `DELETE` `/admin/v1/vault` | Shared variables between provisions |\n| | `GET` `/admin/v1/vault/\u003ckey\u003e/wait` | Block until variable changes (long-poll) |\n| **files** | `GET` `/admin/v1/files` | Processed files status |\n| **logging** | `GET` `PUT` `/admin/v1/logging` | Dynamic log level configuration |\n| **configuration** | `GET` `/admin/v1/configuration` | General process configuration |\n| **server/configuration** | `GET` `PUT` `/admin/v1/server/configuration` | Server request body reception settings |\n| **server-matching** | `POST` `GET` `/admin/v1/server-matching` | Traffic classification algorithm (FullMatching, FullMatchingRegexReplace, RegexMatching) |\n| **server-provision** | `POST` `GET` `DELETE` `/admin/v1/server-provision` | Server mock response behavior and transformation pipeline |\n| **server-data** | `GET` `PUT` `DELETE` `/admin/v1/server-data` | Server events storage and inspection |\n| **client-endpoint** | `POST` `GET` `DELETE` 
`/admin/v1/client-endpoint` | Remote server connection definitions |\n| **client-provision** | `POST` `GET` `DELETE` `/admin/v1/client-provision` | Client mock request behavior, triggering and transformation pipeline |\n| **client-data** | `GET` `PUT` `DELETE` `/admin/v1/client-data` | Client events storage and inspection |\n\n## Server-triggered client flows with serverEvent source\n\nWhen a server provision triggers a client provision (via `clientProvision.\u003cid\u003e` target), the client provision can read data directly from the originating server event using the `serverEvent` source. This avoids copying fields to intermediate `vault` variables and is the recommended approach when the client request must be built from server request data.\n\n**Source syntax:** `serverEvent.\u003cmethod\u003e.\u003curi\u003e.\u003cevent-number\u003e.\u003cjson-path\u003e`\n\n**Example:** A webhook receiver that forwards the notification body to another endpoint:\n\n```json\n// server-provision.json\n{\n  \"requestMethod\": \"POST\",\n  \"requestUri\": \"/api/v1/webhook/notify\",\n  \"responseCode\": 200,\n  \"responseBody\": {\"status\": \"received\"},\n  \"transform\": [\n    { \"source\": \"value.1\", \"target\": \"clientProvision.forwardNotification.initial\" }\n  ]\n}\n```\n\n```json\n// client-provision.json\n{\n  \"id\": \"forwardNotification\",\n  \"endpoint\": \"myServer\",\n  \"requestMethod\": \"POST\",\n  \"requestUri\": \"/api/v1/forward\",\n  \"requestHeaders\": {\"content-type\": \"application/json\"},\n  \"transform\": [\n    {\n      \"source\": \"serverEvent.POST./api/v1/webhook/notify.0.body\",\n      \"target\": \"request.body.json.object\"\n    }\n  ]\n}\n```\n\nThe client provision reads the last received body at `POST /api/v1/webhook/notify` and uses it as the outgoing request body — no `vault` needed. 
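To make the `serverEvent` addressing concrete, here is a tiny pure-Python model of an event store resolving the source used above (`serverEvent.POST./api/v1/webhook/notify.0.body`). The store layout and the 1-based handling of positive event numbers are illustrative assumptions, not h2agent internals; the only semantics taken from the text above is that event number 0 selects the last received event:

```python
# Hypothetical in-memory model of the server-data store (illustration only,
# not h2agent source code).
server_data = {
    ("POST", "/api/v1/webhook/notify"): [
        {"body": {"alarm": "minor"}},  # first event received
        {"body": {"alarm": "major"}},  # last event received
    ],
}

def server_event(method, uri, event_number, json_path):
    """Resolve a 'serverEvent' reference against the store above."""
    events = server_data[(method, uri)]
    # Event number 0 addresses the last received event (as in the example above);
    # positive numbers are treated here as 1-based history indices (assumption).
    event = events[-1] if event_number == 0 else events[event_number - 1]
    node = event
    for token in json_path.split("."):  # walk the dotted json path
        node = node[token]
    return node

print(server_event("POST", "/api/v1/webhook/notify", 0, "body"))  # last body
```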
A full runnable example is available at `tools/play-h2agent/examples/ServerTriggersClientViaServerEvent`.

> **Note:** `serverEvent` requires server-data storage to be enabled (default). If `--discard-data` is set, the source will fail and the transformation is skipped.

Similarly, server provisions can access client event history using the `clientEvent` source. This is useful when the server acts as an intermediary: it triggers a client flow and then uses the client response data to build its own response. The `clientEvent` source uses query-parameter addressing with `clientEndpointId`, `requestMethod`, `requestUri`, `eventNumber` and `eventPath` fields (see the [transformation pipeline](docs/api/README.md#transformation-pipeline) section for details).

## Dynamic response delays

The provisioning model allows configuring the response delay, in milliseconds, for a received request. This delay may be a fixed or a random value, but it is always a single, static delay overall. However, an additional mechanism, the dynamic delay, can be employed as a pseudo-notification procedure to suppress (or block) answers under specific conditions. This feature is activated via a vault variable with the naming format: `__core:response-delay-ms:<recvseq>`.

This variable holds a millisecond value that postpones the answer for a specific reception identifier. The dynamic delay is ignored (and the answer is immediate) if the variable does not exist, or holds an invalid (non-numeric) or zeroed value.
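The release condition just described (missing, zeroed or non-numeric value) can be sketched as a simple poll loop over a plain dictionary standing in for the vault. This is only a behavioral sketch; the real server re-arms internal timers rather than polling a dictionary:

```python
import time

def hold_answer(vault, recvseq):
    """Illustrative model: block the answer while the reserved entry
    '__core:response-delay-ms:<recvseq>' holds a positive numeric value;
    release on a missing, zeroed or invalid (non-numeric) entry."""
    key = f"__core:response-delay-ms:{recvseq}"
    while True:
        try:
            delay_ms = int(vault[key])
        except (KeyError, ValueError):
            return "released"  # variable missing or non-numeric: answer now
        if delay_ms <= 0:
            return "released"  # zeroed value also releases the wait
        time.sleep(delay_ms / 1000.0)  # wait, then re-check the vault entry
```

For instance, `hold_answer({"__core:response-delay-ms:1": "0"}, 1)` returns immediately, whereas a positive value keeps re-arming the wait until the entry is updated or removed.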
This procedure operates independently of the provisioned response delay, which is executed first (if configured).

The simplest way to use this feature is to configure the server provisioning to create this variable using the received server sequence as the unique reception identifier:

```json
{
  "requestMethod": "GET",
  "requestUri": "/foo/bar",
  "responseCode": 200,
  "responseDelayMs": 20,
  "responseHeaders": {
    "content-type": "text/html"
  },
  "responseBody": "done!",
  "transform": [
    {
      "source": "recvseq",
      "target": "var.recvseq"
    },
    {
      "source": "value.1",
      "target": "vault.__core:response-delay-ms:@{recvseq}"
    }
  ]
}
```

In that use case, when a GET request is received by the server, its own dynamic response delay is created with a value of 1 millisecond. You can confirm that by checking the vault entries held by the h2agent process:

```bash
$ curl -s --http2-prior-knowledge http://localhost:8074/admin/v1/vault | jq '.'
{
  "__core:response-delay-ms:1": "1"
}
```

The procedure is as follows: the answer is first delayed by 20 ms (as configured in the provisioning model). Then the dynamic mechanism begins: the server checks the variable's value, currently 1 ms, and keeps re-arming the timer expiration until the variable is updated to zero milliseconds (an invalid value will also release the wait loop), or until the variable is removed. Caution should be exercised with small delay values, as they could provoke a burst of timer events in the server while checking the answer condition, especially if that condition takes long to resolve (and the client-side request timeout is also large).

The server provisioning model can modify the variable upon subsequent receptions (it may need to correlate information from those requests with an auxiliary storage where the original server sequence is kept).
However, these updates are typically managed externally through the administrative interface, for example:

```bash
$ curl --http2-prior-knowledge -XPOST http://localhost:8074/admin/v1/vault -H'content-type:application/json' -d'{"__core:response-delay-ms:1":"0"}'
# or better:
$ curl --http2-prior-knowledge -XDELETE "http://localhost:8074/admin/v1/vault?name=__core:response-delay-ms:1"
```

How the server sequence is determined is a separate issue. For example, the provisioning configuration could write UDP datagrams (containing the server sequence), which might trigger other external operations that ultimately lead to those administrative operations.

## Blocking Wait on Vault Entries

The endpoint `GET /admin/v1/vault/<key>/wait` blocks until a variable changes, eliminating polling loops in test orchestration:

```bash
# Wait until SIGNAL equals "done" (30s timeout by default):
curl -sf --http2-prior-knowledge "http://localhost:8074/admin/v1/vault/SIGNAL/wait?value=done&timeoutMs=30000"

# Wait for any change on MY_FLAG:
curl -sf --http2-prior-knowledge "http://localhost:8074/admin/v1/vault/MY_FLAG/wait?timeoutMs=5000"
```

Response (200 when met, 408 on timeout, 429 if too many concurrent waiters):

```json
{
  "result": true,
  "key": "SIGNAL",
  "value": "done",
  "previousValue": "pending"
}
```

### Concurrency sizing

Each blocking wait occupies one admin worker thread (sleeping, no CPU cost). The default configuration provides 33 threads (max 32 concurrent waits + 1 free for normal admin operations).
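The sizing argument here models the number of concurrent waits as a binomial random variable. As a rough cross-check of the figures in the worked example that follows (n = 40 potential waiters, p = 0.6), the exact tail probabilities can be computed with a few lines of stdlib Python; note that the quoted percentages come from the normal approximation, so the exact values differ slightly:

```python
from math import comb, sqrt

def binom_sf(k, n, p):
    """P(X > k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

n, p = 40, 0.6                   # 200 tests * 20% using waits; p = 3s / 5s
mean = n * p                     # 24 expected concurrent waits
stddev = sqrt(n * p * (1 - p))   # ~3.1

for waiters in (24, 27, 30, 32):
    print(f"P(more than {waiters} concurrent waits) = {binom_sf(waiters, n, p):.4f}")
```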
The number of concurrent waits at any instant follows a binomial distribution.

**Example**: 200 parallel test cases, 20% use waits, each test lasts 5s with 3s spent in a blocking wait (p = 3/5 = 0.6):

```
Binomial(n=40, p=0.6): mean=24, stddev≈3.1

Concurrent waits    Probability of exceeding
       24           50%    (mean)
       27           16%    (mean + 1σ)
       30           2.6%   (mean + 2σ)
       32           0.5%   (max waiters)
```

With the default 33 admin threads, fewer than 0.5% of instants would hit the limit. Adjust `--admin-server-worker-threads` if your workload requires more.

## Reserved Vault Entries

The following variables are used internally by the server engine (`__core`) to manage critical functions such as dynamic response latency and stream error handling. These variables **must not be manipulated or overwritten** by user configurations for other purposes, to avoid interference.

### Naming convention

Reserved vault entry names follow this structure:

```
__<prefix>:<reserved-function>:<additional information>
```

| Element | Separator | Description |
| :--- | :--- | :--- |
| Reserved prefix | `__` (leading double underscore) | Marks the variable as internal/reserved (e.g. `__core`). |
| Hierarchical levels | `:` (colon) | Separates prefix, function and additional information. |
| Words within a level | `-` (hyphen) | Separates words inside a single level (kebab-case), although the additional-information part is free-form. |

This convention avoids dots (`.`), which are reserved as the separator between vault key and json-pointer path in the provision syntax (`vault.<key>.<path>`). Using colons ensures reserved entries can be read and written through both the REST admin API and provision transformations without restrictions.

---

### 1. Dynamic Response Delay

| Variable | Description |
| :--- | :--- |
| `__core:response-delay-ms:<recvseq>` | Stores a dynamic delay value (in milliseconds) that **postpones the response** to a request. It functions as a pseudo-notification mechanism. If the value is zero, non-numeric, or the variable is removed, the dynamic delay is ignored. <u>This is an "input" variable</u> as it is used to feed a procedure. |

#### Components

| Component | Example Value | Description |
| :--- | :--- | :--- |
| `__core` | N/A | **Reserved Prefix:** Indicates the variable belongs to the core system and is reserved. |
| `response-delay-ms` | N/A | **Reserved Function:** Identifies the variable as the dynamic response delay (in milliseconds). |
| `<recvseq>` | `12345` | **Sequence Identifier Value:** The unique reception sequence value (`server sequence`) for the specific request. |

---

### 2. Stream Error Indicator

| Variable | Description |
| :--- | :--- |
| `__core:stream-error-traffic-server:<recvseq>-<method>-<uri>` | Stores an [error code](https://datatracker.ietf.org/doc/html/rfc7540#section-7) indicating a stream or connection error detected by the traffic server. Used to notify external failure conditions that must be reflected in the response. <u>This is an "output" variable</u> as it is dumped automatically on errors. |

#### Components

| Component | Example Value | Description |
| :--- | :--- | :--- |
| `__core` | N/A | **Reserved Prefix:** Indicates the variable belongs to the core system and is reserved. |
| `stream-error-traffic-server` | N/A | **Reserved Function:** Identifies the variable as a stream or connection error indicator. |
| `<recvseq>` | `12345` | **Sequence Identifier Value:** The unique reception sequence value (`server sequence`) for the request. |
| `<method>` | `GET` | **Request Method Value:** The method used in the HTTP request (e.g., GET, POST, PUT, etc.). |
| `<uri>` | `/api/users` | **Request Uri Value:** The Uniform Resource Identifier (URI) or path of the HTTP request. |

## How it is delivered

`h2agent` is delivered in a `helm` chart called `h2agent` (`./helm/h2agent`), so you may integrate it in your regular `helm` chart deployments by just adding a few artifacts.
This chart deploys the `h2agent` pod based on the docker image, with the executable under `/opt` together with some helper functions to be sourced on the docker shell: `/opt/utils/helpers.bash` (the default directory path can be modified through the `utilsMountPath` helm chart value).
Take as an example the component test chart `ct-h2agent` (`./helm/ct-h2agent`), where the main chart is added as a file requirement, but it could also be added from a helm repository.

## How it integrates in a service

1. Add the project's [helm repository](https://testillano.github.io/helm/) with alias `erthelm`:

   ```bash
    helm repo add erthelm https://testillano.github.io/helm
   ```

2. Add one dependency to your `Chart.yaml` file for each service you want to mock with the `h2agent` service (use an alias when two or more dependencies are included).

   ```yaml
   dependencies:
     - name: h2agent
       version: 1.0.0
       repository: alias:erthelm
       alias: h2server

     - name: h2agent
       version: 1.0.0
       repository: alias:erthelm
       alias: h2server2

     - name: h2agent
       version: 1.0.0
       repository: alias:erthelm
       alias: h2client
   ```

3. Refer to `h2agent` values through the corresponding dependency alias, for example `.Values.h2server.image` to access the process repository and tag.

### Agent configuration files

Some [command line](#Command-line) arguments used by the `h2agent` process are files, so they can be provided by means of a `config map` (key & certificate for secured connections and matching/provision configuration files).

## Troubleshooting

### Helper functions

As mentioned [above](#How-it-is-delivered), the `h2agent` helm chart packages a helper functions script which is very useful for troubleshooting. This script is also available for native usage (`./tools/helpers.bash`):

```bash
$ source ./tools/helpers.bash
```

This will show a one-line help for every helper function.

### OAM

You could use any visualization framework to analyze the metrics information from `h2agent`, but perhaps the simplest way is to use the `metrics` function (just a direct `curl` command to the scrape port) from the [helper functions](#Helper-functions).

So, a direct scrape (for example towards the agent after its *component test*) would be something like this:

```bash
$ kubectl exec -it -n ns-ct-h2agent h2agent-55b9bd8d4d-2hj9z -- sh -c "curl http://localhost:8080/metrics"
```

On native execution, it is just a simple `curl` request:

```bash
$ curl http://localhost:8080/metrics
```

Implemented metrics can be divided into **counters**, **gauges** and **histograms**:

- **Counters**:
  - Processed requests (successful / errored (service unavailable, method not allowed, method not implemented, wrong api name or version, unsupported media type) / unsent (connection error))
  - Processed responses (successful/timed-out)
  - Non-provisioned requests
  - Purged contexts (successful/failed)
  - File system and Unix sockets operations (successful/failed)


- **Gauges and histograms**:

  - Response delay seconds
  - Message size bytes for receptions
  - Message size bytes for transmissions



The metric naming in this project includes a family prefix, which is the application name (`h2agent` or `udp_server_h2client`), and the endpoint category (`traffic_server`, `admin_server` and `traffic_client` for `h2agent`, and empty, as implicit, for `udp-server-h2client`). This convention and the labels provided (source, method, status_code, rst_stream_goaway_error_code, operation, result) are designed to ease metric identification when using monitoring systems like [grafana](https://www.grafana.com).

The **source** label identifies the source of information. It can be made dynamic by providing the `--name` parameter to the applications (so we could have `h2agent` by default, or `h2agentB` to be more specific, although grafana provides the `instance` label anyway), and it is always dynamic for client endpoints, whose provisioned names are also part of the source label.

In general: `source value = <process name>[_<endpoint identifier>]`, where the endpoint identifier is meaningful for `h2agent` clients, as multiple client endpoints can be provisioned.
For example:\n\n* No process name provided:\n\n  * h2agent (traffic_server/admin_server/file_manager/socket_manager, are part of the family name).\n  * h2agent_myClient (traffic_client is part of family name)\n  * udp-server-h2client (we omit endpoint identifier, as unique and implicit in default process name)\n* Process name provided (`--name h2agentB` or `--name udp-server-h2clientB`):\n\n  * h2agentB (traffic_server/admin_server/file_manager/socket_manager, are part of the family name).\n  * h2agentB_myClient (traffic_client is part of family name)\n  * udp-server-h2clientB (we omit endpoint identifier, as unique and \u003cu\u003eshould be implicit\u003c/u\u003e in process name)\n\n\n\nThese are the groups of metrics implemented in the project:\n\n\n\n#### HTTP/2 clients\n\n```\nCounters provided by http2comm library and h2agent itself(*):\n\n   h2agent_traffic_client_observed_requests_sents_counter [source] [method]\n   h2agent_traffic_client_observed_requests_unsent_counter [source] [method]\n   h2agent_traffic_client_observed_responses_received_counter [source] [method] [status_code] [rst_stream_goaway_error_code]\n   h2agent_traffic_client_observed_responses_timedout_counter [source] [method]\n   h2agent_traffic_client_provisioned_requests_counter (*) [source] [result: successful/failed]\n   h2agent_traffic_client_purged_contexts_counter (*) [source] [result: successful/failed]\n   h2agent_traffic_client_unexpected_response_status_code_counter (*) [source]\n\nGauges provided by http2comm library:\n\n   h2agent_traffic_client_responses_delay_seconds_gauge [source] [method] [status_code] [rst_stream_goaway_error_code]\n   h2agent_traffic_client_sent_messages_size_bytes_gauge [source] [method]\n   h2agent_traffic_client_received_messages_size_bytes_gauge [source] [method] [status_code] [rst_stream_goaway_error_code]\n\nHistograms provided by http2comm library:\n\n   h2agent_traffic_client_responses_delay_seconds [source] [method] [status_code] 
[rst_stream_goaway_error_code]\n   h2agent_traffic_client_sent_messages_size_bytes [source] [method]\n   h2agent_traffic_client_received_messages_size_bytes [source] [method] [status_code] [rst_stream_goaway_error_code]\n```\n\n\n\nAs commented, same metrics described above, are also generated for the other application 'udp-server-h2client':\n\n\n\n```\nCounters provided by http2comm library:\n\n   udp_server_h2client_observed_requests_sents_counter [source] [method]\n   udp_server_h2client_observed_requests_unsent_counter [source] [method]\n   udp_server_h2client_observed_responses_received_counter [source] [method] [status_code] [rst_stream_goaway_error_code]\n   udp_server_h2client_observed_responses_timedout_counter [source] [method]\n\nGauges provided by http2comm library:\n\n   udp_server_h2client_responses_delay_seconds_gauge [source] [method] [status_code] [rst_stream_goaway_error_code]\n   udp_server_h2client_sent_messages_size_bytes_gauge [source] [method]\n   udp_server_h2client_received_messages_size_bytes_gauge [source] [method] [status_code] [rst_stream_goaway_error_code]\n\nHistograms provided by http2comm library:\n\n   udp_server_h2client_responses_delay_seconds [source] [method] [status_code] [rst_stream_goaway_error_code]\n   udp_server_h2client_sent_messages_size_bytes [source] [method]\n   udp_server_h2client_received_messages_size_bytes [source] [method] [status_code] [rst_stream_goaway_error_code]\n```\n\n\n\nExamples:\n\n```bash\nudp_server_h2client_responses_delay_seconds_bucket{source=\"customer\",method=\"POST\",status_code=\"201\",le=\"0.005\"} 52\nh2agent_traffic_client_observed_responses_timedout_counter{source=\"http2proxy_myClient\",method=\"POST\"} 1\nh2agent_traffic_client_observed_responses_received_counter{source=\"h2agent_myClient\",method=\"POST\",status_code=\"201\"} 9776\n```\n\nNote that 'histogram' is not part of histograms' category metrics name suffix (as counters and gauges do). 
The reason is to avoid confusion: the created metrics are not actually histogram containers (except the buckets). So, 'sum' and 'count' can be used to represent latencies, not directly as histograms but through some intermediate calculations:

```bash
rate(h2agent_traffic_client_responses_delay_seconds_sum[2m])/rate(h2agent_traffic_client_responses_delay_seconds_count[2m])
```

The previous expression (`rate` is the mean variation over the given time interval) reads better without 'histogram' in the names, and represents the latency updated in real time (over 2-minute windows in the example).

#### HTTP/2 servers

We have two groups of server metrics: one for administrative operations (1 administrative server interface) and one for traffic events (1 traffic server interface):

```
Counters provided by http2comm library and h2agent itself(*):

   h2agent_[traffic|admin]_server_observed_requests_accepted_counter [source] [method]
   h2agent_[traffic|admin]_server_observed_requests_errored_counter [source] [method]
   h2agent_[traffic|admin]_server_observed_responses_counter [source] [method] [status_code] [rst_stream_goaway_error_code]
   h2agent_traffic_server_provisioned_requests_counter (*) [source] [result: successful/failed]
   h2agent_traffic_server_purged_contexts_counter (*) [source] [result: successful/failed]

Gauges provided by http2comm library:

   h2agent_[traffic|admin]_server_responses_delay_seconds_gauge [source] [method] [status_code] [rst_stream_goaway_error_code]
   h2agent_[traffic|admin]_server_received_messages_size_bytes_gauge [source] [method]
   h2agent_[traffic|admin]_server_sent_messages_size_bytes_gauge [source] [method] [status_code] [rst_stream_goaway_error_code]

Histograms provided by http2comm library:

   h2agent_[traffic|admin]_server_responses_delay_seconds [source] [method] [status_code] [rst_stream_goaway_error_code]
   h2agent_[traffic|admin]_server_received_messages_size_bytes [source] [method]
   h2agent_[traffic|admin]_server_sent_messages_size_bytes [source] [method] [status_code] [rst_stream_goaway_error_code]
```

For example:

```bash
h2agent_traffic_server_received_messages_size_bytes_bucket{source="myServer",method="POST",status_code="201",le="322"} 38
h2agent_traffic_server_provisioned_requests_counter{source="h2agent",result="failed"} 234
h2agent_traffic_server_purged_contexts_counter{source="h2agent",result="successful"} 2361
```

#### File system

```
Counters provided by h2agent:

   h2agent_file_manager_operations_counter [source] [operation: open/close/write/empty/delayedClose/instantClose] [result: successful/failed]
```

For example:

```bash
h2agent_file_manager_operations_counter{source="h2agent",operation="open",result="failed"} 0
```

#### UDP via sockets

```
Counters provided by h2agent:

   h2agent_socket_manager_operations_counter [source] [operation: open/write/delayedWrite/instantWrite] [result: successful/failed]
```

For example:

```bash
h2agent_socket_manager_operations_counter{source="myServer",operation="write",result="successful"} 25533
```



## Contributing

Check the project [contributing guidelines](./CONTRIBUTING.md).