{"id":13807530,"url":"https://github.com/synacker/daggy","last_synced_at":"2025-04-04T22:07:33.669Z","repository":{"id":42998171,"uuid":"148928130","full_name":"synacker/daggy","owner":"synacker","description":"Daggy - Data Aggregation Utility and C/C++ developer library for data streams catching","archived":false,"fork":false,"pushed_at":"2024-10-02T20:26:38.000Z","size":7637,"stargazers_count":153,"open_issues_count":2,"forks_count":15,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-03-28T21:07:31.893Z","etag":null,"topics":["aggregation","cross-platform-app","cross-platform-development","data-aggregation","extensible","monitoring","process","qt","serverless-framework","ssh-client","ssh2","stream-processing","streaming"],"latest_commit_sha":null,"homepage":"https://daggy.gitbook.io","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/synacker.png","metadata":{"files":{"readme":"docs/README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-09-15T18:17:16.000Z","updated_at":"2024-12-30T22:23:48.000Z","dependencies_parsed_at":"2023-01-16T16:45:09.862Z","dependency_job_id":"fb4a9367-108a-4def-826f-deb68cfb04fa","html_url":"https://github.com/synacker/daggy","commit_stats":{"total_commits":639,"total_committers":5,"mean_commits":127.8,"dds":0.2942097026604069,"last_synced_commit":"9097ef24651d48ac098028b357029892409aa987"},"previous_names":[],"tags_count":14,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/synacker%2Fdaggy","tags_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/synacker%2Fdaggy/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/synacker%2Fdaggy/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/synacker%2Fdaggy/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/synacker","download_url":"https://codeload.github.com/synacker/daggy/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247256112,"owners_count":20909240,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aggregation","cross-platform-app","cross-platform-development","data-aggregation","extensible","monitoring","process","qt","serverless-framework","ssh-client","ssh2","stream-processing","streaming"],"created_at":"2024-08-04T01:01:26.459Z","updated_at":"2025-04-04T22:07:33.648Z","avatar_url":"https://github.com/synacker.png","language":"C++","readme":"---\ndescription: Common information about Daggy and Getting Started\n---\n\n# About Daggy\n\n![Daggy Workflow](https://github.com/synacker/daggy/actions/workflows/daggy-github-actions.yaml/badge.svg)\n\n![Daggy](daggy\\_logo.svg)\n\n**Daggy - Data Aggregation Utility and C/C++ developer library for data streams catching**\n\n**Daggy**'s main goals are server-less operation, cross-platform support, simplicity and ease of use.\n\n**Daggy** can be helpful to developers, QA, DevOps and engineers for debugging, analyzing and controlling any data streams, including requests and responses, in distributed network systems, for example those based on a micro-service architecture.\n\n{% hint 
style=\"info\" %}\nIn short, Daggy runs local or remote processes at the same time, simultaneously reads their output, and streams and aggregates it within a single session\n{% endhint %}\n\n{% embed url=\"https://youtu.be/oeNSwv9oYDc\" %}\nDaggy Screencast\n{% endembed %}\n\n- [Introduction and goal concepts](#introduction-and-goal-concepts)\n  * [Daggy High Level Design](#daggy-high-level-design)\n  * [Basic terms](#basic-terms)\n- [Getting Started](#getting-started)\n  * [Getting Daggy](#getting-daggy)\n    + [Fedora](#fedora)\n    + [Windows](#windows)\n    + [Linux](#linux)\n    + [MacOS](#macos)\n    + [Install from source with conan](#install-from-source-with-conan)\n    + [Install from source with cmake (choose for maintainers)](#--install-from-source-with-cmake--choose-for-maintainers---)\n    + [Add as conan package dependency](#add-as-conan-package-dependency)\n  * [Check installation of Daggy Core C++17/20 interface](#check-installation-of-daggy-core-c--17-20-interface)\n  * [Check installation of Daggy Core C11 interface](#check-installation-of-daggy-core-c11-interface)\n  * [Check installation of Daggy Console application](#check-installation-of-daggy-console-application)\n- [Getting Started data aggregation and streaming with Daggy Console Application](#getting-started-data-aggregation-and-streaming-with-daggy-console-application)\n  * [Simple Sources](#simple-sources)\n    + [Example of Data Aggregation Sources with multiple commands and remote data aggregation and streaming](#example-of-data-aggregation-sources-with-multiple-commands-and-remote-data-aggregation-and-streaming)\n\n\u003csmall\u003e\u003ci\u003e\u003ca href='http://ecotrust-canada.github.io/markdown-toc/'\u003eTable of contents generated with markdown-toc\u003c/a\u003e\u003c/i\u003e\u003c/small\u003e\n\n## Introduction and goal concepts\n\nThe **Daggy Project** consists of:\n\n1. **Core** - a library for stream aggregation and catching\n2. 
**Daggy** - a console application for aggregating streams into files\n\n### Daggy High Level Design\n\n![](daggy\\_hld.svg)\\\n**Daggy High Level Design**\n\n### Basic terms\n\nThe main goal of the Daggy Software System is to obtain data from **environments**, declared in **sources**, as **streams** delivered to **aggregators** via **providers**.\n\nAn **Environment** contains data for **streams**. Out of the box, **Core** supports local and remote environments, but it can be extended with **user defined environments**. A **Local Environment** is located on the same host as the **Daggy Core** instance. A **Remote Environment** is located on a different host from the **Daggy Core** instance. A **User defined environment** can be located anywhere, such as databases, network disks, etc.\n\n**Sources** are declarations of how to obtain the data from **environments**. They describe which kind of data needs to be converted to **streams** and which **provider** is needed.\n\nHere is an example of **sources** that contains one **local environment** and one **remote environment**:\n\n```yaml\naliases:  \n    - \u0026my_commands\n        pingYa:\n            exec: ping ya.ru\n            extension: log\n        pingGoo:\n            exec: ping goo.gl\n            extension: log\n        \n    - \u0026ssh_auth\n        user: {{env_USER}}\n        passphrase: {{env_PASSWORD}}\n            \nsources:\n    local_environment:\n        type: local\n        commands: *my_commands\n    remote_environment:\n        host: 192.168.1.9\n        type: ssh2\n        parameters: *ssh_auth\n        commands: *my_commands\n```\n\nThe **streams** from the **local environment** are generated via the **local provider** (note `type: local`).\n\nThe **streams** from the **remote environment** are generated via the **ssh2 provider** (note `type: ssh2`).\n\nOut of the box, **Core** provides **local and ssh2 providers**. 
Both providers obtain the data for **streams** from processes: the **local provider** runs a local process and generates streams from its output channels (_stdout_ and _stderr_). The **ssh2 provider** runs remote processes via the _ssh2_ protocol and also generates **streams** from process channels. Daggy Core can be extended with a **user defined provider** that generates streams from, for example, an HTTP environment.\n\n**Providers** generate **streams** in parts via **commands**. Each part has a unique _seq\\_num_ value, assigned contiguously and consistently, which means the full data of a **stream** can be obtained by concatenating its parts in ascending _seq\\_num_ order. Each **stream** is generated by a **command**.\n\nThe **Core** combines **streams** from any number of providers into one **Core Streams Session**. The **streams** from a **Core Streams Session** can be aggregated by **aggregators** or viewed by the **user**.\n\nOut of the box, the **Core** provides several types of **aggregators**:\n\n1. _File_ - aggregates streams into files at runtime, as data arrives. This aggregator is used by the **Daggy Console Application**.\n2. _Console_ - aggregates streams into console output. This aggregator is used by the **Daggy Console Application**.\n3. _Callback_ - aggregates streams into ANSI C11 callbacks. 
This aggregator is used by the **Core ANSI C11 Interface**.\n\nThe **Core** library can be extended with **user defined aggregators**.\n\n## Getting Started\n\n### Getting Daggy\n\n#### Fedora\n\n```bash\nsudo dnf install daggy daggy-devel\n```\n\n#### Windows\n\nDownload the installer or a portable version from the [releases page](https://github.com/synacker/daggy/releases).\n\n#### Linux\n\nDownload an rpm/deb package or a portable version from the [releases page](https://github.com/synacker/daggy/releases).\n\n#### MacOS\n\nDownload a portable version from the [releases page](https://github.com/synacker/daggy/releases) or install via Homebrew:\n\n```shell\nbrew tap synacker/daggy\nbrew install --build-from-source daggy\n```\n\n#### Install from source with conan\n\n{% hint style=\"info\" %}\n**Build requirements:** [Conan](https://conan.io), [cmake](https://cmake.org), [git](https://git-scm.com) and a C++17/20 compiler.\n{% endhint %}\n\n```bash\ngit clone https://github.com/synacker/daggy.git\nmkdir build\ncd build\nconan install ../daggy --build=missing -o package_deps=True\nconan build ../daggy\n```\n\n#### Install from source with cmake (choose for maintainers)\n\n{% hint style=\"info\" %}\n**System dependencies:** qt6 (Core and Network), libssh2, libyaml-cpp, kainjow-mustache\n{% endhint %}\n\n```bash\ngit clone https://github.com/synacker/daggy.git\nmkdir build\ncd build\ncmake -DVERSION=2.1.3 ../daggy/src -DBUILD_SHARED_LIBS=ON\ncmake --build .\n```\n\n#### Add as conan package dependency\n\nGet daggy [from conan-center](https://conan.io/center/daggy).\n\n{% code title=\"conanfile.py\" %}\n```python\ndef requirements(self):\n    self.requires(\"daggy/2.1.2\")\n```\n{% endcode %}\n\n### Check installation of Daggy Core C++17/20 interface\n\n{% code title=\"test.cpp\" %}\n```cpp\n#include \u003cDaggyCore/Core.hpp\u003e\n#include \u003cDaggyCore/Sources.hpp\u003e\n#include \u003cDaggyCore/aggregators/CFile.hpp\u003e\n#include \u003cDaggyCore/aggregators/CConsole.hpp\u003e\n\n#include 
\u003cQCoreApplication\u003e\n#include \u003cQTimer\u003e\n\nnamespace {\nconstexpr const char* json_data = R\"JSON(\n{\n    \"sources\": {\n        \"localhost\" : {\n            \"type\": \"local\",\n            \"commands\": {\n                \"ping1\": {\n                    \"exec\": \"ping 127.0.0.1\",\n                    \"extension\": \"log\"\n                },\n                \"ping2\": {\n                    \"exec\": \"ping 127.0.0.1\",\n                    \"extension\": \"log\",\n                    \"restart\": true\n                }\n            }\n        }\n    }\n}\n)JSON\";\n}\n\nint main(int argc, char** argv) \n{\n    QCoreApplication app(argc, argv);\n    daggy::Core core(*daggy::sources::convertors::json(json_data));\n\n    daggy::aggregators::CFile file_aggregator(\"test\");\n    daggy::aggregators::CConsole console_aggregator(\"test\");\n\n    core.connectAggregator(\u0026file_aggregator);\n    core.connectAggregator(\u0026console_aggregator);\n\n    QObject::connect(\u0026core, \u0026daggy::Core::stateChanged, \u0026core,\n    [\u0026](DaggyStates state){\n        if(state == DaggyFinished)\n            app.quit();\n    });\n\n    QTimer::singleShot(3000, \u0026core, [\u0026]()\n    {\n        core.stop();\n    });\n\n    QTimer::singleShot(5000, \u0026core, [\u0026]()\n    {\n        app.exit(-1);\n    });\n\n    core.prepare();\n    core.start();\n\n    return app.exec();\n}\n```\n{% endcode %}\n\n### Check installation of Daggy Core C11 interface\n\n{% code title=\"test.c\" %}\n```c\n#include \u003cstdio.h\u003e\n#ifdef _WIN32\n#include \u003cWindows.h\u003e\n#else\n#include \u003cunistd.h\u003e\n#include \u003ctime.h\u003e /* struct timespec and nanosleep, used by sleep_ms below */\n#endif\n\n#include \u003cDaggyCore/Core.h\u003e\n\nconst char* json_data =\n\"{\\\n    \\\"sources\\\": {\\\n        \\\"localhost\\\" : {\\\n            \\\"type\\\": \\\"local\\\",\\\n            \\\"commands\\\": {\\\n                \\\"ping1\\\": {\\\n                    \\\"exec\\\": \\\"ping 127.0.0.1\\\",\\\n                  
  \\\"extension\\\": \\\"log\\\"\\\n                },\\\n                \\\"ping2\\\": {\\\n                    \\\"exec\\\": \\\"ping 127.0.0.1\\\",\\\n                    \\\"extension\\\": \\\"log\\\"\\\n                    }\\\n            }\\\n        }\\\n    }\\\n}\"\n;\n\nvoid sleep_ms(int milliseconds)\n{\n    #ifdef _WIN32\n        Sleep(milliseconds);\n    #elif _POSIX_C_SOURCE \u003e= 199309L\n        struct timespec ts;\n        ts.tv_sec = milliseconds / 1000;\n        ts.tv_nsec = (milliseconds % 1000) * 1000000;\n        nanosleep(\u0026ts, NULL);\n    #else\n        usleep(milliseconds * 1000);\n    #endif\n}\n\nint quit_after_time(void* msec)\n{\n    sleep_ms(*(int*)(msec));\n    libdaggy_app_stop();\n    return 0;\n}\n\nvoid on_daggy_state_changed(DaggyCore core, DaggyStates state);\n\nvoid on_provider_state_changed(DaggyCore core, const char* provider_id, DaggyProviderStates state);\nvoid on_provider_error(DaggyCore core, const char* provider_id, DaggyError error);\n\nvoid on_command_state_changed(DaggyCore core, const char* provider_id, const char* command_id, DaggyCommandStates state, int exit_code);\nvoid on_command_stream(DaggyCore core, const char* provider_id, const char* command_id, DaggyStream stream);\nvoid on_command_error(DaggyCore core, const char* provider_id, const char* command_id, DaggyError error);\n\nint main(int argc, char** argv)\n{\n    DaggyCore core;\n    libdaggy_app_create(argc, argv);\n    libdaggy_core_create(json_data, Json, \u0026core);\n    libdaggy_connect_aggregator(core,\n                                on_daggy_state_changed,\n                                on_provider_state_changed,\n                                on_provider_error,\n                                on_command_state_changed,\n                                on_command_stream,\n                                on_command_error);\n    libdaggy_core_start(core);\n    int time = 5000;\n    libdaggy_run_in_thread(quit_after_time, \u0026time);\n    
return libdaggy_app_exec();\n}\n\nvoid on_daggy_state_changed(DaggyCore core, DaggyStates state)\n{\n    printf(\"Daggy state changed: %d\\n\", state);\n}\n\nvoid on_provider_state_changed(DaggyCore core, const char* provider_id, DaggyProviderStates state)\n{\n    printf(\"Provider %s state changed: %d\\n\", provider_id, state);\n}\n\nvoid on_provider_error(DaggyCore core, const char* provider_id, DaggyError error)\n{\n    printf(\"Provider %s error. Code: %d, Category: %s\\n\", provider_id, error.error, error.category);\n}\n\nvoid on_command_state_changed(DaggyCore core, const char* provider_id, const char* command_id, DaggyCommandStates state, int exit_code)\n{\n    printf(\"Command %s in provider %s state changed: %d\\n\", command_id, provider_id, state);\n}\n\nvoid on_command_stream(DaggyCore core, const char* provider_id, const char* command_id, DaggyStream stream)\n{\n    printf(\"Command %s in provider %s has stream from session %s: %li\\n\", command_id, provider_id, stream.session, stream.seq_num);\n}\n\nvoid on_command_error(DaggyCore core, const char* provider_id, const char* command_id, DaggyError error)\n{\n    printf(\"Command %s in provider %s has error. 
Code: %d, Category: %s\\n\", command_id, provider_id, error.error, error.category);\n}\n```\n{% endcode %}\n\n### Check installation of Daggy Console application\n\n```bash\ndaggy --help\nUsage: daggy [options] *.yaml|*.yml|*.json\n\nOptions:\n  -o, --output \u003cfolder\u003e       Set output folder\n  -f, --format \u003cjson|yaml\u003e    Source format\n  -i, --stdin                 Read data aggregation sources from stdin\n  -t, --timeout \u003ctime in ms\u003e  Auto complete timeout\n  -h, --help                  Displays help on commandline options.\n  --help-all                  Displays help including Qt specific options.\n  -v, --version               Displays version information.\n\nArguments:\n  file                        data aggregation sources file\n```\n\n## Getting Started data aggregation and streaming with Daggy Console Application\n\n### Simple Sources\n\n**Create simple.yaml**\n\n```yaml\nsources:\n    localhost:\n        type: local\n        commands:\n            pingYa:\n                exec: ping ya.ru\n                extension: log\n```\n\n**Run daggy**\n\n```bash\ndaggy simple.yaml\n```\n\n**Check console output**\n\n```\n23:07:23:977 | AppStat  | Start aggregation in 01-04-20_23-07-23-977_simple\n23:07:23:977 | ProvStat | localhost       | New state: Started\n23:07:23:977 | CommStat | localhost       | pingYa          | New state: Starting\n23:07:23:977 | CommStat | localhost       | pingYa          | New state: Started\n```\n\n_All commands from **simple.yaml/simple.json** are streamed into **01-04-20\\_23-07-23-977\\_simple** as output files_\n\n**Tailing streams from Simple Data Source**\n\n```\ntail -f 01-04-20_23-07-23-977_simple/*\n64 bytes from ya.ru (87.250.250.242): icmp_seq=99 ttl=249 time=21.2 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=100 ttl=249 time=18.8 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=101 ttl=249 time=23.5 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=102 ttl=249 time=18.8 ms\n64 
bytes from ya.ru (87.250.250.242): icmp_seq=103 ttl=249 time=18.8 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=104 ttl=249 time=17.4 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=105 ttl=249 time=17.4 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=106 ttl=249 time=20.1 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=107 ttl=249 time=25.8 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=108 ttl=249 time=35.1 ms\n64 bytes from ya.ru (87.250.250.242): icmp_seq=109 ttl=249 time=21.1 ms\n```\n\n**Stop data aggregation and streaming**\n\n_Press **CTRL+C** to stop data aggregation and streaming. Press **CTRL+C** twice to hard-stop the application without waiting for the cancellation of child local and remote processes._\n\n```\n23:07:23:977 | AppStat  | Start aggregation in 01-04-20_23-07-23-977_simple\n23:07:23:977 | ProvStat | localhost       | New state: Started\n23:07:23:977 | CommStat | localhost       | pingYa          | New state: Starting\n23:07:23:977 | CommStat | localhost       | pingYa          | New state: Started\n^C23:17:56:667 | ProvStat | localhost       | New state: Finishing\n23:17:56:668 | CommStat | localhost       | pingYa          | New state: Finished. 
Exit code: 0\n23:17:56:668 | ProvStat | localhost       | New state: Finished\n23:17:56:668 | AppStat  | Stop aggregation in 01-04-20_23-07-23-977_simple\n```\n\n**Investigate aggregated data**\n\n```bash\nls -l 01-04-20_23-07-23-977_simple/\n-rw-r--r-- 1 muxa muxa 45574 апр  1 23:17 localhost-pingYa.log\n```\n\n#### Example of Data Aggregation Sources with multiple commands and remote data aggregation and streaming\n\n```yaml\naliases:  \n    - \u0026my_commands\n        pingYa:\n            exec: ping ya.ru\n            extension: log\n        pingGoo:\n            exec: ping goo.gl\n            extension: log\n        \n    - \u0026ssh_auth\n        user: {{env_USER}}\n        passphrase: {{env_PASSWORD}}\n            \nsources:\n    localhost:\n        type: local\n        commands: *my_commands\n    remotehost:\n        host: 192.168.1.9\n        type: ssh2\n        parameters: *ssh_auth\n        commands: *my_commands\n    remotehost2:\n        host: 192.168.1.9\n        type: ssh2\n        parameters: *ssh_auth\n        commands: *my_commands\n    remotehost3:\n        host: 192.168.1.9\n        type: ssh2\n        parameters: *ssh_auth\n        commands: *my_commands\n```\n","funding_links":[],"categories":["Table of Contents","Others"],"sub_categories":["Streaming Library"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsynacker%2Fdaggy","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsynacker%2Fdaggy","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsynacker%2Fdaggy/lists"}