{"id":13434946,"url":"https://github.com/heroku/logplex","last_synced_at":"2025-03-18T02:30:39.231Z","repository":{"id":1232087,"uuid":"1165646","full_name":"heroku/logplex","owner":"heroku","description":"[DEPRECATED] Heroku log router","archived":true,"fork":false,"pushed_at":"2022-02-14T10:25:12.000Z","size":8683,"stargazers_count":981,"open_issues_count":0,"forks_count":96,"subscribers_count":135,"default_branch":"master","last_synced_at":"2024-12-27T05:06:33.675Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Erlang","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":"tastejs/todomvc","license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/heroku.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null}},"created_at":"2010-12-13T19:19:10.000Z","updated_at":"2024-12-17T19:35:32.000Z","dependencies_parsed_at":"2022-08-16T12:40:22.988Z","dependency_job_id":null,"html_url":"https://github.com/heroku/logplex","commit_stats":null,"previous_names":[],"tags_count":75,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/heroku%2Flogplex","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/heroku%2Flogplex/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/heroku%2Flogplex/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/heroku%2Flogplex/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/heroku","download_url":"https://codeload.github.com/heroku/logplex/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244143866,"owners_count":20405290,"i
con_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T03:00:28.368Z","updated_at":"2025-03-18T02:30:38.701Z","avatar_url":"https://github.com/heroku.png","language":"Erlang","readme":"# Logplex [DEPRECATED]\n\n_This project is officially retired and no longer maintained._\n\nLogplex is a distributed syslog log router, able to merge and redistribute multiple incoming streams of syslog logs to individual subscribers.\n\nA typical logplex installation will be a cluster of distributed Erlang nodes connected in a mesh, with one or more redis instances (which can be sharded). The cluster may or may not sit behind a load balancer or proxy; ideally, any node can be contacted at any time.\n\nApplications sitting on their own node or server need to send their log messages either to a local syslog, or through [log shuttle](https://github.com/heroku/log-shuttle), which will then forward them to one instance of a logplex router.\n\nOn the other end of the spectrum, consumers may subscribe to a logplex instance, which will then merge streams of incoming log messages and forward them to the subscriber. Alternatively, the consumer may register a given endpoint (say, a database behind the proper API) and logplex nodes will push messages to that endpoint as they come in.\n\nFor more details, see the stream management documentation in `doc/`.\n\n\u003c!-- markdown-toc start - Don't edit this section. 
Run M-x markdown-toc-generate-toc again --\u003e\n**Table of Contents**\n\n- [Logplex](#logplex)\n- [Erlang Version Requirements](#erlang-version-requirements)\n- [Development](#development)\n    - [Local development](#local-development)\n        - [build](#build)\n        - [develop](#develop)\n        - [test](#test)\n    - [Docker development](#docker-development)\n        - [develop](#develop)\n        - [test](#test)\n    - [Data setup](#data-setup)\n- [Supervision Tree](#supervision-tree)\n- [Processes](#processes)\n    - [logplex_db](#logplexdb)\n    - [config_redis](#configredis)\n    - [logplex_drain_sup](#logplexdrainsup)\n    - [nsync](#nsync)\n    - [redgrid](#redgrid)\n    - [logplex_realtime](#logplexrealtime)\n    - [logplex_stats](#logplexstats)\n    - [logplex_tail](#logplextail)\n    - [logplex_redis_writer_sup](#logplexrediswritersup)\n    - [logplex_shard](#logplexshard)\n    - [logplex_api](#logplexapi)\n    - [logplex_syslog_sup](#logplexsyslogsup)\n    - [logplex_logs_rest](#logplexlogsrest)\n- [Realtime Metrics](#realtime-metrics)\n\n\u003c!-- markdown-toc end --\u003e\n\n\n# Erlang Version Requirements\n\nAs of Logplex v93, Logplex requires Erlang 18. 
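\n\nTo check which OTP release a local node is running, the standard `erlang:system_info/1` call can be used from an Erlang shell (on an OTP 18 install it returns \"18\"):\n\n    1\u003e erlang:system_info(otp_release).\n    \"18\"\n\n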
Logplex is currently tested against OTP-18.1.3.\n\nPrior versions of Logplex are designed to run on R16B03 and 17.x.\n\n# Development\n\n## Local development\n\n### build\n\n    $ ./rebar3 as public compile\n\n### develop\n\nrun\n\n    $ INSTANCE_NAME=`hostname` \\\n      LOGPLEX_CONFIG_REDIS_URL=\"redis://localhost:6379\" \\\n      LOGPLEX_REDGRID_REDIS_URL=\"redis://localhost:6379\" \\\n      LOCAL_IP=\"127.0.0.1\" \\\n      LOGPLEX_COOKIE=123 \\\n      LOGPLEX_AUTH_KEY=123 \\\n      erl -name logplex@`hostname` -pa ebin -env ERL_LIBS deps -s logplex_app -setcookie ${LOGPLEX_COOKIE} -config sys\n\n\n### test\n\nGiven an empty local redis (v2.6ish):\n\n    $ ./rebar3 as public,test compile\n    $ INSTANCE_NAME=`hostname` \\\n      LOGPLEX_CONFIG_REDIS_URL=\"redis://localhost:6379\" \\\n      LOGPLEX_SHARD_URLS=\"redis://localhost:6379\" \\\n      LOGPLEX_REDGRID_REDIS_URL=\"redis://localhost:6379\" \\\n      LOCAL_IP=\"127.0.0.1\" \\\n      LOGPLEX_COOKIE=123 \\\n      ERL_LIBS=`pwd`/deps/:$ERL_LIBS \\\n      ct_run -spec logplex.spec -pa ebin\n\nRuns the common test suite for logplex.\n\n## Docker development\n\n### develop\n\nRequires a working install of Docker and Docker Compose.\nFollow the [installation](https://docs.docker.com/installation/#installation)\nsteps outlined at docs.docker.com.\n```\ndocker-compose build         # Run once\ndocker-compose run compile   # Run every time source files change\ndocker-compose up logplex    # Run logplex post-compilation\n```\n\nTo connect to the above logplex Erlang shell:\n\n```\ndocker exec -it logplex_logplex_1 bash -c \"TERM=xterm bin/connect\"\n```\n\n### test\n\n    docker-compose run test\n\n## Data setup\n\ncreate creds\n\n\n    1\u003e logplex_cred:store(logplex_cred:grant('full_api', logplex_cred:grant('any_channel', logplex_cred:rename(\u003c\u003c\"Local-Test\"\u003e\u003e, logplex_cred:new(\u003c\u003c\"local\"\u003e\u003e, \u003c\u003c\"password\"\u003e\u003e))))).\n    ok\n\nhit healthcheck\n\n    $ curl 
http://local:password@localhost:8001/healthcheck\n    {\"status\":\"normal\"}\n\ncreate a channel\n\n    $ curl -d '{\"tokens\": [\"app\"]}' http://local:password@localhost:8001/channels\n    {\"channel_id\":1,\"tokens\":{\"app\":\"t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42\"}}\n\npost a log msg\n\n    $ curl -v \\\n    -H \"Content-Type: application/logplex-1\" \\\n    -H \"Logplex-Msg-Count: 1\" \\\n    -d \"116 \u003c134\u003e1 2012-12-10T03:00:48.123456Z erlang t.feff49f1-4d55-4c9e-aee1-2d2b10e69b42 console.1 - - Logsplat test message 1\" \\\n    http://local:password@localhost:8601/logs\n\ncreate a log session\n\n    $ curl -d '{\"channel_id\": \"1\"}' http://local:password@localhost:8001/v2/sessions\n    {\"url\":\"/sessions/9d53bf70-7964-4429-a589-aaa4df86fead\"}\n\nfetch logs for session\n\n    $ curl http://local:password@localhost:8001/sessions/9d53bf70-7964-4429-a589-aaa4df86fead\n    2012-12-10T03:00:48Z+00:00 app[console.1]: test message 1\n\n# Supervision Tree\n\n\u003ctable\u003e\n\u003ctr\u003e\u003ctd\u003elogplex_app\u003c/td\u003e\u003ctd\u003e logplex_sup\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_db\"\u003elogplex_db\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#config_redis\"\u003econfig_redis\u003c/a\u003e (redo)\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_drain_sup\"\u003elogplex_drain_sup\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e logplex_http_drain\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                                                                                
\u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e logplex_tcpsyslog_drain\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                                                                                \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e logplex_tlssyslog_drain\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#nsync\"\u003ensync\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#redgrid\"\u003eredgrid\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_realtime\"\u003elogplex_realtime\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e redo\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_stats\"\u003elogplex_stats\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_tail\"\u003elogplex_tail\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_redis_writer_sup\"\u003elogplex_redis_writer_sup\u003c/a\u003e 
(logplex_worker_sup)\u003c/td\u003e\u003ctd\u003e logplex_redis_writer\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_shard\"\u003elogplex_shard\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e redo\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_api\"\u003elogplex_api\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_syslog_sup\"\u003elogplex_syslog_sup\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e tcp_proxy_sup\u003c/td\u003e\u003ctd\u003e tcp_proxy\u003c/td\u003e\u003c/tr\u003e\n                          \u003ctr\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e \u003ca href=\"#logplex_logs_rest\"\u003elogplex_logs_rest\u003c/a\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003ctd\u003e\u003c/td\u003e\u003c/tr\u003e\n\u003c/table\u003e\n\n# Processes\n\n### logplex_db\n\nStarts and supervises a number of ETS tables:\n\n```\nchannels\ntokens\ndrains\ncreds\nsessions\n```\n\n### config_redis\n\nA [redo](https://github.com/heroku/redo) redis client process connected to the logplex config redis.\n\n### logplex_drain_sup\n\nAn empty one_for_one supervisor. Supervises\n[HTTP](./src/logplex_http_drain.erl),\n[TCP Syslog](./src/logplex_tcpsyslog_drain.erl) and\n[TLS Syslog](./src/logplex_tlssyslog_drain.erl) drain processes.\n\n### nsync\n\nAn [nsync](https://github.com/heroku/nsync) process connected to the logplex config redis. Callback module is [nsync_callback](./src/nsync_callback.erl).\n\nNsync is an Erlang redis replication client. 
It allows the logplex node to act as a redis slave and sync the logplex config redis data into memory.\n\n### redgrid\n\nA [redgrid](https://github.com/heroku/redgrid) process that registers the node in a central redis server to facilitate discovery by other nodes.\n\n### logplex_realtime\n\nCaptures realtime metrics about the running logplex node. These metrics are exported using [folsom_cowboy](https://github.com/voidlock/folsom_cowboy) and are available for consumption via HTTP.\n\nMemory usage information is available:\n```shell\n\u003e curl -s http://localhost:5565/_memory | jq '.'\n{\n  \"total\": 27555464,\n  \"processes\": 10818248,\n  \"processes_used\": 10818136,\n  \"system\": 16737216,\n  \"atom\": 388601,\n  \"atom_used\": 371948,\n  \"binary\": 789144,\n  \"code\": 9968116,\n  \"ets\": 789128\n}\n```\nAs are general VM statistics:\n```shell\n\u003e curl -s http://localhost:5565/_statistics | jq '.'\n{\n  \"context_switches\": 40237,\n  \"garbage_collection\": {\n    \"number_of_gcs\": 7676,\n    \"words_reclaimed\": 20085443\n  },\n  \"io\": {\n    \"input\": 9683207,\n    \"output\": 2427112\n  },\n  \"reductions\": {\n    \"total_reductions\": 6584440,\n    \"reductions_since_last_call\": 6584440\n  },\n  \"run_queue\": 0,\n  \"runtime\": {\n    \"total_run_time\": 1140,\n    \"time_since_last_call\": 1140\n  },\n  \"wall_clock\": {\n    \"total_wall_clock_time\": 207960,\n    \"wall_clock_time_since_last_call\": 207748\n  }\n}\n```\nSeveral custom logplex metrics are also exported via a special `/_metrics` endpoint:\n```shell\n\u003e curl -s http://localhost:5565/_metrics | jq '.'\n[\n  \"drain.delivered\",\n  \"drain.dropped\",\n  \"message.processed\",\n  \"message.received\"\n]\n```\nThese can then be queried individually:\n```shell\n\u003e curl -s http://localhost:5565/_metrics/message.received | jq '.'\n{\n  \"type\": \"gauge\",\n  \"value\": 1396\n}\n```\n\n### logplex_stats\n\nOwns the `logplex_stats` ETS table. 
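\n\nAs an illustrative sketch only (the table name and counter key below are hypothetical, not the module's actual schema), per-channel counters in such an ETS table can be maintained with the standard `ets:update_counter/4`, which initialises the counter from a default object when the key is absent:\n\n    1\u003e Tab = ets:new(example_stats, [set, public]).\n    2\u003e ets:update_counter(Tab, {channel_msgs, 1}, 1, {{channel_msgs, 1}, 0}).\n    1\n\n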
Prints channel, drain and system stats every 60 seconds.\n\n### logplex_tail\n\nMaintains the `logplex_tail` ETS table that is used to register tail sessions.\n\n### logplex_redis_writer_sup\n\nStarts a [logplex_worker_sup](./src/logplex_worker_sup.erl) process, registered as `logplex_redis_writer_sup`, that supervises [logplex_redis_writer](./src/logplex_redis_writer.erl) processes.\n\n### logplex_shard\n\nOwns the `logplex_shard_info` ETS table. Starts a separate read and write redo client for each redis shard found in the `logplex_shard_urls` var.\n\n### logplex_api\n\nBlocks waiting for nsync to finish replicating data into memory before starting a mochiweb acceptor that handles API requests for managing channels/tokens/drains/sessions.\n\n### logplex_syslog_sup\n\nSupervises a [tcp_proxy_sup](./src/tcp_proxy_sup.erl) process that supervises a [tcp_proxy](./src/tcp_proxy.erl) process that accepts syslog messages over TCP.\n\n### logplex_logs_rest\n\nStarts a `cowboy_tcp_transport` process and serves as the callback for processing HTTP log input.\n\n# Realtime Metrics\n\nLogplex can send realtime metrics to Redis via pubsub and to a drain channel as\nlogs. The following metrics are currently logged in this fashion:\n\n- `message_received`\n- `message_processed`\n- `drain_delivered`\n- `drain_dropped`\n\nTo log these metrics to an internal drain channel, you'll need to set the\n`INTERNAL_METRICS_CHANNEL_ID` environment variable to a drain token that has\nalready been created.\n","funding_links":[],"categories":["Erlang","Logging","Logs"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fheroku%2Flogplex","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fheroku%2Flogplex","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fheroku%2Flogplex/lists"}