{"id":20826699,"url":"https://github.com/cedrickchee/commitlog","last_synced_at":"2026-04-19T23:33:38.807Z","repository":{"id":247131001,"uuid":"421820536","full_name":"cedrickchee/commitlog","owner":"cedrickchee","description":"Commitlog is a distributed commit log service","archived":false,"fork":false,"pushed_at":"2021-11-22T13:25:49.000Z","size":242,"stargazers_count":2,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-01-18T17:49:32.724Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cedrickchee.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-10-27T13:02:19.000Z","updated_at":"2024-11-01T11:30:21.000Z","dependencies_parsed_at":"2024-07-06T20:05:37.686Z","dependency_job_id":null,"html_url":"https://github.com/cedrickchee/commitlog","commit_stats":null,"previous_names":["cedrickchee/commitlog"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cedrickchee%2Fcommitlog","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cedrickchee%2Fcommitlog/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cedrickchee%2Fcommitlog/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cedrickchee%2Fcommitlog/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cedrickchee","download_url":"https://codeload.github.com/cedrickchee/commitlog/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243174023,"owners_count":20248218,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-17T23:09:48.058Z","updated_at":"2025-12-26T00:22:06.445Z","avatar_url":"https://github.com/cedrickchee.png","language":"Go","funding_links":[],"categories":[],"sub_categories":[],"readme":"# commitlog\n\nCommitlog is a distributed commit log service.\n\nThis project was created to learn and make a distributed service from first\nprinciples -- distributed computing ideas like service discovery, consensus, and\nload balancing.\n\n## Build a Log Package\n\nLogs—which are sometimes also called _write-ahead logs_, _transaction logs_, or\n_commit logs_—are at the heart of storage engines, message queues, version\ncontrol, and replication and consensus algorithms. 
## Prerequisites

Download and install the following software:
- Go 1.13+
- [Cloudflare's CFSSL](https://github.com/cloudflare/cfssl) v1.6.1+

Install the Cloudflare CFSSL CLIs by running the following commands:

```sh
$ go get github.com/cloudflare/cfssl/cmd/cfssl@v1.6.1
$ go get github.com/cloudflare/cfssl/cmd/cfssljson@v1.6.1
```

(On Go 1.17 and newer, use `go install` with the same paths and version, since
newer Go releases no longer install binaries via `go get`.)

The `cfssl` program is the canonical command-line utility built on the CFSSL
packages.

The `cfssljson` program takes the JSON output from `cfssl` and writes
certificates, keys, CSRs, and bundles to disk.

## Set Up

First, initialize our CA and generate certs.

```sh
$ make init
mkdir -p .config/

$ make gencert
# Generating self-signed root CA certificate and private key
cfssl gencert \
        -initca test/ca-csr.json | cfssljson -bare ca
2021/11/03 00:40:00 [INFO] generating a new CA key and certificate from CSR
2021/11/03 00:40:00 [INFO] generate received request
2021/11/03 00:40:00 [INFO] received CSR
2021/11/03 00:40:00 [INFO] generating key: rsa-2048
2021/11/03 00:40:01 [INFO] encoded CSR
2021/11/03 00:40:01 [INFO] signed certificate with serial number 8147356830437551462081232968300531993326047229

# Generating self-signed server certificate and private key
cfssl gencert \
        -ca=ca.pem \
        -ca-key=ca-key.pem \
        -config=test/ca-config.json \
        -profile=server \
        test/server-csr.json | cfssljson -bare server
2021/11/03 00:40:01 [INFO] generate received request
2021/11/03 00:40:01 [INFO] received CSR
2021/11/03 00:40:01 [INFO] generating key: rsa-2048
2021/11/03 00:40:01 [INFO] encoded CSR
2021/11/03 00:40:01 [INFO] signed certificate with serial number 402261474156200490360083500727362811589620720837

# Generating multiple client certs and private keys
cfssl gencert \
        -ca=ca.pem \
        -ca-key=ca-key.pem \
        -config=test/ca-config.json \
        -profile=client \
        -cn="root" \
        test/client-csr.json | cfssljson -bare root-client
2021/11/03 00:40:01 [INFO] generate received request
2021/11/03 00:40:01 [INFO] received CSR
2021/11/03 00:40:01 [INFO] generating key: rsa-2048
2021/11/03 00:40:01 [INFO] encoded CSR
2021/11/03 00:40:01 [INFO] signed certificate with serial number 627656603718368551111127300914672850426637790593

cfssl gencert \
        -ca=ca.pem \
        -ca-key=ca-key.pem \
        -config=test/ca-config.json \
        -profile=client \
        -cn="nobody" \
        test/client-csr.json | cfssljson -bare nobody-client
2021/11/03 00:40:01 [INFO] generate received request
2021/11/03 00:40:01 [INFO] received CSR
2021/11/03 00:40:01 [INFO] generating key: rsa-2048
2021/11/03 00:40:01 [INFO] encoded CSR
2021/11/03 00:40:01 [INFO] signed certificate with serial number 651985188211974854103183240288947068257645013148

mv *.pem *.csr /home/neo/dev/work/repo/github/commitlog/.config
```
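The certificates generated above are used for mutual TLS between clients and the
server. As a rough sketch of what loading them looks like in Go, using only the
standard `crypto/tls` and `crypto/x509` packages (not necessarily how this
repo's `internal/config` package wires it, and the file paths are illustrative):

```go
package tlsconfig

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// NewServerTLS builds a *tls.Config for a server that presents the server
// certificate and requires clients to prove themselves against the CA.
func NewServerTLS(certFile, keyFile, caFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("failed to parse CA certificate %q", caFile)
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		ClientAuth:   tls.RequireAndVerifyClientCert,
	}, nil
}
```

A client would use the same CA in its `RootCAs` pool together with the
`root-client` or `nobody-client` key pair.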
## Test

Now, run your tests with `$ make test`. If all is well, your tests pass and
you've made a distributed service that can replicate data.

```sh
# Test output
$ make test
cp test/policy.csv "/home/neo/dev/work/repo/github/commitlog/.config/policy.csv"
cp test/model.conf "/home/neo/dev/work/repo/github/commitlog/.config/model.conf"
go test -race ./...
?       github.com/cedrickchee/commitlog/api/v1        [no test files]
ok      github.com/cedrickchee/commitlog/internal/agent        11.445s
?       github.com/cedrickchee/commitlog/internal/auth         [no test files]
?       github.com/cedrickchee/commitlog/internal/config       [no test files]
ok      github.com/cedrickchee/commitlog/internal/discovery    (cached)
ok      github.com/cedrickchee/commitlog/internal/log          (cached)
ok      github.com/cedrickchee/commitlog/internal/server       0.275s
ok      github.com/cedrickchee/commitlog/pkg/freeport          (cached)
```

## What Is Raft and How Does It Work?

[Raft](https://raft.github.io/) is a [consensus](https://en.wikipedia.org/wiki/Consensus_(computer_science)) algorithm that is designed to be easy to understand and implement.

Raft breaks consensus into two parts: leader election and log replication.

What follows is a short overview of Raft's leader election and log replication steps.

### Leader Election

A Raft cluster has one leader and the rest of the servers are followers. The
leader maintains power by sending heartbeat requests to its followers. If a
follower times out waiting for a heartbeat request from the leader, the
follower becomes a candidate and begins an election to decide the next leader.

### Log Replication

The leader accepts client requests, each of which represents some command to run
across the cluster. (In a key-value service, for example, you'd have a command to
assign a key's value.) For each request, the leader appends the command to its
log and then requests its followers to append the command to their logs. After a
majority of followers have replicated the command—when the leader considers the
command committed—the leader executes the command with a finite-state machine
and responds to the client with the result. The leader tracks the highest
committed offset and sends this in the requests to its followers. When a
follower receives a request, it executes all commands up to the highest
committed offset with its finite-state machine. All Raft servers run the same
finite-state machine that defines how to handle each command.
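The majority rule above is what makes replication safe: an entry only counts as
committed once more than half of the servers hold it. Here is a tiny,
self-contained sketch of that calculation (illustrative only, not code from
this repo or from the `hashicorp/raft` library it builds on):

```go
package main

import (
	"fmt"
	"sort"
)

// committedOffset returns the highest log offset that a majority of servers
// (leader included) have replicated. Only entries up to this offset may be
// applied to the finite-state machine.
func committedOffset(matchOffsets []uint64) uint64 {
	sorted := append([]uint64(nil), matchOffsets...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	// With N servers, at least N-(N-1)/2 servers (a majority) have replicated
	// everything up to the offset at index (N-1)/2 of the sorted slice.
	return sorted[(len(sorted)-1)/2]
}

func main() {
	// Offsets replicated by each of the 3 servers (leader included).
	fmt.Println(committedOffset([]uint64{9, 7, 4})) // prints 7: two of three servers have >= 7
}
```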
The recommended cluster size is three or five servers (an odd number), because
a Raft cluster of `N` servers can tolerate `(N-1)/2` failures.

Test the distributed log:

```sh
$ make testraft
cp test/policy.csv "/home/neo/dev/work/repo/github/commitlog/.config/policy.csv"
cp test/model.conf "/home/neo/dev/work/repo/github/commitlog/.config/model.conf"
go test -v -race ./internal/log/distributed_test.go
=== RUN   TestMultipleNodes
2021-11-12T19:02:26.341+0800 [INFO]  raft: Initial configuration (index=0): []
2021-11-12T19:02:26.342+0800 [INFO]  raft: Node at 127.0.0.1:24337 [Follower] entering Follower state (Leader: "")
2021-11-12T19:02:26.423+0800 [WARN]  raft: Heartbeat timeout from "" reached, starting election
2021-11-12T19:02:26.423+0800 [INFO]  raft: Node at 127.0.0.1:24337 [Candidate] entering Candidate state in term 2
2021-11-12T19:02:26.430+0800 [DEBUG] raft: Votes needed: 1
2021-11-12T19:02:26.430+0800 [DEBUG] raft: Vote granted from 0 in term 2. Tally: 1
2021-11-12T19:02:26.430+0800 [INFO]  raft: Election won. Tally: 1
2021-11-12T19:02:26.430+0800 [INFO]  raft: Node at 127.0.0.1:24337 [Leader] entering Leader state
2021-11-12T19:02:27.358+0800 [INFO]  raft: Initial configuration (index=0): []
2021-11-12T19:02:27.358+0800 [INFO]  raft: Node at 127.0.0.1:24338 [Follower] entering Follower state (Leader: "")
2021-11-12T19:02:27.358+0800 [INFO]  raft: Updating configuration with AddStaging (1, 127.0.0.1:24338) to [{Suffrage:Voter ID:0 Address:127.0.0.1:24337} {Suffrage:Voter ID:1 Address:127.0.0.1:24338}]
2021-11-12T19:02:27.359+0800 [INFO]  raft: Added peer 1, starting replication
2021/11/12 19:02:27 [DEBUG] raft-net: 127.0.0.1:24338 accepted connection from: 127.0.0.1:58832
2021-11-12T19:02:27.365+0800 [WARN]  raft: Failed to get previous log: 3 rpc error: code = Code(404) desc = offset out of range: 3 (last: 0)
2021-11-12T19:02:27.365+0800 [WARN]  raft: AppendEntries to {Voter 1 127.0.0.1:24338} rejected, sending older logs (next: 1)
2021-11-12T19:02:27.367+0800 [INFO]  raft: pipelining replication to peer {Voter 1 127.0.0.1:24338}
2021/11/12 19:02:27 [DEBUG] raft-net: 127.0.0.1:24338 accepted connection from: 127.0.0.1:58834
2021-11-12T19:02:27.371+0800 [INFO]  raft: Initial configuration (index=0): []
2021-11-12T19:02:27.371+0800 [INFO]  raft: Node at 127.0.0.1:24339 [Follower] entering Follower state (Leader: "")
2021-11-12T19:02:27.372+0800 [INFO]  raft: Updating configuration with AddStaging (2, 127.0.0.1:24339) to [{Suffrage:Voter ID:0 Address:127.0.0.1:24337} {Suffrage:Voter ID:1 Address:127.0.0.1:24338} {Suffrage:Voter ID:2 Address:127.0.0.1:24339}]
2021-11-12T19:02:27.372+0800 [INFO]  raft: Added peer 2, starting replication
2021/11/12 19:02:27 [DEBUG] raft-net: 127.0.0.1:24339 accepted connection from: 127.0.0.1:44036
2021-11-12T19:02:27.374+0800 [WARN]  raft: Failed to get previous log: 4 rpc error: code = Code(404) desc = offset out of range: 4 (last: 0)
2021-11-12T19:02:27.375+0800 [WARN]  raft: AppendEntries to {Voter 2 127.0.0.1:24339} rejected, sending older logs (next: 1)
2021-11-12T19:02:27.375+0800 [INFO]  raft: pipelining replication to peer {Voter 2 127.0.0.1:24339}
2021/11/12 19:02:27 [DEBUG] raft-net: 127.0.0.1:24339 accepted connection from: 127.0.0.1:44038
2021-11-12T19:02:27.477+0800 [INFO]  raft: Updating configuration with RemoveServer (1, ) to [{Suffrage:Voter ID:0 Address:127.0.0.1:24337} {Suffrage:Voter ID:2 Address:127.0.0.1:24339}]
2021-11-12T19:02:27.477+0800 [INFO]  raft: Removed peer 1, stopping replication after 7
2021-11-12T19:02:27.478+0800 [INFO]  raft: aborting pipeline replication to peer {Voter 1 127.0.0.1:24338}
2021/11/12 19:02:27 [ERR] raft-net: Failed to flush response: write tcp 127.0.0.1:24338->127.0.0.1:58832: write: broken pipe
--- PASS: TestMultipleNodes (1.25s)
PASS
ok      command-line-arguments  1.279s
```
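Under the hood, the distributed log plugs the commit log into `hashicorp/raft`
(one of this project's dependencies). Raft hands every committed entry to a
finite-state machine via the `raft.FSM` interface; the sketch below shows the
shape of that hookup with illustrative names, not this repo's actual
`internal/log` code:

```go
package example

import (
	"errors"
	"io"

	"github.com/hashicorp/raft"
)

// fsm is a toy finite-state machine: it appends committed entries to an
// in-memory slice. The real implementation appends to the on-disk log.
type fsm struct {
	records [][]byte
}

// Apply is called by Raft once an entry has been replicated to a majority.
func (f *fsm) Apply(entry *raft.Log) interface{} {
	f.records = append(f.records, entry.Data)
	return uint64(len(f.records) - 1) // offset assigned to the record
}

// Snapshot and Restore support log compaction; they are stubbed out here.
func (f *fsm) Snapshot() (raft.FSMSnapshot, error) {
	return nil, errors.New("snapshots not implemented in this sketch")
}

func (f *fsm) Restore(rc io.ReadCloser) error { return rc.Close() }

// Compile-time check that fsm satisfies raft.FSM.
var _ raft.FSM = (*fsm)(nil)
```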
Test the distributed service end-to-end (it uses Raft for consensus and log
replication, and Serf for service discovery and cluster membership):

```sh
$ make testagent
cp test/policy.csv "/home/neo/dev/work/repo/github/commitlog/.config/policy.csv"
cp test/model.conf "/home/neo/dev/work/repo/github/commitlog/.config/model.conf"
go test -v -race ./internal/agent/agent_test.go
=== RUN   TestAgent
2021-11-13T16:09:38.974+0800 [INFO]  raft: Initial configuration (index=0): []
2021-11-13T16:09:38.975+0800 [INFO]  raft: Node at [::]:23314 [Follower] entering Follower state (Leader: "")
2021-11-13T16:09:40.028+0800 [WARN]  raft: Heartbeat timeout from "" reached, starting election
2021-11-13T16:09:40.028+0800 [INFO]  raft: Node at [::]:23314 [Candidate] entering Candidate state in term 2
2021-11-13T16:09:40.033+0800 [DEBUG] raft: Votes needed: 1
2021-11-13T16:09:40.033+0800 [DEBUG] raft: Vote granted from 0 in term 2. Tally: 1
2021-11-13T16:09:40.033+0800 [INFO]  raft: Election won. Tally: 1
2021-11-13T16:09:40.033+0800 [INFO]  raft: Node at [::]:23314 [Leader] entering Leader state
2021/11/13 16:09:40 [INFO] serf: EventMemberJoin: 0 127.0.0.1
2021-11-13T16:09:41.004+0800 [INFO]  raft: Initial configuration (index=0): []
2021-11-13T16:09:41.004+0800 [INFO]  raft: Node at [::]:23316 [Follower] entering Follower state (Leader: "")
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 1 127.0.0.1
2021/11/13 16:09:41 [DEBUG] memberlist: Initiating push/pull sync with:  127.0.0.1:23313
2021/11/13 16:09:41 [DEBUG] memberlist: Stream connection from=127.0.0.1:51938
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 1 127.0.0.1
2021-11-13T16:09:41.013+0800 [INFO]  raft: Updating configuration with AddStaging (1, 127.0.0.1:23316) to [{Suffrage:Voter ID:0 Address:[::]:23314} {Suffrage:Voter ID:1 Address:127.0.0.1:23316}]
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 0 127.0.0.1
2021-11-13T16:09:41.013+0800 [INFO]  raft: Added peer 1, starting replication
2021-11-13T16:09:41.013+0800  DEBUG  membership  discovery/membership.go:165  failed to join  {"error": "node is not the leader", "name": "0", "rpc_addr": "127.0.0.1:23314"}
2021/11/13 16:09:41 [DEBUG] raft-net: [::]:23316 accepted connection from: 127.0.0.1:46600
2021-11-13T16:09:41.025+0800 [INFO]  raft: Initial configuration (index=0): []
2021-11-13T16:09:41.025+0800 [INFO]  raft: Node at [::]:23318 [Follower] entering Follower state (Leader: "")
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 2 127.0.0.1
2021/11/13 16:09:41 [DEBUG] memberlist: Initiating push/pull sync with:  127.0.0.1:23313
2021/11/13 16:09:41 [DEBUG] memberlist: Stream connection from=127.0.0.1:51942
2021-11-13T16:09:41.030+0800 [WARN]  raft: Failed to get previous log: 3 rpc error: code = Code(404) desc = offset out of range: 3 (last: 0)
2021-11-13T16:09:41.030+0800 [WARN]  raft: AppendEntries to {Voter 1 127.0.0.1:23316} rejected, sending older logs (next: 1)
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 2 127.0.0.1
2021-11-13T16:09:41.031+0800 [INFO]  raft: Updating configuration with AddStaging (2, 127.0.0.1:23318) to [{Suffrage:Voter ID:0 Address:[::]:23314} {Suffrage:Voter ID:1 Address:127.0.0.1:23316} {Suffrage:Voter ID:2 Address:127.0.0.1:23318}]
2021-11-13T16:09:41.031+0800 [INFO]  raft: Added peer 2, starting replication
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 1 127.0.0.1
2021-11-13T16:09:41.031+0800 [INFO]  raft: pipelining replication to peer {Voter 1 127.0.0.1:23316}
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 0 127.0.0.1
2021-11-13T16:09:41.031+0800  DEBUG  membership  discovery/membership.go:165  failed to join  {"error": "node is not the leader", "name": "1", "rpc_addr": "127.0.0.1:23316"}
2021-11-13T16:09:41.032+0800  DEBUG  membership  discovery/membership.go:165  failed to join  {"error": "node is not the leader", "name": "0", "rpc_addr": "127.0.0.1:23314"}
2021/11/13 16:09:41 [DEBUG] raft-net: [::]:23318 accepted connection from: 127.0.0.1:39724
2021-11-13T16:09:41.045+0800 [WARN]  raft: Failed to get previous log: 4 rpc error: code = Code(404) desc = offset out of range: 4 (last: 0)
2021-11-13T16:09:41.046+0800 [WARN]  raft: AppendEntries to {Voter 2 127.0.0.1:23318} rejected, sending older logs (next: 1)
2021-11-13T16:09:41.046+0800 [INFO]  raft: pipelining replication to peer {Voter 2 127.0.0.1:23318}
2021/11/13 16:09:41 [DEBUG] raft-net: [::]:23316 accepted connection from: 127.0.0.1:46606
2021/11/13 16:09:41 [DEBUG] raft-net: [::]:23318 accepted connection from: 127.0.0.1:39728
2021/11/13 16:09:41 [INFO] serf: EventMemberJoin: 2 127.0.0.1
2021/11/13 16:09:41 [DEBUG] serf: messageJoinType: 1
2021/11/13 16:09:41 [DEBUG] serf: messageJoinType: 1
2021/11/13 16:09:41 [DEBUG] serf: messageJoinType: 2
2021/11/13 16:09:41 [DEBUG] serf: messageJoinType: 1
2021/11/13 16:09:41 [DEBUG] serf: messageJoinType: 2

...

2021-11-13T16:09:44.053+0800  INFO  server  zap/options.go:212  finished unary call with code OK  {"grpc.start_time": "2021-11-13T16:09:44+08:00", "system": "grpc", "span.kind": "server", "grpc.service": "log.v1.Log", "grpc.method": "Produce", "peer.address": "127.0.0.1:42768", "grpc.code": "OK", "grpc.time_ns": 686930}
2021-11-13T16:09:44.055+0800  INFO  server  zap/options.go:212  finished unary call with code OK  {"grpc.start_time": "2021-11-13T16:09:44+08:00", "system": "grpc", "span.kind": "server", "grpc.service": "log.v1.Log", "grpc.method": "Consume", "peer.address": "127.0.0.1:42768", "grpc.code": "OK", "grpc.time_ns": 375171}
2021-11-13T16:09:47.083+0800  INFO  server  zap/options.go:212  finished unary call with code OK  {"grpc.start_time": "2021-11-13T16:09:47+08:00", "system": "grpc", "span.kind": "server", "grpc.service": "log.v1.Log", "grpc.method": "Consume", "peer.address": "127.0.0.1:46612", "grpc.code": "OK", "grpc.time_ns": 388583}
2021-11-13T16:09:47.085+0800  ERROR  server  zap/options.go:212  finished unary call with code Code(404)  {"grpc.start_time": "2021-11-13T16:09:47+08:00", "system": "grpc", "span.kind": "server", "grpc.service": "log.v1.Log", "grpc.method": "Consume", "peer.address": "127.0.0.1:42768", "error": "rpc error: code = Code(404) desc = offset out of range: 1", "grpc.code": "Code(404)", "grpc.time_ns": 337750}
github.com/grpc-ecosystem/go-grpc-middleware/logging/zap.DefaultMessageProducer
    /home/neo/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/logging/zap/options.go:212
github.com/grpc-ecosystem/go-grpc-middleware/logging/zap.UnaryServerInterceptor.func1
    /home/neo/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/logging/zap/server_interceptors.go:39
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1
    /home/neo/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:25
github.com/grpc-ecosystem/go-grpc-middleware/tags.UnaryServerInterceptor.func1
    /home/neo/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/tags/interceptors.go:23
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1
    /home/neo/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:25
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1
    /home/neo/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.3.0/chain.go:34
github.com/cedrickchee/commitlog/api/v1._Log_Consume_Handler
    /home/neo/dev/work/repo/github/commitlog/api/v1/log_grpc.pb.go:189
google.golang.org/grpc.(*Server).processUnaryRPC
    /home/neo/go/pkg/mod/google.golang.org/grpc@v1.41.0/server.go:1279
google.golang.org/grpc.(*Server).handleStream
    /home/neo/go/pkg/mod/google.golang.org/grpc@v1.41.0/server.go:1608
google.golang.org/grpc.(*Server).serveStreams.func1.2
    /home/neo/go/pkg/mod/google.golang.org/grpc@v1.41.0/server.go:923
2021/11/13 16:09:47 [DEBUG] serf: messageLeaveType: 0
2021/11/13 16:09:47 [DEBUG] serf: messageLeaveType: 0
...
2021/11/13 16:09:47 [INFO] serf: EventMemberLeave: 0 127.0.0.1
2021/11/13 16:09:47 [DEBUG] serf: messageLeaveType: 0
2021/11/13 16:09:47 [DEBUG] serf: messageLeaveType: 0
...
2021/11/13 16:09:47 [INFO] serf: EventMemberLeave: 0 127.0.0.1
2021-11-13T16:09:47.588+0800  DEBUG  membership  discovery/membership.go:165  failed to leave  {"error": "node is not the leader", "name": "0", "rpc_addr": "127.0.0.1:23314"}
2021/11/13 16:09:47 [INFO] serf: EventMemberLeave: 0 127.0.0.1
2021-11-13T16:09:47.588+0800  DEBUG  membership  discovery/membership.go:165  failed to leave  {"error": "node is not the leader", "name": "0", "rpc_addr": "127.0.0.1:23314"}
2021/11/13 16:09:48 [ERR] raft-net: Failed to accept connection: mux: server closed
2021-11-13T16:09:48.790+0800 [INFO]  raft: aborting pipeline replication to peer {Voter 1 127.0.0.1:23316}
2021-11-13T16:09:48.790+0800 [INFO]  raft: aborting pipeline replication to peer {Voter 2 127.0.0.1:23318}
2021/11/13 16:09:48 [DEBUG] serf: messageLeaveType: 1
2021/11/13 16:09:48 [DEBUG] serf: messageLeaveType: 1
...
2021/11/13 16:09:49 [INFO] serf: EventMemberLeave: 1 127.0.0.1
2021/11/13 16:09:49 [DEBUG] serf: messageLeaveType: 1
...
2021/11/13 16:09:49 [INFO] serf: EventMemberLeave: 1 127.0.0.1
2021-11-13T16:09:49.412+0800  DEBUG  membership  discovery/membership.go:165  failed to leave  {"error": "node is not the leader", "name": "1", "rpc_addr": "127.0.0.1:23316"}
2021/11/13 16:09:50 [DEBUG] serf: messageLeaveType: 1
2021-11-13T16:09:50.415+0800 [WARN]  raft: Heartbeat timeout from "[::]:23314" reached, starting election
2021-11-13T16:09:50.415+0800 [INFO]  raft: Node at [::]:23318 [Candidate] entering Candidate state in term 3
2021/11/13 16:09:50 [DEBUG] raft-net: [::]:23316 accepted connection from: 127.0.0.1:46614
2021-11-13T16:09:50.421+0800 [ERROR] raft: Failed to make RequestVote RPC to {Voter 0 [::]:23314}: dial tcp [::]:23314: connect: connection refused
2021-11-13T16:09:50.423+0800 [DEBUG] raft: Votes needed: 2
2021-11-13T16:09:50.423+0800 [DEBUG] raft: Vote granted from 2 in term 3. Tally: 1
2021-11-13T16:09:50.442+0800 [WARN]  raft: Rejecting vote request from [::]:23318 since we have a leader: [::]:23314
2021-11-13T16:09:50.917+0800 [WARN]  raft: Heartbeat timeout from "[::]:23314" reached, starting election
2021-11-13T16:09:50.917+0800 [INFO]  raft: Node at [::]:23316 [Candidate] entering Candidate state in term 3
2021-11-13T16:09:50.921+0800 [ERROR] raft: Failed to make RequestVote RPC to {Voter 0 [::]:23314}: dial tcp [::]:23314: connect: connection refused
2021-11-13T16:09:50.924+0800 [DEBUG] raft: Votes needed: 2
2021-11-13T16:09:50.924+0800 [DEBUG] raft: Vote granted from 1 in term 3. Tally: 1
2021/11/13 16:09:50 [DEBUG] raft-net: [::]:23318 accepted connection from: 127.0.0.1:39744
2021-11-13T16:09:50.943+0800 [INFO]  raft: Duplicate RequestVote for same term: 3
2021/11/13 16:09:51 [ERR] raft-net: Failed to accept connection: mux: server closed
2021/11/13 16:09:51 [INFO] serf: EventMemberLeave: 2 127.0.0.1
2021-11-13T16:09:51.927+0800 [WARN]  raft: Election timeout reached, restarting election
2021-11-13T16:09:51.928+0800 [INFO]  raft: Node at [::]:23318 [Candidate] entering Candidate state in term 4
2021/11/13 16:09:51 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2021-11-13T16:09:51.932+0800 [ERROR] raft: Failed to make RequestVote RPC to {Voter 1 127.0.0.1:23316}: EOF
2021-11-13T16:09:51.933+0800 [ERROR] raft: Failed to make RequestVote RPC to {Voter 0 [::]:23314}: dial tcp [::]:23314: connect: connection refused
2021-11-13T16:09:51.935+0800 [DEBUG] raft: Votes needed: 2
2021-11-13T16:09:51.936+0800 [DEBUG] raft: Vote granted from 2 in term 4. Tally: 1
2021/11/13 16:09:51 [INFO] serf: EventMemberLeave: 1 127.0.0.1
2021/11/13 16:09:51 [INFO] serf: EventMemberFailed: 2 127.0.0.1
2021/11/13 16:09:52 [ERR] raft-net: Failed to accept connection: mux: server closed
--- PASS: TestAgent (13.06s)
PASS
ok      command-line-arguments  13.088s
```
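The agent test above also exercises Serf-based service discovery (the
`serf: EventMemberJoin` lines). Roughly, each node joins a gossip cluster and
watches membership events. The sketch below uses the public `hashicorp/serf`
API with illustrative addresses and tags, not this repo's `internal/discovery`
code:

```go
package main

import (
	"log"

	"github.com/hashicorp/serf/serf"
)

func main() {
	events := make(chan serf.Event)

	config := serf.DefaultConfig()
	config.NodeName = "node-1"
	config.MemberlistConfig.BindAddr = "127.0.0.1"
	config.MemberlistConfig.BindPort = 8401
	config.EventCh = events
	// Tags let other members discover this node's RPC address.
	config.Tags = map[string]string{"rpc_addr": "127.0.0.1:8400"}

	s, err := serf.Create(config)
	if err != nil {
		log.Fatal(err)
	}
	// Join an existing cluster through any known member.
	if _, err := s.Join([]string{"127.0.0.1:8402"}, true); err != nil {
		log.Printf("failed to join: %v", err)
	}

	// React to membership changes (join, leave, failure).
	for e := range events {
		if me, ok := e.(serf.MemberEvent); ok {
			for _, m := range me.Members {
				log.Printf("%s: %s %s", e.EventType(), m.Name, m.Tags["rpc_addr"])
			}
		}
	}
}
```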
## Deployment

### Deploy Applications with Kubernetes Locally

We will deploy a cluster of our service. In particular, we will:

- Set up with Kubernetes and Helm so that we can orchestrate our service on both
  our local machine and later on a cloud platform.
- Run a cluster of our service on our local machine.

**Install `kubectl`**

The Kubernetes command-line tool, `kubectl`, is used to run commands against
Kubernetes clusters.

If you're using Linux, you can install `kubectl` by following
["Install and Set Up kubectl on Linux"](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/).

`kubectl` needs a Kubernetes cluster and its API to talk to before it can do
anything. We'll use the Kind tool to run a local Kubernetes cluster in Docker.

#### Use Kind for Local Development and Continuous Integration

[Kind](https://kind.sigs.k8s.io) is a tool developed by the Kubernetes team to
run local Kubernetes clusters using Docker containers as nodes.

To install Kind, run the following:

```sh
$ go get sigs.k8s.io/kind
go: downloading sigs.k8s.io/kind v0.11.1
go: downloading github.com/spf13/cobra v1.1.1
go: downloading github.com/alessio/shellescape v1.4.1
go: downloading golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c
go: downloading github.com/BurntSushi/toml v0.3.1
go: downloading github.com/evanphx/json-patch/v5 v5.2.0
go: downloading github.com/pelletier/go-toml v1.8.1
go: downloading gopkg.in/yaml.v2 v2.2.8
```

(On Go 1.17 and newer, use `go install sigs.k8s.io/kind@v0.11.1` instead.)

You can create a Kind cluster by running:

```sh
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊
```

You can then verify that Kind created your cluster and configured `kubectl` to
use it by running the following:

```sh
$ kubectl cluster-info
> Kubernetes control plane is running at https://127.0.0.1:36313
CoreDNS is running at https://127.0.0.1:36313/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```

We now have a running Kubernetes cluster; let's run our service on it.

To run our service in Kubernetes, we'll need a Docker image, and our Docker
image will need an executable entry point. We've written an agent CLI that
serves as our service's executable.
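The agent CLI lives in `cmd/commitlog` and, judging by the build output below,
is built with `spf13/cobra` and `viper`. A minimal sketch of such an entry
point follows; the flag names and wiring here are illustrative, not the repo's
actual ones:

```go
package main

import (
	"log"

	"github.com/spf13/cobra"
)

func main() {
	var bindAddr string

	cmd := &cobra.Command{
		Use:   "commitlog",
		Short: "Run a commitlog agent node",
		RunE: func(cmd *cobra.Command, args []string) error {
			// The real CLI builds an agent config from flags (and viper),
			// starts the agent, and blocks until shutdown.
			log.Printf("starting agent, bind-addr=%s", bindAddr)
			return nil
		},
	}
	cmd.Flags().StringVar(&bindAddr, "bind-addr", "127.0.0.1:8401", "address to bind Serf on")

	if err := cmd.Execute(); err != nil {
		log.Fatal(err)
	}
}
```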
### Build Your Docker Image

Build the image and load it into your Kind cluster by running:

```sh
$ make build-docker
docker build -t github.com/cedrickchee/commitlog:0.0.1 .
Sending build context to Docker daemon  1.341MB
Step 1/7 : FROM golang:1.17.3-alpine AS build
1.17.3-alpine: Pulling from library/golang
97518928ae5f: Pull complete 
b78c28b3bbf7: Pull complete 
248309d37e25: Pull complete 
c91f41641737: Pull complete 
e372233a5e04: Pull complete 
Digest: sha256:55da409cc0fe11df63a7d6962fbefd1321fedc305d9969da636876893e289e2d
Status: Downloaded newer image for golang:1.17.3-alpine
 ---> 3a38ce03c951
Step 2/7 : WORKDIR /go/src/commitlog
 ---> Running in 45cf5ec9d633
Removing intermediate container 45cf5ec9d633
 ---> 8e78d40c5f5d
Step 3/7 : COPY . .
 ---> 60bc04f1db2f
Step 4/7 : RUN CGO_ENABLED=0 go build -o /go/bin/commitlog ./cmd/commitlog
 ---> Running in 9ddbb287dae4
go: downloading github.com/spf13/cobra v1.2.1
go: downloading github.com/spf13/viper v1.9.0
go: downloading github.com/hashicorp/raft v1.1.1
go: downloading github.com/soheilhy/cmux v0.1.5
go: downloading go.uber.org/zap v1.19.1
...
go: downloading github.com/mattn/go-colorable v0.1.6
go: downloading github.com/hashicorp/errwrap v1.0.0
go: downloading golang.org/x/crypto v0.0.0-20210817164053-32db794688a5
Removing intermediate container 9ddbb287dae4
 ---> 6d2eb3145770
Step 5/7 : FROM scratch
 ---> 
Step 6/7 : COPY --from=build /go/bin/commitlog /bin/commitlog
 ---> bd5bc56b75bf
Step 7/7 : ENTRYPOINT ["/bin/commitlog"]
 ---> Running in 98d7271a2a24
Removing intermediate container 98d7271a2a24
 ---> 2d0f44d05f46
Successfully built 2d0f44d05f46
Successfully tagged github.com/cedrickchee/commitlog:0.0.1

$ kind load docker-image github.com/cedrickchee/commitlog:0.0.1
Image: "github.com/cedrickchee/commitlog:0.0.1" with ID "sha256:2d0f44d05f46ecbf6860bd5240bfbd90d4cf4814f3fd90c1bee3c75d7bb460bc" not yet present on node "kind-control-plane", loading...
```

### Configure and Deploy Your Service with Helm

Let's look at how we can configure and run a cluster of our service in
Kubernetes with Helm.

[Helm](https://helm.sh) is the package manager for Kubernetes that enables you
to distribute and install services in Kubernetes.

To [install Helm from script](https://helm.sh/docs/intro/install/#from-script),
run this command:

```sh
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh

# Workaround: the install script broke on Linux; see https://github.com/helm/helm/issues/10266
$ export DESIRED_VERSION=v3.7.1
$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
```

Now we're ready to deploy the service in our Kubernetes cluster.

#### Install Your Helm Chart

We can install our Helm chart in our Kind cluster to run a cluster of our service.

You can see what Helm renders by running:

```sh
$ helm template commitlog deploy/commitlog
```

Now, install the chart by running this command:

```sh
$ helm install commitlog deploy/commitlog
NAME: commitlog
LAST DEPLOYED: Tue Nov 16 23:00:38 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
```

Wait a few seconds and you'll see Kubernetes set up three pods. You can list
them by running `$ kubectl get pods`.
When all three pods are ready, we can try requesting the API.

```sh
NAME          READY   STATUS    RESTARTS   AGE
commitlog-0   1/1     Running   0          3m37s
commitlog-1   1/1     Running   0          2m27s
commitlog-2   1/1     Running   0          77s
```

We can tell Kubernetes to forward a pod's or a Service's port to a port on our
machine, so we can request a service running inside Kubernetes without a load
balancer:

```sh
$ kubectl port-forward pod/commitlog-0 8500:8400
Forwarding from 127.0.0.1:8500 -> 8400
Forwarding from [::1]:8500 -> 8400
```

Now we can request our service from a program running outside Kubernetes at
`:8500`.

Run this command to request the list of servers from our service and print it:

```sh
$ go run cmd/getservers/main.go -addr=":8500"
```

You should see the following output:

```sh
servers:
- id:"commitlog-0" rpc_addr:"0.0.0.0:8400"
- id:"commitlog-1" rpc_addr:"0.0.0.0:8400"
- id:"commitlog-2" rpc_addr:"0.0.0.0:8400"
```

This means all three servers have successfully joined the cluster and are
coordinating with each other.
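For reference, `cmd/getservers/main.go` is a small gRPC client. Below is a
hypothetical sketch of what such a client looks like; the actual client and
message names are generated into `api/v1` and may differ, and the plaintext
dial is only to keep the example short:

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	"google.golang.org/grpc"

	// Hypothetical import of the generated log.v1.Log client.
	api "github.com/cedrickchee/commitlog/api/v1"
)

func main() {
	addr := flag.String("addr", ":8400", "service address")
	flag.Parse()

	conn, err := grpc.Dial(*addr, grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// NewLogClient, GetServers, and GetServersRequest are assumed names for
	// the generated client; check api/v1 for the real ones.
	client := api.NewLogClient(conn)
	res, err := client.GetServers(context.Background(), &api.GetServersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("servers:")
	for _, server := range res.Servers {
		fmt.Printf("- %v\n", server)
	}
}
```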
#### Deploy Service with Kubernetes to the Cloud

1. Create a Google Kubernetes Engine (GKE) cluster
   - Sign up with Google Cloud
   - Create a Kubernetes cluster
   - Install and authenticate the gcloud CLI

     Get your project's ID, and configure gcloud to use the project by default
     by running the following:

     ```sh
     $ PROJECT_ID=$(gcloud projects list | tail -n 1 | cut -d' ' -f1)
     $ gcloud config set project $PROJECT_ID
     ```

   - Push our service's image to Google's Container Registry (GCR)

     ```sh
     $ gcloud auth configure-docker
     $ docker tag github.com/cedrickchee/commitlog:0.0.1 \
         gcr.io/$PROJECT_ID/commitlog:0.0.1
     $ docker push gcr.io/$PROJECT_ID/commitlog:0.0.1
     ```

   - Configure kubectl

     The last bit of setup allows kubectl and Helm to call our GKE cluster:

     ```sh
     $ gcloud container clusters get-credentials commitlog --zone us-central1-a
     Fetching cluster endpoint and auth data.
     kubeconfig entry generated for commitlog.
     ```

2. Install Metacontroller

   [Metacontroller](https://metacontroller.app) is a Kubernetes add-on that
   makes it easy to write and deploy custom controllers with simple scripts.

   Install the Metacontroller chart:

   ```sh
   $ kubectl create namespace metacontroller
   $ helm install metacontroller metacontroller
   NAME: metacontroller
   LAST DEPLOYED: Sat Nov 20 23:33:42 2021
   NAMESPACE: default
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
   ```

3. Deploy our service to our GKE cluster and try it.

Deploy our distributed service to the cloud by running the following command:

```sh
$ helm install commitlog commitlog \
--set image.repository=gcr.io/$PROJECT_ID/commitlog \
--set service.lb=true
```

You can watch as the services come up by passing the `-w` flag:

```sh
$ kubectl get services -w
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
commitlog     ClusterIP      None           <none>        8400/TCP,8401/TCP,8401/UDP   8m53s
commitlog-0   LoadBalancer   10.96.149.92   <pending>     8400:32735/TCP               8m25s
commitlog-1   LoadBalancer   10.96.15.175   <pending>     8400:30945/TCP               8m25s
commitlog-2   LoadBalancer   10.96.90.220   <pending>     8400:32552/TCP               8m25s
kubernetes    ClusterIP      10.96.0.1      <none>        443/TCP                      14m
```

When all three load balancers are up, we can verify that our client connects to
our service running in the cloud and that our service nodes discovered each
other:

```sh
$ ADDR=$(kubectl get service -l app=service-per-pod -o go-template=\
'{{range .items}}\
{{(index .status.loadBalancer.ingress 0).ip}}{{"\n"}}\
{{end}}'\
| head -n 1)

$ go run cmd/getservers/main.go -addr=$ADDR:8400
servers:
- id:"commitlog-0" rpc_addr:"commitlog-0.commitlog.default.svc.cluster.local:8400"
- id:"commitlog-1" rpc_addr:"commitlog-1.commitlog.default.svc.cluster.local:8400"
- id:"commitlog-2" rpc_addr:"commitlog-2.commitlog.default.svc.cluster.local:8400"
```