Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/brainlid/demo_service_pool_with_leader
Demonstration project that uses Registry to coordinate a Service Pool where the services have a single leader.
- Host: GitHub
- URL: https://github.com/brainlid/demo_service_pool_with_leader
- Owner: brainlid
- License: mit
- Created: 2017-11-08T13:37:34.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2017-11-22T18:00:43.000Z (about 7 years ago)
- Last Synced: 2024-12-07T12:27:25.029Z (16 days ago)
- Topics: meetup-demonstration
- Language: Elixir
- Homepage:
- Size: 32.2 KB
- Stars: 0
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# ServicePoolLeader
Demo project that shows how to create a pool of services across multiple nodes
in a cluster where only one is the "Leader" at a time. When the Leader node goes
down, a different node's service takes over as the Leader. It uses only built-in
Elixir/Erlang features.
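The README doesn't show the project's supervision tree, but given the features it lists later (a built-in `Registry` example and the `:rest_for_one` supervisor strategy), a hypothetical application module might look like the following sketch. All child module names besides `Registry` are assumptions.

```elixir
# Hypothetical sketch only -- the project's actual supervision tree
# is not shown in the README. Under :rest_for_one, if the Coordinator
# crashes, the TrackedService (started after it) is restarted too.
defmodule ServicePoolLeader.Application do
  use Application

  def start(_type, _args) do
    children = [
      # Duplicate keys let several services register under one pool key.
      {Registry, keys: :duplicate, name: ServicePoolLeader.Registry},
      ServicePoolLeader.Coordinator,
      ServicePoolLeader.TrackedService
    ]

    Supervisor.start_link(children, strategy: :rest_for_one)
  end
end
```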
## Usage
You can start multiple nodes on a single machine (each in a different terminal
window). Start 3 named nodes. Use these exact names, since the `join/0` function
assumes the nodes are named this way to make experimenting easy.

```bash
iex --sname a@localhost -S mix
iex --sname b@localhost -S mix
iex --sname c@localhost -S mix
```

In one of the IEx terminals, run `join/0` to link up the nodes as a cluster.
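The `join/0` implementation isn't reproduced in this README. Assuming the three node names above, a minimal sketch using `Node.connect/1` might look like this (the `ClusterSketch` module name is illustrative):

```elixir
# Illustrative sketch of a join helper: connect this node to the
# other named nodes. Node.connect/1 is idempotent, so it is safe
# to call from any of the three IEx sessions.
defmodule ClusterSketch do
  @nodes [:a@localhost, :b@localhost, :c@localhost]

  def join do
    @nodes
    |> Enum.reject(&(&1 == Node.self()))
    |> Enum.each(&Node.connect/1)

    # Return the list of nodes we are now connected to.
    Node.list()
  end
end
```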
```elixir
ServicePoolLeader.join()
```

Perform some work using the simple/naive approach. Try it from different nodes.
Who does the work?

```elixir
ServicePoolLeader.simple_work()
ServicePoolLeader.simple_work(10)
ServicePoolLeader.simple_work(5)
```

The other primary example is the `ServicePoolLeader.Coordinator` module. It
coordinates the `TrackedService`. The design decision was to keep the complexity
in the `Coordinator` and out of the services being managed.

Experiment with those examples using the following:
```elixir
ServicePoolLeader.tracked_work()
ServicePoolLeader.tracked_work("a")
ServicePoolLeader.tracked_work(10)
```

What happens when you kill one of the services? (You can use `:observer.start`
to explore and kill it.)

What happens when you kill one of the nodes? (`ctrl+c, ctrl+c`)
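When a service is killed, something has to notice. The feature list below mentions process monitoring; a minimal, hypothetical coordinator sketch (not the project's actual `Coordinator`) could watch pool members like this:

```elixir
# Illustrative sketch: monitor every service in the pool and react
# when one dies. The module name and state shape are assumptions.
defmodule CoordinatorSketch do
  use GenServer

  def start_link(member_pids) do
    GenServer.start_link(__MODULE__, member_pids)
  end

  @impl true
  def init(member_pids) do
    # A :DOWN message arrives here when any monitored process dies.
    Enum.each(member_pids, &Process.monitor/1)
    {:ok, %{members: member_pids}}
  end

  @impl true
  def handle_info({:DOWN, _ref, :process, pid, _reason}, state) do
    members = List.delete(state.members, pid)
    # A real coordinator would re-run its leader election here if
    # the dead process was the current leader.
    {:noreply, %{state | members: members}}
  end
end
```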
Some things to experiment with:
* Start multiple nodes that join as a cluster.
* Use `:observer.start` to see how it's running, inspect ETS tables and kill things.
* Kill the leader node. What happens?
* Restart that node and rejoin the cluster.
* Kill a non-leader node. What happens? Rejoin.
* Do a graceful shutdown of the leader node. What happens? Rejoin.

## Leadership Election Options
There are a number of Leader Election strategies and options you might consider.
* Hardware differences? Prefer to run on the "big" machine?
* Other services running? Prefer to run on a different node than service Y.
* You may want to keep two IO-intensive services from thrashing the disk on the same node.
* Allows it to still run together in a single-node setup.
* The oldest running service wins, assuming all else is equal.

This demo project elects the longest-running service as the leader.

## Features Used for Discussion/Experimentation
* Clustering
* `:observer.start` for killing and viewing ETS tables
* Process monitoring
* `:pg2`
* `:ets` tables
* `GenServer.multi_call()`
* Supervisor `:rest_for_one` strategy
* `Registry` example is built-in too

## Ideas for future experiments
* Provide a function or MFA to call when a new leader should be appointed
because the current leader left or went down. Externally define the function
that selects the leader.
* Add a new Supervisor to manage the Coordinator and TrackedService so the
`:rest_for_one` strategy is more controlled.
* Give the `TrackedService` unique behavior based on being the leader. It could
  ask the `Coordinator` if it is the leader.
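The "longest running wins" election rule described above can be sketched as a pure function. The `{pid, started_at}` pairs and the `ElectionSketch` module are hypothetical; the real project presumably tracks membership and start times in ETS:

```elixir
# Illustrative election sketch: given {pid, started_at} pairs, the
# leader is the longest-running member, i.e. the earliest start time.
defmodule ElectionSketch do
  def elect(members) when members != [] do
    {pid, _started_at} = Enum.min_by(members, fn {_pid, at} -> at end)
    pid
  end
end

# Example:
# ElectionSketch.elect([{:pid_a, 300}, {:pid_b, 100}, {:pid_c, 200}])
# returns :pid_b (earliest start time, so longest running)
```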