https://github.com/micro-tools/node-pohl
distributed redis queue messaging, makes sure that only a single instance actually works on the task, sequential callback implementation
- Host: GitHub
- URL: https://github.com/micro-tools/node-pohl
- Owner: micro-tools
- License: MIT
- Created: 2016-08-30T15:10:45.000Z (over 9 years ago)
- Default Branch: master
- Last Pushed: 2022-04-14T05:07:34.000Z (over 3 years ago)
- Last Synced: 2025-04-01T07:24:21.552Z (9 months ago)
- Topics: callback-style, fast, messages, redis, sequential
- Language: JavaScript
- Homepage:
- Size: 198 KB
- Stars: 4
- Watchers: 1
- Forks: 2
- Open Issues: 6
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# node-pohl
[![Build Status](https://travis-ci.org/krystianity/node-pohl.svg?branch=master)](https://travis-ci.org/krystianity/node-pohl)
- reliable, distributed RPC messages via Redis in a simple callback style
- ES6, 66%+ test coverage, lightweight, scalable & fast
- uses Redis pub/sub, but makes sure that only a single instance actually works on a task, via the Redlock algorithm
- an RPC library that feels like making simple callbacks, except that you make them between services instead of between classes (see the sketch below)
- simple circuit breaker to prevent timeout waves
- metric events
- pausable receiver (`.pause()`, `.resume()`) for blue/green deployment scenarios
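
To make "callbacks between services" concrete, here is a minimal, illustrative sketch of the caller side of such a pattern: request/response over Redis pub/sub with a per-call correlation id and a timeout so callers never hang. It uses `ioredis` and made-up names (`sendTask`, the `image:resize` topic) purely as assumptions; it is *not* pohl's actual API, which is shown in `./example/index.js`.

```js
// Illustration only (not pohl's API): callback-style RPC over Redis pub/sub.
const Redis = require("ioredis");
const { randomUUID } = require("crypto");

const pub = new Redis();
const sub = new Redis();

const pending = new Map(); // correlationId -> callback

// every caller listens on its own private reply topic
const replyTopic = `reply:${randomUUID()}`;
sub.subscribe(replyTopic);
sub.on("message", (_channel, raw) => {
  const { id, error, result } = JSON.parse(raw);
  const callback = pending.get(id);
  if (!callback) return; // reply arrived after the timeout fired
  pending.delete(id);
  callback(error ? new Error(error) : null, result);
});

// "send a task and get a callback", with a timeout so callers never hang
function sendTask(topic, payload, callback, timeoutMs = 1000) {
  const id = randomUUID();
  pending.set(id, callback);
  setTimeout(() => {
    if (pending.delete(id)) callback(new Error("task timed out"));
  }, timeoutMs);
  pub.publish(topic, JSON.stringify({ id, replyTopic, payload }));
}

sendTask("image:resize", { url: "http://example.com/a.png" }, (err, result) => {
  if (err) return console.error(err.message);
  console.log("result from another service:", result);
});
```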
# use case
- imagine you have enterprise microservices that have to make RPC/REST/SOAP calls to fetch or dispatch additional information while handling incoming requests
- now think of the overhead a protocol like HTTP can cause: depending on your infrastructure, you will most likely wait an extra 20-150ms per call
- there has to be a way to keep a standing connection that sends messages reliably, in a distributed, scalable and fast manner, with a fixed overhead you can calculate with
- all you need is a Redis cluster/sentinel setup (a single instance works as well); include this lib in your services and call two functions, that's about it
- message queuing, task locking, failovers and timeouts are wrapped in a super simple callback-like syntax that also scales inside your software to multiple topics and any number of message/task types (see the receiver sketch below)
- the overhead is a constant 3-5ms
- the benchmark hits 7500 rpc/s (full roundtrip) on a mobile i7 @ 2GHz against a single Redis 3.2.1 Docker instance, with a single sender
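
Below is the matching, equally illustrative receiver-side sketch: every instance subscribes to the task topic, but a per-task lock makes sure only one of them actually runs the handler. A single `SET NX PX` against one Redis node is used here as a simplified stand-in for the Redlock algorithm mentioned above; `resizeImage` and the key names are invented for the example and are not part of pohl.

```js
// Illustration only (not pohl's internals): receiver with a per-task lock.
const Redis = require("ioredis");

const sub = new Redis();    // subscriber connection (pub/sub mode)
const client = new Redis(); // regular connection for locks and replies

sub.subscribe("image:resize");
sub.on("message", async (_channel, raw) => {
  const { id, replyTopic, payload } = JSON.parse(raw);

  // every subscribed instance gets the message, but only the first one to
  // grab the lock key processes it; the others see null and drop the task
  const locked = await client.set(`lock:${id}`, "1", "PX", 5000, "NX");
  if (!locked) return;

  try {
    const result = await resizeImage(payload); // your actual task handler
    await client.publish(replyTopic, JSON.stringify({ id, result }));
  } catch (err) {
    await client.publish(replyTopic, JSON.stringify({ id, error: err.message }));
  }
});

// stand-in handler for the example
async function resizeImage(payload) {
  return { resized: true, source: payload.url };
}
```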
# how to use / install
- `npm install pohl`
- check ./example/index.js for a usage example
- run example with `npm start`
- run tests with `npm test` (requires a localhost Redis with default config)
- run benchmark with `npm run benchmark`
# other
- License: MIT
- Author: Christian Fröhlingsdorf