Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/livewareproblems/hyper
Erlang implementation of HyperLogLog
- Host: GitHub
- URL: https://github.com/livewareproblems/hyper
- Owner: LivewareProblems
- License: mit
- Created: 2021-03-22T14:54:59.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2024-06-25T16:47:57.000Z (6 months ago)
- Last Synced: 2024-09-19T23:18:57.443Z (3 months ago)
- Topics: erlang, hacktoberfest
- Language: Erlang
- Homepage:
- Size: 436 KB
- Stars: 16
- Watchers: 2
- Forks: 1
- Open Issues: 7
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
README
# HyperLogLog for Erlang
![Hex.pm](https://img.shields.io/hexpm/v/hyper?style=flat-square)
![Hex.pm](https://img.shields.io/hexpm/l/hyper?style=flat-square)

This is an implementation of the HyperLogLog algorithm in
Erlang. Using HyperLogLog you can estimate the cardinality of very
large data sets using constant memory. The relative error is
`1.04 / sqrt(2^P)`. When creating a new HyperLogLog filter, you provide
the precision P, allowing you to trade memory for accuracy. The union
of two filters is lossless.
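
For a feel of that trade-off, the expected relative error for a few
precision values can be computed directly in the shell (a
back-of-the-envelope sketch, not part of the library API):

```erlang
1> [{P, 1.04 / math:sqrt(1 bsl P)} || P <- [12, 14, 16]].
[{12,0.01625},{14,0.008125},{16,0.0040625}]
```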

In practice this allows you to build efficient analytics systems. For
example, you can create a new filter in each mapper and feed it a
portion of your dataset, while the reducers simply union together all
the filters they receive. The filter you end up with is exactly the
same filter as if you had sequentially inserted all the data into a
single filter.
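
As a minimal sketch of that map/reduce pattern (assuming, as in the
upstream library, that `hyper:union/1` takes a list of filters built
with the same precision):

```erlang
P = 16,
%% Two "mappers", each seeing a shard of the data (note the overlap).
Shards = [[<<"a">>, <<"b">>], [<<"b">>, <<"c">>]],
Filters = [lists:foldl(fun hyper:insert/2, hyper:new(P), Shard)
           || Shard <- Shards],
%% The "reducer" unions the per-shard filters losslessly.
Merged = hyper:union(Filters),
hyper:card(Merged). % estimates 3: <<"b">> is only counted once
```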

In addition to the base algorithm, we have implemented the new
estimator based on Mean Limit, as described in this great
[paper by Otmar Ertl][]. This new estimator greatly improves the
estimates for lower cardinalities, while using a single estimator for
the whole range of cardinalities.

## TODO
- [x] Use rebar3
- [x] Work on OTP 23
- [x] Fix the estimator
- [x] Fix `reduce_precision`
- [x] add `reduce_precision` for array, allowing unions
- [ ] Better document the main module
- [x] Move documentation to ExDoc
- [x] Delete dead code
- [x] Rework test suite to be nice to modify
- [ ] Rework Intersection using this [paper by Otmar Ertl][]
- [ ] Redo benchmarks

## Usage
```erlang
1> hyper:insert(<<"foobar">>, hyper:insert(<<"quux">>, hyper:new(4))).
{hyper,4,
{hyper_binary,{dense,<<0,0,0,0,0,0,0,0,64,0,0,0>>,
[{8,1}],
1,16}}}
2> hyper:card(v(-1)).
2.136502281992361
```

The errors introduced by estimation can be seen in this example:
```erlang
3> rand:seed(exsss, {1, 2, 3}).
{#{bits => 58,jump => #Fun<...>,
   next => #Fun<...>,type => exsss,
   uniform => #Fun<...>,
   uniform_n => #Fun<...>},
 [117085240290607817|199386643319833935]}
4> Run = fun (P, Card) -> hyper:card(lists:foldl(fun (_, H) -> Int = rand:uniform(10000000000000), hyper:insert(<<Int:64/integer>>, H) end, hyper:new(P), lists:seq(1, Card))) end.
#Fun<...>
5> Run(12, 10_000).
10038.192365345985
6> Run(14, 10_000).
9967.916262642864
7> Run(16, 10_000).
9972.832893293473
```

A filter can be persisted and read later. The serialized struct is
formatted for use with jiffy:
```erlang
8> Filter = hyper:insert(<<"foo">>, hyper:new(4)).
{hyper,4,
{hyper_binary,{dense,<<4,0,0,0,0,0,0,0,0,0,0,0>>,[],0,16}}}
9> Filter =:= hyper:from_json(hyper:to_json(Filter)).
true
```
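
Since the serialized form is jiffy-compatible, a round trip through
disk can look like this sketch (assuming the `jiffy` library is
available; it is not a dependency of this project):

```erlang
Filter = hyper:insert(<<"foo">>, hyper:new(4)),
ok = file:write_file("filter.json", jiffy:encode(hyper:to_json(Filter))),
{ok, Bin} = file:read_file("filter.json"),
Restored = hyper:from_json(jiffy:decode(Bin)),
Filter =:= Restored. % true, as in the direct round trip above
```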

**As of today, we only support the binary backend. More to come.**

You can select a different backend. See below for a description of why
you might want to do so. They serialize in exactly the same way, but
can't be mixed in memory.

## Is it any good?
No idea! I do not know of anyone that uses it extensively, but it is
relatively well tested. As far as I can tell, it is the only FOSS
implementation that does precision reduction properly!

## Hacking
### Documentation
We use ex_doc for documentation. In order to generate the docs, you
need to install it:
```bash
mix escript.install hex ex_doc
ex_doc --version
```

Then generate the docs, after targeting the correct version in `docs.sh`:
```bash
docs.sh
```

## Backends
Effort has been spent on implementing different backends in the
pursuit of finding the right performance trade-off. Fill rate refers
to how many registers have a value other than 0.

- `hyper_binary`: Fixed memory usage (6 bits * 2^P), fastest on insert,
  union, cardinality and serialization. Best default choice (see the
  sketch below).
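
To make the fixed memory usage concrete, the register array size in
bytes for a few precisions works out as follows (illustrative only; it
ignores any constant per-term overhead):

```erlang
1> [{P, (6 * (1 bsl P)) div 8} || P <- [12, 14, 16]].
[{12,3072},{14,12288},{16,49152}]
```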

You can also implement your own backend. In `test` there is a bunch of
tests run for all backends, including some PropEr tests. The test
suite will ensure your backend gives correct estimates and correctly
encodes/decodes the serialized filters.

## Fork
This is a fork of the original Hyper library by GameAnalytics, which
was no longer maintained.

The main differences are a move to the `rand` module for tests and to
`rebar3` as a build tool, in order to support OTP 23+.

The `carray` backend was dropped, as it was never moved out of
experimental status and could not be serialized for distributed use.
Some backends using NIFs may come back in the future.

The bisect implementation was dropped too. Its use case was limited and
it forced a dependency on a library that was not maintained either.

The gb and array backends were dropped for the time being as well.
The estimator was rebuilt following this [paper by Otmar Ertl][], as it
was broken for any precision other than 14. This should also provide
better estimation across the board for cardinality.

The `reduce_precision` function has been rebuilt properly, as it was
quite simply wrong. This fixed a lot of bugs for unions.

[paper by Otmar Ertl]: https://arxiv.org/abs/1706.07290