Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ymtszw/elm-broker
Data stream buffer for Elm applications, inspired by Apache Kafka
- Host: GitHub
- URL: https://github.com/ymtszw/elm-broker
- Owner: ymtszw
- License: bsd-3-clause
- Created: 2018-07-24T17:00:37.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2018-11-12T13:19:34.000Z (about 6 years ago)
- Last Synced: 2024-11-30T16:50:12.589Z (about 2 months ago)
- Topics: broker, buffer, elm, elm-lang, kafka, stream-processing
- Language: Elm
- Homepage: https://package.elm-lang.org/packages/ymtszw/elm-broker/latest/
- Size: 72.3 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
# elm-broker
[![CircleCI](https://circleci.com/gh/ymtszw/elm-broker/tree/master.svg?style=svg)](https://circleci.com/gh/ymtszw/elm-broker/tree/master)
Data stream buffer for Elm applications, inspired by [Apache Kafka](https://kafka.apache.org/).
## What is this?
- `Broker` is essentially a [circular buffer](https://www.wikiwand.com/en/Circular_buffer), internally using `Array`
- Read pointers (`Offset`) are exposed to clients, allowing "pull"-style data consumption, just as in Kafka
- Insertion (`append`), `read`, and `update` all take _O(1)_ time
- A buffer is made of multiple `Segment`s. Buffer size (= number of `Segment`s and size of each `Segment`) can be configured
- When the whole buffer is filled up, a new "cycle" begins and old `Segment`s are evicted one by one

## Expected Usage
- A `Broker` accepts an incoming data stream.
- Several **consumers** read ("pull") data from the `Broker` individually, each maintaining its own `Offset` as internal state.
- Consumers perform arbitrary operations on the acquired data, then read from the `Broker` again, starting after their previous `Offset`. Rinse and repeat.
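The pull-style loop above can be sketched roughly as follows. Function names (`Broker.initialize`, `append`, `read`, `readOldest`) follow the package documentation, but the exact signatures used here are assumptions, not verified against the published API:

```elm
module Example exposing (consumeAll, filledBroker)

import Broker exposing (Broker, Offset)


-- A Broker of 4 Segments holding 1000 items each
-- (argument order of `initialize` is assumed)
emptyBroker : Broker String
emptyBroker =
    Broker.initialize 4 1000


-- Producer side: append incoming items
filledBroker : Broker String
filledBroker =
    emptyBroker
        |> Broker.append "event1"
        |> Broker.append "event2"


-- Consumer side: pull items one by one, carrying the consumer's
-- own Offset forward as part of its state. `Nothing` means this
-- consumer has not read anything yet.
consumeAll : Maybe Offset -> Broker a -> ( List a, Maybe Offset )
consumeAll offset broker =
    let
        next =
            case offset of
                Nothing ->
                    Broker.readOldest broker

                Just o ->
                    Broker.read o broker
    in
    case next of
        Just ( item, newOffset ) ->
            let
                ( rest, finalOffset ) =
                    consumeAll (Just newOffset) broker
            in
            ( item :: rest, finalOffset )

        Nothing ->
            ( [], offset )
```

In a real application each consumer would store its `Offset` in its model and read incrementally on each update, rather than draining the buffer recursively in one pass.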
It is possible to have multiple `Broker`s in your application for different purposes.
However, you must be careful not to mix up `Offset`s produced by one `Broker` with those from another,
since `Offset`s are only valid for the `Broker` that generated them.
Wrapping `Offset`s in phantom types is one technique for enforcing this restriction.

## Remarks
- Technically, it can also perform _O(1)_ `delete`, but it is still unclear whether a `delete` API is wanted
  - Kafka itself now [supports this as an admin command](https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/DeleteRecordsCommand.scala)
- There are several major features I am interested in:
  - More sophisticated/efficient dump and reload. The current implementation is a plain old `encode` and `decoder` pair,
    which is potentially inefficient for large-capacity `Broker`s
  - Bulk append and bulk read
  - Callback mechanism around `Segment` eviction

## Development
Install the Elm Platform.
```sh
$ elm make
$ elm-test # full test
$ elm-test tests/MainTest.elm # only light-weight tests
```

## License
Copyright © 2018, [Yu Matsuzawa](https://github.com/ymtszw)
BSD-3-Clause