{"id":21072166,"url":"https://github.com/theartful/broadcast_queue","last_synced_at":"2025-10-26T07:06:00.304Z","repository":{"id":163543334,"uuid":"638951181","full_name":"theartful/broadcast_queue","owner":"theartful","description":"A blazingly fast™ single producer multiple consumer broadcast queue","archived":false,"fork":false,"pushed_at":"2025-02-15T20:52:38.000Z","size":114,"stargazers_count":13,"open_issues_count":0,"forks_count":2,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-04-03T21:01:32.066Z","etag":null,"topics":["broadcast-queue","concurrent-data-structure","cpp","lock-free","multiple-consumers","seqlock"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/theartful.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-10T12:56:32.000Z","updated_at":"2025-03-16T19:04:46.000Z","dependencies_parsed_at":"2024-11-19T18:56:17.609Z","dependency_job_id":null,"html_url":"https://github.com/theartful/broadcast_queue","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theartful%2Fbroadcast_queue","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theartful%2Fbroadcast_queue/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theartful%2Fbroadcast_queue/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/theartful%2Fbroadcast_queue/manifests","ow
ner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/theartful","download_url":"https://codeload.github.com/theartful/broadcast_queue/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254474569,"owners_count":22077311,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["broadcast-queue","concurrent-data-structure","cpp","lock-free","multiple-consumers","seqlock"],"created_at":"2024-11-19T18:55:46.401Z","updated_at":"2025-10-26T07:05:55.256Z","avatar_url":"https://github.com/theartful.png","language":"C++","readme":"# A single-producer, multiple-consumer broadcast queue for C++11\n\nThis repository contains an implementation of a fixed-size seqlock broadcast queue\nbased on the paper [Can Seqlocks Get Along With Programming Language Memory Models?][1]\nby Hans Boehm.\n\n## Semantics\n\nThis is a fixed-size circular queue that consists of two parts: a sender and\none or more receivers. Its API is inspired by Golang's channels. The sender\nbroadcasts its values to all receivers that have subscribed to it. When a receiver\nsubscribes to a sender, it starts receiving data from the point at which it subscribed.\n\nIf the sender writes over a part of the queue before a receiver dequeues it,\nthe receiver is considered to be lagging behind. In this case, the next time\nthe receiver tries to dequeue a value, it will receive a `broadcast_queue::Error::Lagged`\nerror. The receiver's position in the queue is then updated to the tip of the queue,\nwhich is the point where the sender will write the next value. 
Once the sender\nwrites the next value, the receiver will receive it as if it had just resubscribed.\n\nIf the sender dies, the channel is considered closed, and if a receiver tries\nto dequeue data from it, it will receive a `broadcast_queue::Error::Closed` error.\n\n## Reasons to use\n\nThe main setting for using this kind of implementation is real-time applications\nwhere slow receivers are not tolerated. Here, the `sender` doesn't care about\n`receiver`s not being able to catch up, and writes into the queue without giving\nthem any regard. For example, my personal use case was writing a server that\nserves real-time audio data to many clients. I can't have the server waiting\non clients, and I can't create a queue for each client since the memory usage\nwould be prohibitive. So instead, all clients share the same queue, and any lagging\nclient will simply drop elements after being notified.\n\n## Example\n\n``` C++\n#include \u003cbroadcast_queue.h\u003e\n#include \u003ccassert\u003e\n\nint main()\n{\n  static constexpr size_t CAPACITY = 3;\n  broadcast_queue::sender\u003cint\u003e sender{CAPACITY};\n\n  auto receiver = sender.subscribe();\n\n  sender.push(1); // always succeeds since this is a circular queue without any\n                  // notion of being full since we don't care about receivers!\n  sender.push(2);\n\n  auto receiver2 = sender.subscribe(); // will receive data after the point of subscription\n  sender.push(3);\n\n  int result;\n  broadcast_queue::Error error;\n\n  error = receiver.try_dequeue(\u0026result);\n  assert(error == broadcast_queue::Error::None);\n  assert(result == 1);\n\n  error = receiver.try_dequeue(\u0026result);\n  assert(error == broadcast_queue::Error::None);\n  assert(result == 2);\n\n  error = receiver.try_dequeue(\u0026result);\n  assert(error == broadcast_queue::Error::None);\n  assert(result == 3);\n\n  error = receiver2.try_dequeue(\u0026result);\n  assert(error == broadcast_queue::Error::None);\n  assert(result == 
3);\n}\n```\n\n## Is this queue lock-free?\n\nIt depends on the waiting strategy employed. The default strategy uses condition\nvariables, which require locks but result in lower CPU usage. Alternatively,\nthe semaphore waiting strategy is lock-free with faster read/write speeds, at the\ncost of higher CPU usage due to busy waiting.\n\n## Use\n\nAdd the following lines to your CMakeLists.txt file:\n```cmake\ninclude(FetchContent)\n\nFetchContent_Declare(\n  broadcast_queue\n  GIT_REPOSITORY    https://github.com/theartful/broadcast_queue\n  GIT_TAG           master\n)\n\nFetchContent_MakeAvailable(broadcast_queue)\n\ntarget_link_libraries(target PUBLIC broadcast_queue)\n```\n\n## TODO\n\n- [x] Implement the `semaphore` class on Windows.\n- [x] Implement the `semaphore` class on macOS.\n- [x] Support non-trivially-copyable and non-trivially-destructible data types:\n    - [x] Implement a lock-free bitmap allocator.\n    - [x] Implement a two-layer broadcast queue: the first layer consists\n    of pointers employing the same strategy already used, and the second layer\n    contains the actual data, allocated using the\n    aforementioned lock-free bitmap allocator.\n- [x] Try 128-bit atomic intrinsics on the x64 architecture (CMPXCHG16B) to get rid\nof seqlocks in the case where the stored data is 64 bits long.\n- [ ] Support multiple producers. 
Two approaches come to mind:\n    - [ ] Try out [flat combining][5].\n    - [ ] Or, more realistically, split the writer cursor into two: pending and\n    committed, where the pending cursor is always ahead of or equal to the committed\n    one, and the elements between them represent\n    the ones being written by producers at that moment.\n- [ ] Provide benchmark results against Java's LMAX Disruptor and similar implementations\nof the Disruptor pattern in C++.\n\n## Resources\n\n* [Can Seqlocks Get Along With Programming Language Memory Models?][1]\n* [Trading at light speed: designing low latency systems in C++ - David Gross - Meeting C++ 2022][2]\n* [LMAX Disruptor and the Concepts of Mechanical Sympathy][3]\n* [Building a Lock-free Multi-producer, Multi-consumer Queue for Tcmalloc - Matt Kulukundis - CppCon 21][4]\n* [Flat Combining and the Synchronization-Parallelism Tradeoff][5]\n\n[1]: https://dl.acm.org/doi/10.1145/2247684.2247688\n[2]: https://www.youtube.com/watch?v=8uAW5FQtcvE\n[3]: https://www.youtube.com/watch?v=Qho1QNbXBso\n[4]: https://www.youtube.com/watch?v=_qaKkHuHYE0\n[5]: https://people.csail.mit.edu/shanir/publications/Flat%20Combining%20SPAA%2010.pdf\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftheartful%2Fbroadcast_queue","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftheartful%2Fbroadcast_queue","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftheartful%2Fbroadcast_queue/lists"}