{"id":13439708,"url":"https://github.com/jbaldwin/libcoro","last_synced_at":"2025-05-15T05:07:12.372Z","repository":{"id":41540505,"uuid":"293608912","full_name":"jbaldwin/libcoro","owner":"jbaldwin","description":"C++20 coroutine library","archived":false,"fork":false,"pushed_at":"2025-05-03T20:06:07.000Z","size":797,"stargazers_count":740,"open_issues_count":15,"forks_count":72,"subscribers_count":15,"default_branch":"main","last_synced_at":"2025-05-03T20:26:29.765Z","etag":null,"topics":["coroutines","cpp20","cpplibrary"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jbaldwin.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-09-07T18:56:55.000Z","updated_at":"2025-05-03T18:26:24.000Z","dependencies_parsed_at":"2022-09-02T14:42:21.315Z","dependency_job_id":"360f9573-83a6-44ef-b906-8ae99b053392","html_url":"https://github.com/jbaldwin/libcoro","commit_stats":null,"previous_names":[],"tags_count":17,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jbaldwin%2Flibcoro","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jbaldwin%2Flibcoro/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jbaldwin%2Flibcoro/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jbaldwin%2Flibcoro/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jbaldwin","download_url":"https://codeload.github.
com/jbaldwin/libcoro/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254276447,"owners_count":22043867,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["coroutines","cpp20","cpplibrary"],"created_at":"2024-07-31T03:01:16.410Z","updated_at":"2025-05-15T05:07:07.363Z","avatar_url":"https://github.com/jbaldwin.png","language":"C++","readme":"# libcoro C++20 coroutine library\n\n[![CI](https://github.com/jbaldwin/libcoro/actions/workflows/ci-coverage.yml/badge.svg)](https://github.com/jbaldwin/libcoro/actions/workflows/ci-coverage.yml)\n[![Coverage Status](https://coveralls.io/repos/github/jbaldwin/libcoro/badge.svg?branch=main)](https://coveralls.io/github/jbaldwin/libcoro?branch=main)\n[![Codacy Badge](https://app.codacy.com/project/badge/Grade/c190d4920e6749d4b4d1a9d7d6687f4f)](https://www.codacy.com/gh/jbaldwin/libcoro/dashboard?utm_source=github.com\u0026amp;utm_medium=referral\u0026amp;utm_content=jbaldwin/libcoro\u0026amp;utm_campaign=Badge_Grade)\n[![language][badge.language]][language]\n[![license][badge.license]][license]\n\n**libcoro** is licensed under the Apache 2.0 license.\n\n**libcoro** is meant to provide low level coroutine constructs for building larger applications; the current focus is high performance networking coroutine support.\n\n## Overview\n* C++20 coroutines!\n* Modern Safe C++20 API\n* Higher level coroutine constructs\n    - [coro::sync_wait(awaitable)](#sync_wait)\n    - [coro::when_all(awaitable...) -\u003e awaitable](#when_all)\n    - [coro::when_any(awaitable...) 
-\u003e awaitable](#when_any)\n    - [coro::task\u003cT\u003e](#task)\n    - [coro::generator\u003cT\u003e](#generator)\n    - [coro::event](#event)\n    - [coro::latch](#latch)\n    - [coro::mutex](#mutex)\n    - [coro::shared_mutex](#shared_mutex)\n    - [coro::semaphore](#semaphore)\n    - [coro::ring_buffer\u003celement, num_elements\u003e](#ring_buffer)\n* Schedulers\n    - [coro::thread_pool](#thread_pool) for coroutine cooperative multitasking\n    - [coro::io_scheduler](#io_scheduler) for driving i/o events\n        - Can use `coro::thread_pool` for latency sensitive or long lived tasks.\n        - Can use inline task processing for thread per core or short lived tasks.\n        - Currently uses an epoll driver, only supported on Linux.\n* Coroutine Networking\n    - coro::net::dns::resolver for async DNS\n        - Uses libc-ares\n    - [coro::net::tcp::client](#io_scheduler)\n    - [coro::net::tcp::server](#io_scheduler)\n      * [Example TCP/HTTP Echo Server](#tcp_echo_server)\n    - coro::net::tls::client (OpenSSL)\n    - coro::net::tls::server (OpenSSL)\n    - coro::net::udp::peer\n* [Requirements](#requirements)\n* [Build Instructions](#build-instructions)\n* [Contributing](#contributing)\n* [Support](#support)\n\n## Usage\n\n### A note on co_await and threads\nIt's important to note with coroutines that _any_ `co_await` has the potential to switch the underlying thread that is executing the current coroutine if the scheduler used has more than 1 thread. In general this shouldn't affect the way any user of the library would write code, except for `thread_local`: it should be used with extreme care and _never_ across any `co_await` boundary due to thread switching and work stealing on libcoro's schedulers. 
The only way this is safe is by using a `coro::thread_pool` with 1 thread or an inline `io_scheduler` which also only has 1 thread.\n\n### A note on lambda captures (do not use them!)\n[C++ Core Guidelines - CP.51: Do not use capturing lambdas that are coroutines](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#Rcoro-capture)\n\nThe recommendation is to not use lambda captures and instead pass any data into the coroutine via its function arguments to guarantee the argument lifetimes. Lambda captures are destroyed at the coroutine's first suspension point, so any use past that point is a use-after-free bug.\n\n### sync_wait\nThe `sync_wait` construct is meant to be used outside of a coroutine context to block the calling thread until the coroutine has completed. The coroutine can be executed on the calling thread or scheduled on one of libcoro's schedulers.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    // This lambda will create a coro::task that returns a uint64_t.\n    // It can be invoked many times with different arguments.\n    auto make_task_inline = [](uint64_t x) -\u003e coro::task\u003cuint64_t\u003e { co_return x + x; };\n\n    // This will block the calling thread until the created task completes.\n    // Since this task isn't scheduled on any coro::thread_pool or coro::io_scheduler\n    // it will execute directly on the calling thread.\n    auto result = coro::sync_wait(make_task_inline(5));\n    std::cout \u003c\u003c \"Inline Result = \" \u003c\u003c result \u003c\u003c \"\\n\";\n\n    // We'll make a 1 thread coro::thread_pool to demonstrate offloading the task's\n    // execution to another thread.  
We'll pass the thread pool as a parameter so\n    // the task can be scheduled.\n    // Note that you will need to guarantee the thread pool outlives the coroutine.\n    coro::thread_pool tp{coro::thread_pool::options{.thread_count = 1}};\n\n    auto make_task_offload = [](coro::thread_pool\u0026 tp, uint64_t x) -\u003e coro::task\u003cuint64_t\u003e\n    {\n        co_await tp.schedule(); // Schedules execution on the thread pool.\n        co_return x + x;        // This will execute on the thread pool.\n    };\n\n    // This will still block the calling thread, but it will now offload to the\n    // coro::thread_pool since the coroutine task is immediately scheduled.\n    result = coro::sync_wait(make_task_offload(tp, 10));\n    std::cout \u003c\u003c \"Offload Result = \" \u003c\u003c result \u003c\u003c \"\\n\";\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_sync_wait\nInline Result = 10\nOffload Result = 20\n```\n\n### when_all\nThe `when_all` construct can be used within coroutines to await a set of tasks, or it can be used outside coroutine context in conjunction with `sync_wait` to await multiple tasks. 
Each task passed into `when_all` will initially be executed serially by the calling thread, so it is recommended to offload the tasks onto an executor like `coro::thread_pool` or `coro::io_scheduler` so they can execute in parallel.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    // Create a thread pool to execute all the tasks in parallel.\n    coro::thread_pool tp{coro::thread_pool::options{.thread_count = 4}};\n    // Create the task we want to invoke multiple times and execute in parallel on the thread pool.\n    auto twice = [](coro::thread_pool\u0026 tp, uint64_t x) -\u003e coro::task\u003cuint64_t\u003e\n    {\n        co_await tp.schedule(); // Schedule onto the thread pool.\n        co_return x + x;        // Executed on the thread pool.\n    };\n\n    // Make our tasks to execute; tasks can be passed in via a std::ranges::range type or var args.\n    std::vector\u003ccoro::task\u003cuint64_t\u003e\u003e tasks{};\n    for (std::size_t i = 0; i \u003c 5; ++i)\n    {\n        tasks.emplace_back(twice(tp, i + 1));\n    }\n\n    // Synchronously wait on this thread for the thread pool to finish executing all the tasks in parallel.\n    auto results = coro::sync_wait(coro::when_all(std::move(tasks)));\n    for (auto\u0026 result : results)\n    {\n        // If your task can throw, calling return_value() will either return the result or re-throw the exception.\n        try\n        {\n            std::cout \u003c\u003c result.return_value() \u003c\u003c \"\\n\";\n        }\n        catch (const std::exception\u0026 e)\n        {\n            std::cerr \u003c\u003c e.what() \u003c\u003c '\\n';\n        }\n    }\n\n    // Use var args instead of a container as input to coro::when_all.\n    auto square = [](coro::thread_pool\u0026 tp, double x) -\u003e coro::task\u003cdouble\u003e\n    {\n        co_await tp.schedule();\n        co_return x * x;\n    };\n\n    // Var args allows you to pass in tasks with different 
return types and returns\n    // the result as a std::tuple.\n    auto tuple_results = coro::sync_wait(coro::when_all(square(tp, 1.1), twice(tp, 10)));\n\n    auto first  = std::get\u003c0\u003e(tuple_results).return_value();\n    auto second = std::get\u003c1\u003e(tuple_results).return_value();\n\n    std::cout \u003c\u003c \"first: \" \u003c\u003c first \u003c\u003c \" second: \" \u003c\u003c second \u003c\u003c \"\\n\";\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_when_all\n2\n4\n6\n8\n10\nfirst: 1.21 second: 20\n```\n\n### when_any\nThe `when_any` construct can be used within coroutines to await a set of tasks and only return the result of the first task that completes. This can also be used outside of a coroutine context in conjunction with `sync_wait` to await the first result. Each task passed into `when_any` will initially be executed serially by the calling thread so it is recommended to offload the tasks onto an executor like `coro::thread_pool` or `coro::io_scheduler` so they can execute in parallel.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    // Create a scheduler to execute all tasks in parallel and also so we can\n    // suspend a task to act like a timeout event.\n    auto scheduler = coro::io_scheduler::make_shared();\n\n    // This task will behave like a long running task and will produce a valid result.\n    auto make_long_running_task = [](std::shared_ptr\u003ccoro::io_scheduler\u003e scheduler,\n                                     std::chrono::milliseconds           execution_time) -\u003e coro::task\u003cint64_t\u003e\n    {\n        // Schedule the task to execute in parallel.\n        co_await scheduler-\u003eschedule();\n        // Fake doing some work...\n        co_await scheduler-\u003eyield_for(execution_time);\n        // Return the result.\n        co_return 1;\n    };\n\n    auto make_timeout_task = [](std::shared_ptr\u003ccoro::io_scheduler\u003e scheduler) -\u003e 
coro::task\u003cint64_t\u003e\n    {\n        // Schedule a timer to be fired so we know the task timed out.\n        co_await scheduler-\u003eschedule_after(std::chrono::milliseconds{100});\n        co_return -1;\n    };\n\n    // Example showing the long running task completing first.\n    {\n        std::vector\u003ccoro::task\u003cint64_t\u003e\u003e tasks{};\n        tasks.emplace_back(make_long_running_task(scheduler, std::chrono::milliseconds{50}));\n        tasks.emplace_back(make_timeout_task(scheduler));\n\n        auto result = coro::sync_wait(coro::when_any(std::move(tasks)));\n        std::cout \u003c\u003c \"result = \" \u003c\u003c result \u003c\u003c \"\\n\";\n    }\n\n    // Example showing the long running task timing out.\n    {\n        std::vector\u003ccoro::task\u003cint64_t\u003e\u003e tasks{};\n        tasks.emplace_back(make_long_running_task(scheduler, std::chrono::milliseconds{500}));\n        tasks.emplace_back(make_timeout_task(scheduler));\n\n        auto result = coro::sync_wait(coro::when_any(std::move(tasks)));\n        std::cout \u003c\u003c \"result = \" \u003c\u003c result \u003c\u003c \"\\n\";\n    }\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_when_any\nresult = 1\nresult = -1\n```\n\n### task\nThe `coro::task\u003cT\u003e` is the main coroutine building block within `libcoro`.  Use task to create your coroutines and `co_await` or `co_yield` tasks within tasks to perform asynchronous operations, lazy evaluation, or even spreading work out across a `coro::thread_pool`.  
Tasks are lightweight and only begin execution upon awaiting them.\n\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    // Create a task that awaits the doubling of its given value and\n    // then returns the result after adding 5.\n    auto double_and_add_5_task = [](uint64_t input) -\u003e coro::task\u003cuint64_t\u003e\n    {\n        // Task that takes a value and doubles it.\n        auto double_task = [](uint64_t x) -\u003e coro::task\u003cuint64_t\u003e { co_return x * 2; };\n\n        auto doubled = co_await double_task(input);\n        co_return doubled + 5;\n    };\n\n    auto output = coro::sync_wait(double_and_add_5_task(2));\n    std::cout \u003c\u003c \"Task1 output = \" \u003c\u003c output \u003c\u003c \"\\n\";\n\n    struct expensive_struct\n    {\n        std::string              id{};\n        std::vector\u003cstd::string\u003e records{};\n\n        expensive_struct()  = default;\n        ~expensive_struct() = default;\n\n        // Explicitly delete copy constructor and copy assign, force only moves!\n        // While the default move constructors will work for this struct the example\n        // inserts explicit print statements to show the task is moving the value\n        // out correctly.\n        expensive_struct(const expensive_struct\u0026)                    = delete;\n        auto operator=(const expensive_struct\u0026) -\u003e expensive_struct\u0026 = delete;\n\n        expensive_struct(expensive_struct\u0026\u0026 other) : id(std::move(other.id)), records(std::move(other.records))\n        {\n            std::cout \u003c\u003c \"expensive_struct() move constructor called\\n\";\n        }\n        auto operator=(expensive_struct\u0026\u0026 other) -\u003e expensive_struct\u0026\n        {\n            if (std::addressof(other) != this)\n            {\n                id      = std::move(other.id);\n                records = std::move(other.records);\n            }\n            std::cout 
\u003c\u003c \"expensive_struct() move assignment called\\n\";\n            return *this;\n        }\n    };\n\n    // Create a very large object and return it by moving the value so the\n    // contents do not have to be copied out.\n    auto move_output_task = []() -\u003e coro::task\u003cexpensive_struct\u003e\n    {\n        expensive_struct data{};\n        data.id = \"12345678-1234-5678-9012-123456781234\";\n        for (size_t i = 10'000; i \u003c 100'000; ++i)\n        {\n            data.records.emplace_back(std::to_string(i));\n        }\n\n        // Because the struct only has move contructors it will be forced to use\n        // them, no need to explicitly std::move(data).\n        co_return data;\n    };\n\n    auto data = coro::sync_wait(move_output_task());\n    std::cout \u003c\u003c data.id \u003c\u003c \" has \" \u003c\u003c data.records.size() \u003c\u003c \" records.\\n\";\n\n    // std::unique_ptr\u003cT\u003e can also be used to return a larger object.\n    auto unique_ptr_task = []() -\u003e coro::task\u003cstd::unique_ptr\u003cuint64_t\u003e\u003e { co_return std::make_unique\u003cuint64_t\u003e(42); };\n\n    auto answer_to_everything = coro::sync_wait(unique_ptr_task());\n    if (answer_to_everything != nullptr)\n    {\n        std::cout \u003c\u003c \"Answer to everything = \" \u003c\u003c *answer_to_everything \u003c\u003c \"\\n\";\n    }\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_task\nTask1 output = 9\nexpensive_struct() move constructor called\nexpensive_struct() move assignment called\nexpensive_struct() move constructor called\n12345678-1234-5678-9012-123456781234 has 90000 records.\nAnswer to everything = 42\n```\n\n### generator\nThe `coro::generator\u003cT\u003e` construct is a coroutine which can generate one or more values.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    auto task = [](uint64_t count_to) -\u003e coro::task\u003cvoid\u003e\n    {\n        // Create 
a generator function that will yield an incrementing\n        // number each time it's called.\n        auto gen = []() -\u003e coro::generator\u003cuint64_t\u003e\n        {\n            uint64_t i = 0;\n            while (true)\n            {\n                co_yield i;\n                ++i;\n            }\n        };\n\n        // Generate the next number until it's greater than count_to.\n        for (auto val : gen())\n        {\n            std::cout \u003c\u003c val \u003c\u003c \", \";\n\n            if (val \u003e= count_to)\n            {\n                break;\n            }\n        }\n        co_return;\n    };\n\n    coro::sync_wait(task(100));\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_generator\n0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,\n```\n\n### event\nThe `coro::event` is a thread safe async tool to have 1 or more waiters suspend for an event to be set before proceeding.  The implementation of event currently will resume execution of all waiters on the thread that sets the event.  
If the event is already set when a waiter goes to wait, the waiter will simply continue executing with no suspend or wait time incurred.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    coro::event e;\n\n    // These tasks will wait until the given event has been set before advancing.\n    auto make_wait_task = [](const coro::event\u0026 e, uint64_t i) -\u003e coro::task\u003cvoid\u003e\n    {\n        std::cout \u003c\u003c \"task \" \u003c\u003c i \u003c\u003c \" is waiting on the event...\\n\";\n        co_await e;\n        std::cout \u003c\u003c \"task \" \u003c\u003c i \u003c\u003c \" event triggered, now resuming.\\n\";\n        co_return;\n    };\n\n    // This task will trigger the event allowing all waiting tasks to proceed.\n    auto make_set_task = [](coro::event\u0026 e) -\u003e coro::task\u003cvoid\u003e\n    {\n        std::cout \u003c\u003c \"set task is triggering the event\\n\";\n        e.set();\n        co_return;\n    };\n\n    // Given more than a single task to synchronously wait on, use when_all() to execute all the\n    // tasks concurrently on this thread and then sync_wait() for them all to complete.\n    coro::sync_wait(coro::when_all(make_wait_task(e, 1), make_wait_task(e, 2), make_wait_task(e, 3), make_set_task(e)));\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_event\ntask 1 is waiting on the event...\ntask 2 is waiting on the event...\ntask 3 is waiting on the event...\nset task is triggering the event\ntask 3 event triggered, now resuming.\ntask 2 event triggered, now resuming.\ntask 1 event triggered, now resuming.\n```\n\n### latch\nThe `coro::latch` is a thread safe async tool to have 1 waiter suspend until all outstanding events have completed before proceeding.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    // Complete worker tasks faster on a thread pool, using the io_scheduler version so the worker\n    // 
tasks can yield for a specific amount of time to mimic difficult work.  The pool is only\n    // setup with a single thread to showcase yield_for().\n    auto tp = coro::io_scheduler::make_shared(\n        coro::io_scheduler::options{.pool = coro::thread_pool::options{.thread_count = 1}});\n\n    // This task will wait until the given latch setters have completed.\n    auto make_latch_task = [](coro::latch\u0026 l) -\u003e coro::task\u003cvoid\u003e\n    {\n        // It seems like the dependent worker tasks could be created here, but in that case it would\n        // be superior to simply do: `co_await coro::when_all(tasks);`\n        // It is also important to note that the last dependent task will resume the waiting latch\n        // task prior to actually completing -- thus the dependent task's frame could be destroyed\n        // by the latch task completing before it gets a chance to finish after calling resume() on\n        // the latch task!\n\n        std::cout \u003c\u003c \"latch task is now waiting on all children tasks...\\n\";\n        co_await l;\n        std::cout \u003c\u003c \"latch task dependency tasks completed, resuming.\\n\";\n        co_return;\n    };\n\n    // This task does 'work' and counts down on the latch when completed.  The final child task to\n    // complete will end up resuming the latch task when the latch's count reaches zero.\n    auto make_worker_task = [](std::shared_ptr\u003ccoro::io_scheduler\u003e tp, coro::latch\u0026 l, int64_t i) -\u003e coro::task\u003cvoid\u003e\n    {\n        // Schedule the worker task onto the thread pool.\n        co_await tp-\u003eschedule();\n        std::cout \u003c\u003c \"worker task \" \u003c\u003c i \u003c\u003c \" is working...\\n\";\n        // Do some expensive calculations, yield to mimic work...!  
It's also important to never use\n        // std::this_thread::sleep_for() within the context of coroutines; it will block the thread\n        // and other tasks that are ready to execute will be blocked.\n        co_await tp-\u003eyield_for(std::chrono::milliseconds{i * 20});\n        std::cout \u003c\u003c \"worker task \" \u003c\u003c i \u003c\u003c \" is done, counting down on the latch\\n\";\n        l.count_down();\n        co_return;\n    };\n\n    const int64_t                 num_tasks{5};\n    coro::latch                   l{num_tasks};\n    std::vector\u003ccoro::task\u003cvoid\u003e\u003e tasks{};\n\n    // Make the latch task first so it correctly waits for all worker tasks to count down.\n    tasks.emplace_back(make_latch_task(l));\n    for (int64_t i = 1; i \u003c= num_tasks; ++i)\n    {\n        tasks.emplace_back(make_worker_task(tp, l, i));\n    }\n\n    // Wait for all tasks to complete.\n    coro::sync_wait(coro::when_all(std::move(tasks)));\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_latch\nlatch task is now waiting on all children tasks...\nworker task 1 is working...\nworker task 2 is working...\nworker task 3 is working...\nworker task 4 is working...\nworker task 5 is working...\nworker task 1 is done, counting down on the latch\nworker task 2 is done, counting down on the latch\nworker task 3 is done, counting down on the latch\nworker task 4 is done, counting down on the latch\nworker task 5 is done, counting down on the latch\nlatch task dependency tasks completed, resuming.\n```\n\n### mutex\nThe `coro::mutex` is a thread safe async tool to protect critical sections and only allow a single thread to execute the critical section at any given time.  Mutexes that are uncontended are a simple CAS operation with a memory fence 'acquire' to behave similarly to `std::mutex`.  
If the lock is contended then the thread will add itself to a LIFO queue of waiters and yield execution to allow another coroutine to process on that thread while it waits to acquire the lock.\n\nIt's important to note that upon release, the thread unlocking the mutex will immediately start processing the next waiter in line for the `coro::mutex` (if there are any waiters); the mutex is only unlocked/released once all waiters have been processed.  This guarantees fair execution in a reasonably FIFO manner, but it also means all coroutines that stack in the waiter queue will end up shifting to the single thread that is executing all waiting coroutines.  It is possible to manually reschedule after the critical section onto a thread pool to re-distribute the work if this is a concern in your use case.\n\nThe suspended waiter queue is LIFO; however, the worker that currently holds the mutex will periodically 'acquire' the current LIFO waiter list to process those waiters when its internal list becomes empty.  This effectively resets the suspended waiter list to empty and the worker holding the mutex will work through the newly acquired LIFO queue of waiters.  It would be possible to reverse this list to be as fair as possible, however not reversing the list should result in better throughput at possibly the cost of some latency for the first suspended waiters on the 'current' LIFO queue.  
Reversing the list, however, would introduce latency for all queue waiters since it's done every time the LIFO queue is swapped.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    coro::thread_pool     tp{coro::thread_pool::options{.thread_count = 4}};\n    std::vector\u003cuint64_t\u003e output{};\n    coro::mutex           mutex;\n\n    auto make_critical_section_task =\n        [](coro::thread_pool\u0026 tp, coro::mutex\u0026 mutex, std::vector\u003cuint64_t\u003e\u0026 output, uint64_t i) -\u003e coro::task\u003cvoid\u003e\n    {\n        co_await tp.schedule();\n        // To acquire a mutex lock co_await its lock() function.  Upon acquiring the lock the\n        // lock() function returns a coro::scoped_lock that holds the mutex and automatically\n        // unlocks the mutex upon destruction.  This behaves just like std::scoped_lock.\n        {\n            auto scoped_lock = co_await mutex.lock();\n            output.emplace_back(i);\n        } // \u003c-- scoped lock unlocks the mutex here.\n        co_return;\n    };\n\n    const size_t                  num_tasks{100};\n    std::vector\u003ccoro::task\u003cvoid\u003e\u003e tasks{};\n    tasks.reserve(num_tasks);\n    for (size_t i = 1; i \u003c= num_tasks; ++i)\n    {\n        tasks.emplace_back(make_critical_section_task(tp, mutex, output, i));\n    }\n\n    coro::sync_wait(coro::when_all(std::move(tasks)));\n\n    // The output will be variable per run depending on how the tasks are picked up on the\n    // thread pool workers.\n    for (const auto\u0026 value : output)\n    {\n        std::cout \u003c\u003c value \u003c\u003c \", \";\n    }\n}\n```\n\nExpected output, note that the output will vary from run to run based on how the thread pool workers\nare scheduled and in what order they acquire the mutex lock:\n```bash\n$ ./examples/coro_mutex\n1, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 34, 33, 32, 31, 30, 29, 28, 27, 
26, 25, 24, 23, 37, 36, 35, 40, 39, 38, 41, 42, 43, 44, 46, 47, 48, 45, 49, 50, 51, 52, 53, 54, 55, 57, 56, 59, 58, 61, 60, 62, 63, 65, 64, 67, 66, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 83, 82, 84, 85, 86, 87, 88, 89, 91, 90, 92, 93, 94, 95, 96, 97, 98, 99, 100,\n```\n\nIt's easy to see the LIFO 'atomic' queue in action in the beginning where 22-\u003e2 are immediately suspended waiting to acquire the mutex.\n\n### shared_mutex\nThe `coro::shared_mutex` is a thread safe async tool to allow for multiple shared users at once but also exclusive access.  The lock is acquired strictly in a FIFO manner in that if the lock is currently held by shared users and an exclusive waiter attempts to lock, the exclusive waiter will suspend until all the _current_ shared users finish using the lock.  Any new users that attempt to lock the mutex in a shared state once there is an exclusive waiter will also wait behind the exclusive waiter.  This prevents the exclusive waiter from being starved.\n\nThe `coro::shared_mutex` requires an `executor_type` when constructed to be able to resume multiple shared waiters when an exclusive lock is released.  This allows for all of the pending shared waiters to be resumed concurrently.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    // Shared mutexes require an executor type to be able to wake up multiple shared waiters when\n    // there is an exclusive lock holder releasing the lock.  This example uses a single thread\n    // to also show the interleaving of coroutines acquiring the shared lock in shared and\n    // exclusive mode as they resume and suspend in a linear manner.  
Ideally the thread pool\n    // executor would have more than 1 thread to resume all shared waiters in parallel.\n    auto tp = std::make_shared\u003ccoro::thread_pool\u003e(coro::thread_pool::options{.thread_count = 1});\n    coro::shared_mutex\u003ccoro::thread_pool\u003e mutex{tp};\n\n    auto make_shared_task = [](std::shared_ptr\u003ccoro::thread_pool\u003e     tp,\n                               coro::shared_mutex\u003ccoro::thread_pool\u003e\u0026 mutex,\n                               uint64_t                               i) -\u003e coro::task\u003cvoid\u003e\n    {\n        co_await tp-\u003eschedule();\n        {\n            std::cerr \u003c\u003c \"shared task \" \u003c\u003c i \u003c\u003c \" lock_shared()\\n\";\n            auto scoped_lock = co_await mutex.lock_shared();\n            std::cerr \u003c\u003c \"shared task \" \u003c\u003c i \u003c\u003c \" lock_shared() acquired\\n\";\n            /// Immediately yield so the other shared tasks also acquire in shared state\n            /// while this task currently holds the mutex in shared state.\n            co_await tp-\u003eyield();\n            std::cerr \u003c\u003c \"shared task \" \u003c\u003c i \u003c\u003c \" unlock_shared()\\n\";\n        }\n        co_return;\n    };\n\n    auto make_exclusive_task = [](std::shared_ptr\u003ccoro::thread_pool\u003e     tp,\n                                  coro::shared_mutex\u003ccoro::thread_pool\u003e\u0026 mutex) -\u003e coro::task\u003cvoid\u003e\n    {\n        co_await tp-\u003eschedule();\n\n        std::cerr \u003c\u003c \"exclusive task lock()\\n\";\n        auto scoped_lock = co_await mutex.lock();\n        std::cerr \u003c\u003c \"exclusive task lock() acquired\\n\";\n        // Do the exclusive work..\n        std::cerr \u003c\u003c \"exclusive task unlock()\\n\";\n        co_return;\n    };\n\n    // Create 3 shared tasks that will acquire the mutex in a shared state.\n    const size_t                  num_tasks{3};\n    
std::vector\u003ccoro::task\u003cvoid\u003e\u003e tasks{};\n    for (size_t i = 1; i \u003c= num_tasks; ++i)\n    {\n        tasks.emplace_back(make_shared_task(tp, mutex, i));\n    }\n    // Create an exclusive task.\n    tasks.emplace_back(make_exclusive_task(tp, mutex));\n    // Create 3 more shared tasks that will be blocked until the exclusive task completes.\n    for (size_t i = num_tasks + 1; i \u003c= num_tasks * 2; ++i)\n    {\n        tasks.emplace_back(make_shared_task(tp, mutex, i));\n    }\n\n    coro::sync_wait(coro::when_all(std::move(tasks)));\n}\n```\n\nExample output, notice how the (4,5,6) shared tasks attempt to acquire the lock in a shared state but are blocked behind the exclusive waiter until it completes:\n```bash\n$ ./examples/coro_shared_mutex\nshared task 1 lock_shared()\nshared task 1 lock_shared() acquired\nshared task 2 lock_shared()\nshared task 2 lock_shared() acquired\nshared task 3 lock_shared()\nshared task 3 lock_shared() acquired\nexclusive task lock()\nshared task 4 lock_shared()\nshared task 5 lock_shared()\nshared task 6 lock_shared()\nshared task 1 unlock_shared()\nshared task 2 unlock_shared()\nshared task 3 unlock_shared()\nexclusive task lock() acquired\nexclusive task unlock()\nshared task 4 lock_shared() acquired\nshared task 5 lock_shared() acquired\nshared task 6 lock_shared() acquired\nshared task 4 unlock_shared()\nshared task 5 unlock_shared()\nshared task 6 unlock_shared()\n\n```\n\n### semaphore\nThe `coro::semaphore` is a thread safe async tool to protect a limited number of resources by only allowing a fixed number of consumers to acquire the resources at any given time.  The `coro::semaphore` also has a maximum number of resources denoted by its constructor.  This means if a resource is produced or released when the semaphore is at its maximum resource availability then the release operation will suspend until space becomes available.  
This is useful for a ring buffer style situation where the resources are produced and then consumed, but has no effect on a semaphore's usage if there is a known fixed quantity of resources to start with that are acquired and then released back.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    // Have more threads/tasks than the semaphore will allow for at any given point in time.\n    coro::thread_pool tp{coro::thread_pool::options{.thread_count = 8}};\n    coro::semaphore   semaphore{2};\n\n    auto make_rate_limited_task =\n        [](coro::thread_pool\u0026 tp, coro::semaphore\u0026 semaphore, uint64_t task_num) -\u003e coro::task\u003cvoid\u003e\n    {\n        co_await tp.schedule();\n\n        // This will only allow 2 tasks through at any given point in time, all other tasks will\n        // await the resource to be available before proceeding.\n        auto result = co_await semaphore.acquire();\n        if (result == coro::semaphore::acquire_result::acquired)\n        {\n            std::cout \u003c\u003c task_num \u003c\u003c \", \";\n            semaphore.release();\n        }\n        else\n        {\n            std::cout \u003c\u003c task_num \u003c\u003c \" failed to acquire semaphore [\" \u003c\u003c coro::semaphore::to_string(result) \u003c\u003c \"],\";\n        }\n        co_return;\n    };\n\n    const size_t                  num_tasks{100};\n    std::vector\u003ccoro::task\u003cvoid\u003e\u003e tasks{};\n    for (size_t i = 1; i \u003c= num_tasks; ++i)\n    {\n        tasks.emplace_back(make_rate_limited_task(tp, semaphore, i));\n    }\n\n    coro::sync_wait(coro::when_all(std::move(tasks)));\n}\n```\n\nExpected output, note that there is no lock around the `std::cout` so some of the output isn't perfect:\n```bash\n$ ./examples/coro_semaphore\n1, 23, 25, 24, 22, 27, 28, 29, 21, 20, 19, 18, 17, 14, 31, 30, 33, 32, 41, 40, 37, 39, 38, 36, 35, 34, 43, 46, 47, 48, 45, 42, 44, 26, 16, 15, 13, 52, 
54, 55, 53, 49, 51, 57, 58, 50, 62, 63, 61, 60, 59, 56, 12, 11, 8, 10, 9, 7, 6, 5, 4, 3, 642, , 66, 67, 6568, , 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100,\n```\n\n### ring_buffer\nThe `coro::ring_buffer\u003celement, num_elements\u003e` is a thread safe async multi-producer multi-consumer statically sized ring buffer.  Producers that try to produce a value when the ring buffer is full will suspend until space is available.  Consumers that try to consume a value when the ring buffer is empty will suspend until an element is available.  All waiters on the ring buffer for producing or consuming are resumed in a LIFO manner when their respective operation becomes available.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    const size_t                    iterations = 100;\n    const size_t                    consumers  = 4;\n    coro::thread_pool               tp{coro::thread_pool::options{.thread_count = 4}};\n    coro::ring_buffer\u003cuint64_t, 16\u003e rb{};\n    coro::mutex                     m{};\n\n    std::vector\u003ccoro::task\u003cvoid\u003e\u003e tasks{};\n\n    auto make_producer_task =\n        [](coro::thread_pool\u0026 tp, coro::ring_buffer\u003cuint64_t, 16\u003e\u0026 rb, coro::mutex\u0026 m) -\u003e coro::task\u003cvoid\u003e\n    {\n        co_await tp.schedule();\n\n        for (size_t i = 1; i \u003c= iterations; ++i)\n        {\n            co_await rb.produce(i);\n        }\n\n        // Wait for the ring buffer to clear all items so it's a clean stop.\n        while (!rb.empty())\n        {\n            co_await tp.yield();\n        }\n\n        // Now that the ring buffer is empty, signal to all the consumers it's time to stop.  
Note that\n        // the stop signal works on producers as well, but this example only uses 1 producer.\n        {\n            auto scoped_lock = co_await m.lock();\n            std::cerr \u003c\u003c \"\\nproducer is sending stop signal\";\n        }\n        rb.notify_waiters();\n        co_return;\n    };\n\n    auto make_consumer_task =\n        [](coro::thread_pool\u0026 tp, coro::ring_buffer\u003cuint64_t, 16\u003e\u0026 rb, coro::mutex\u0026 m, size_t id) -\u003e coro::task\u003cvoid\u003e\n    {\n        co_await tp.schedule();\n\n        while (true)\n        {\n            auto expected    = co_await rb.consume();\n            auto scoped_lock = co_await m.lock(); // just for synchronizing std::cout/cerr\n            if (!expected)\n            {\n                std::cerr \u003c\u003c \"\\nconsumer \" \u003c\u003c id \u003c\u003c \" shutting down, stop signal received\";\n                break; // while\n            }\n            else\n            {\n                auto item = std::move(*expected);\n                std::cout \u003c\u003c \"(id=\" \u003c\u003c id \u003c\u003c \", v=\" \u003c\u003c item \u003c\u003c \"), \";\n            }\n\n            // Mimic doing some work on the consumed value.\n            co_await tp.yield();\n        }\n\n        co_return;\n    };\n\n    // Create N consumers\n    for (size_t i = 0; i \u003c consumers; ++i)\n    {\n        tasks.emplace_back(make_consumer_task(tp, rb, m, i));\n    }\n    // Create 1 producer.\n    tasks.emplace_back(make_producer_task(tp, rb, m));\n\n    // Wait for all the values to be produced and consumed through the ring buffer.\n    coro::sync_wait(coro::when_all(std::move(tasks)));\n}\n```\n\nExpected output:\n```bash\n$ ./examples/coro_ring_buffer\n(id=3, v=1), (id=2, v=2), (id=1, v=3), (id=0, v=4), (id=3, v=5), (id=2, v=6), (id=1, v=7), (id=0, v=8), (id=3, v=9), (id=2, v=10), (id=1, v=11), (id=0, v=12), (id=3, v=13), (id=2, v=14), (id=1, v=15), (id=0, v=16), (id=3, v=17), (id=2, 
v=18), (id=1, v=19), (id=0, v=20), (id=3, v=21), (id=2, v=22), (id=1, v=23), (id=0, v=24), (id=3, v=25), (id=2, v=26), (id=1, v=27), (id=0, v=28), (id=3, v=29), (id=2, v=30), (id=1, v=31), (id=0, v=32), (id=3, v=33), (id=2, v=34), (id=1, v=35), (id=0, v=36), (id=3, v=37), (id=2, v=38), (id=1, v=39), (id=0, v=40), (id=3, v=41), (id=2, v=42), (id=0, v=44), (id=1, v=43), (id=3, v=45), (id=2, v=46), (id=0, v=47), (id=3, v=48), (id=2, v=49), (id=0, v=50), (id=3, v=51), (id=2, v=52), (id=0, v=53), (id=3, v=54), (id=2, v=55), (id=0, v=56), (id=3, v=57), (id=2, v=58), (id=0, v=59), (id=3, v=60), (id=1, v=61), (id=2, v=62), (id=0, v=63), (id=3, v=64), (id=1, v=65), (id=2, v=66), (id=0, v=67), (id=3, v=68), (id=1, v=69), (id=2, v=70), (id=0, v=71), (id=3, v=72), (id=1, v=73), (id=2, v=74), (id=0, v=75), (id=3, v=76), (id=1, v=77), (id=2, v=78), (id=0, v=79), (id=3, v=80), (id=2, v=81), (id=1, v=82), (id=0, v=83), (id=3, v=84), (id=2, v=85), (id=1, v=86), (id=0, v=87), (id=3, v=88), (id=2, v=89), (id=1, v=90), (id=0, v=91), (id=3, v=92), (id=2, v=93), (id=1, v=94), (id=0, v=95), (id=3, v=96), (id=2, v=97), (id=1, v=98), (id=0, v=99), (id=3, v=100),\nproducer is sending stop signal\nconsumer 0 shutting down, stop signal received\nconsumer 1 shutting down, stop signal received\nconsumer 2 shutting down, stop signal received\nconsumer 3 shutting down, stop signal received\n```\n\n### thread_pool\n`coro::thread_pool` is a statically sized pool of worker threads to execute scheduled coroutines from a FIFO queue.  One way to schedule a coroutine on a thread pool is to use the pool's `schedule()` function which should be `co_awaited` inside the coroutine to transfer the execution from the current thread to a thread pool worker thread.  It's important to note that scheduling will first place the coroutine into the FIFO queue, where it will be picked up by the first available thread in the pool, e.g. 
there could be a delay if there is a lot of work queued up.\n\n#### Ways to schedule tasks onto a `coro::thread_pool`\n* `coro::thread_pool::schedule()` Use `co_await` on this method inside a coroutine to transfer the task's execution to the `coro::thread_pool`.\n* `coro::thread_pool::spawn(coro::task\u003cvoid\u003e\u0026\u0026 task)` Spawns the task to be detached and owned by the `coro::thread_pool`; use this if you want to fire and forget the task, the `coro::thread_pool` will maintain the task's lifetime.\n* `coro::thread_pool::schedule(coro::task\u003cT\u003e task) -\u003e coro::task\u003cT\u003e` Schedules the task on the `coro::thread_pool` and then returns the result in a task that must be awaited. This is useful if you want to schedule work on the `coro::thread_pool` and want to wait for the result.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n#include \u003crandom\u003e\n\nint main()\n{\n    coro::thread_pool tp{coro::thread_pool::options{\n        // By default a thread pool will use std::thread::hardware_concurrency() as the number\n        // of worker threads in the pool, but this can be changed via the thread_count option.  
This example will use 4.\n        .thread_count = 4,\n        // Upon starting each worker thread an optional lambda callback with the worker's\n        // index can be called to make thread changes, perhaps setting its priority or changing\n        // the thread's name.\n        .on_thread_start_functor = [](std::size_t worker_idx) -\u003e void\n        { std::cout \u003c\u003c \"thread pool worker \" \u003c\u003c worker_idx \u003c\u003c \" is starting up.\\n\"; },\n        // Upon stopping each worker thread an optional lambda callback with the worker's\n        // index can be called.\n        .on_thread_stop_functor = [](std::size_t worker_idx) -\u003e void\n        { std::cout \u003c\u003c \"thread pool worker \" \u003c\u003c worker_idx \u003c\u003c \" is shutting down.\\n\"; }}};\n\n    auto primary_task = [](coro::thread_pool\u0026 tp) -\u003e coro::task\u003cuint64_t\u003e\n    {\n        auto offload_task = [](coro::thread_pool\u0026 tp, uint64_t child_idx) -\u003e coro::task\u003cuint64_t\u003e\n        {\n            // Start by scheduling this offload worker task onto the thread pool.\n            co_await tp.schedule();\n            // Now any code below this schedule() line will be executed on one of the thread pool's\n            // worker threads.\n\n            // Mimic some expensive task that should be run on a background thread...\n            std::random_device              rd;\n            std::mt19937                    gen{rd()};\n            std::uniform_int_distribution\u003c\u003e d{0, 1};\n\n            size_t calculation{0};\n            for (size_t i = 0; i \u003c 1'000'000; ++i)\n            {\n                calculation += d(gen);\n\n                // Let's be nice and yield() to let other coroutines on the thread pool have some cpu\n                // time.  This isn't necessary but is illustrated to show how tasks can cooperatively\n                // yield control at certain points of execution.  
It's important to never call\n                // std::this_thread::sleep_for() within the context of a coroutine; that will block the\n                // worker thread and prevent other coroutines which are ready for execution from starting.\n                // Always use yield(), or within the context of a coro::io_scheduler you can use yield_for(amount).\n                if (i == 500'000)\n                {\n                    std::cout \u003c\u003c \"Task \" \u003c\u003c child_idx \u003c\u003c \" is yielding()\\n\";\n                    co_await tp.yield();\n                }\n            }\n            co_return calculation;\n        };\n\n        const size_t                      num_children{10};\n        std::vector\u003ccoro::task\u003cuint64_t\u003e\u003e child_tasks{};\n        child_tasks.reserve(num_children);\n        for (size_t i = 0; i \u003c num_children; ++i)\n        {\n            child_tasks.emplace_back(offload_task(tp, i));\n        }\n\n        // Wait for the thread pool workers to process all child tasks.\n        auto results = co_await coro::when_all(std::move(child_tasks));\n\n        // Sum up the results of the completed child tasks.\n        size_t calculation{0};\n        for (const auto\u0026 task : results)\n        {\n            calculation += task.return_value();\n        }\n        co_return calculation;\n    };\n\n    auto result = coro::sync_wait(primary_task(tp));\n    std::cout \u003c\u003c \"calculated thread pool result = \" \u003c\u003c result \u003c\u003c \"\\n\";\n}\n```\n\nExample output (will vary based on threads):\n```bash\n$ ./examples/coro_thread_pool\nthread pool worker 0 is starting up.\nthread pool worker 2 is starting up.\nthread pool worker 3 is starting up.\nthread pool worker 1 is starting up.\nTask 2 is yielding()\nTask 3 is yielding()\nTask 0 is yielding()\nTask 1 is yielding()\nTask 4 is yielding()\nTask 5 is yielding()\nTask 6 is yielding()\nTask 7 is yielding()\nTask 8 is yielding()\nTask 9 is yielding()\ncalculated thread 
pool result = 4999898\nthread pool worker 1 is shutting down.\nthread pool worker 2 is shutting down.\nthread pool worker 3 is shutting down.\nthread pool worker 0 is shutting down.\n```\n\n### io_scheduler\n`coro::io_scheduler` is an i/o event scheduler execution context that can use two methods of task processing:\n\n* A background `coro::thread_pool`\n* Inline task processing on the `coro::io_scheduler`'s event loop\n\nUsing a background `coro::thread_pool` will default to using `(std::thread::hardware_concurrency() - 1)` threads to process tasks.  This processing strategy is best for longer tasks that would block the i/o scheduler or for tasks that are latency sensitive.\n\nUsing the inline processing strategy will have the event loop i/o thread process the tasks inline on that thread when events are received.  This processing strategy is best for shorter tasks that will not block the i/o thread for long, or for pure throughput using a thread-per-core architecture, e.g. spin up an inline i/o scheduler per core and inline process tasks on each scheduler.\n\nThe `coro::io_scheduler` can use a dedicated spawned thread for processing events that are ready or it can be manually driven via its `process_events()` function for integration into existing event loops.  
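A minimal sketch of manually driving a scheduler from your own loop (assumptions: that `process_events()` accepts a poll timeout and that the scheduler exposes an `empty()` query; check `coro/io_scheduler.hpp` for the exact signatures before relying on this):

```C++
#include <coro/coro.hpp>
#include <chrono>

int main()
{
    // Ask the scheduler not to spawn its own event thread; we will drive it ourselves.
    auto scheduler = coro::io_scheduler::make_shared(coro::io_scheduler::options{
        .thread_strategy = coro::io_scheduler::thread_strategy_t::manual});

    auto make_task = [](std::shared_ptr<coro::io_scheduler> s) -> coro::task<void>
    {
        co_await s->schedule();
        co_await s->yield_for(std::chrono::milliseconds{10});
        co_return;
    };

    // Fire and forget; the scheduler owns the task's lifetime.
    scheduler->spawn(make_task(scheduler));

    // Drive the event loop from this (or any existing) thread until no work remains.
    // The empty() stop condition is an assumption; adapt it to your own loop's logic.
    while (!scheduler->empty())
    {
        scheduler->process_events(std::chrono::milliseconds{100});
    }
    return 0;
}
```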
By default i/o schedulers will spawn a dedicated event thread and use a thread pool to process tasks.\n\n#### Ways to schedule tasks onto a `coro::io_scheduler`\n* `coro::io_scheduler::schedule()` Use `co_await` on this method inside a coroutine to transfer the task's execution to the `coro::io_scheduler`.\n* `coro::io_scheduler::spawn(coro::task\u003cvoid\u003e\u0026\u0026 task)` Spawns the task to be detached and owned by the `coro::io_scheduler`; use this if you want to fire and forget the task, the `coro::io_scheduler` will maintain the task's lifetime.\n* `coro::io_scheduler::schedule(coro::task\u003cT\u003e task) -\u003e coro::task\u003cT\u003e` Schedules the task on the `coro::io_scheduler` and then returns the result in a task that must be awaited. This is useful if you want to schedule work on the `coro::io_scheduler` and want to wait for the result.\n* `coro::io_scheduler::schedule(std::stop_source st, coro::task\u003cT\u003e task, std::chrono::duration timeout) -\u003e coro::expected\u003cT, coro::timeout_status\u003e` Schedules the task on the `coro::io_scheduler` and then returns the result in a task that must be awaited. That task will either return the completed task's value if it completes before the timeout, or a return value denoting that the task timed out. If the task times out the `std::stop_source.request_stop()` will be invoked so the task can check for it and stop executing. 
This must be done by the user; the `coro::io_scheduler` cannot stop the execution of the task, but it is able, through the `std::stop_source`, to signal to the task that it should stop executing.\n* `coro::io_scheduler::schedule_after(std::chrono::milliseconds amount)` schedules the current task to be rescheduled after a specified amount of time has passed.\n* `coro::io_scheduler::schedule_at(std::chrono::steady_clock::time_point time)` schedules the current task to be rescheduled at the specified timepoint.\n* `coro::io_scheduler::yield()` will yield execution of the current task and resume after other tasks have had a chance to execute. This effectively places the task at the back of the queue of waiting tasks.\n* `coro::io_scheduler::yield_for(std::chrono::milliseconds amount)` will yield for the given amount of time and then reschedule the task. This yields for at least that much time; since the task is placed in the waiting execution queue, it might take additional time to start executing again.\n* `coro::io_scheduler::yield_until(std::chrono::steady_clock::time_point time)` will yield execution until the time point.\n\nThe example provided here shows an i/o scheduler that spins up a basic `coro::net::tcp::server` and a `coro::net::tcp::client` that will connect to each other and then send a request and a response.\n\n```C++\n#include \u003ccoro/coro.hpp\u003e\n#include \u003ciostream\u003e\n\nint main()\n{\n    auto scheduler = coro::io_scheduler::make_shared(coro::io_scheduler::options{\n        // The scheduler will spawn a dedicated event processing thread.  
This is the default, but\n        // it is possible to use 'manual' and call 'process_events()' to drive the scheduler yourself.\n        .thread_strategy = coro::io_scheduler::thread_strategy_t::spawn,\n        // If the scheduler is in spawn mode this functor is called upon starting the dedicated\n        // event processor thread.\n        .on_io_thread_start_functor = [] { std::cout \u003c\u003c \"io_scheduler::process event thread start\\n\"; },\n        // If the scheduler is in spawn mode this functor is called upon stopping the dedicated\n        // event processor thread.\n        .on_io_thread_stop_functor = [] { std::cout \u003c\u003c \"io_scheduler::process event thread stop\\n\"; },\n        // The io scheduler can use a coro::thread_pool to process the events or tasks it is given.\n        // You can use an execution strategy of `process_tasks_inline` to have the event loop thread\n        // directly process the tasks; this might be desirable for small tasks vs a thread pool for large tasks.\n        .pool =\n            coro::thread_pool::options{\n                .thread_count            = 2,\n                .on_thread_start_functor = [](size_t i)\n                { std::cout \u003c\u003c \"io_scheduler::thread_pool worker \" \u003c\u003c i \u003c\u003c \" starting\\n\"; },\n                .on_thread_stop_functor = [](size_t i)\n                { std::cout \u003c\u003c \"io_scheduler::thread_pool worker \" \u003c\u003c i \u003c\u003c \" stopping\\n\"; },\n            },\n        .execution_strategy = coro::io_scheduler::execution_strategy_t::process_tasks_on_thread_pool});\n\n    auto make_server_task = [](std::shared_ptr\u003ccoro::io_scheduler\u003e scheduler) -\u003e coro::task\u003cvoid\u003e\n    {\n        // Start by creating a tcp server; we'll do this before putting it into the scheduler so\n        // it is immediately available for the client to connect since this will create a socket,\n        // bind the socket and start listening on 
that socket.  See tcp::server for more details on\n        // how to specify the local address and port to bind to as well as enabling SSL/TLS.\n        coro::net::tcp::server server{scheduler};\n\n        // Now schedule this task onto the scheduler.\n        co_await scheduler-\u003eschedule();\n\n        // Wait for an incoming connection and accept it.\n        auto poll_status = co_await server.poll();\n        if (poll_status != coro::poll_status::event)\n        {\n            co_return; // Handle error, see poll_status for detailed error states.\n        }\n\n        // Accept the incoming client connection.\n        auto client = server.accept();\n\n        // Verify the incoming connection was accepted correctly.\n        if (!client.socket().is_valid())\n        {\n            co_return; // Handle error.\n        }\n\n        // Now wait for the client message; it is small enough that it should always arrive\n        // with a single recv() call.\n        poll_status = co_await client.poll(coro::poll_op::read);\n        if (poll_status != coro::poll_status::event)\n        {\n            co_return; // Handle error.\n        }\n\n        // Prepare a buffer and recv() the client's message.  This function returns the recv() status\n        // as well as a span\u003cchar\u003e that overlaps the given buffer for the bytes that were read.  
This\n        // can be used to resize the buffer or work with the bytes without modifying the buffer at all.\n        std::string request(256, '\\0');\n        auto [recv_status, recv_bytes] = client.recv(request);\n        if (recv_status != coro::net::recv_status::ok)\n        {\n            co_return; // Handle error, see net::recv_status for detailed error states.\n        }\n\n        request.resize(recv_bytes.size());\n        std::cout \u003c\u003c \"server: \" \u003c\u003c request \u003c\u003c \"\\n\";\n\n        // Make sure the client socket can be written to.\n        poll_status = co_await client.poll(coro::poll_op::write);\n        if (poll_status != coro::poll_status::event)\n        {\n            co_return; // Handle error.\n        }\n\n        // Send the server response to the client.\n        // This message is small enough that it will be sent in a single send() call, but to demonstrate\n        // how to use the 'remaining' portion of the send() result this is wrapped in a loop until\n        // all the bytes are sent.\n        std::string           response  = \"Hello from server.\";\n        std::span\u003cconst char\u003e remaining = response;\n        do\n        {\n            // Optimistically send() prior to polling.\n            auto [send_status, r] = client.send(remaining);\n            if (send_status != coro::net::send_status::ok)\n            {\n                co_return; // Handle error, see net::send_status for detailed error states.\n            }\n\n            if (r.empty())\n            {\n                break; // The entire message has been sent.\n            }\n\n            // Re-assign remaining bytes for the next loop iteration and poll for the socket to be\n            // able to be written to again.\n            remaining    = r;\n            auto pstatus = co_await client.poll(coro::poll_op::write);\n            if (pstatus != coro::poll_status::event)\n            {\n                co_return; // Handle error.\n   
         }\n        } while (true);\n\n        co_return;\n    };\n\n    auto make_client_task = [](std::shared_ptr\u003ccoro::io_scheduler\u003e scheduler) -\u003e coro::task\u003cvoid\u003e\n    {\n        // Immediately schedule onto the scheduler.\n        co_await scheduler-\u003eschedule();\n\n        // Create the tcp::client with the default settings; see tcp::client for how to set the\n        // ip address, port, and optionally enabling SSL/TLS.\n        coro::net::tcp::client client{scheduler};\n\n        // Omitting error checking code for the client; each step should check the status and\n        // verify the number of bytes sent or received.\n\n        // Connect to the server.\n        co_await client.connect();\n\n        // Make sure the client socket can be written to.\n        co_await client.poll(coro::poll_op::write);\n\n        // Send the request data.\n        client.send(std::string_view{\"Hello from client.\"});\n\n        // Wait for the response and receive it.\n        co_await client.poll(coro::poll_op::read);\n        std::string response(256, '\\0');\n        auto [recv_status, recv_bytes] = client.recv(response);\n        response.resize(recv_bytes.size());\n\n        std::cout \u003c\u003c \"client: \" \u003c\u003c response \u003c\u003c \"\\n\";\n        co_return;\n    };\n\n    // Create and wait for the server and client tasks to complete.\n    coro::sync_wait(coro::when_all(make_server_task(scheduler), make_client_task(scheduler)));\n}\n```\n\nExample output:\n```bash\n$ ./examples/coro_io_scheduler\nio_scheduler::thread_pool worker 0 starting\nio_scheduler::process event thread start\nio_scheduler::thread_pool worker 1 starting\nserver: Hello from client.\nclient: Hello from server.\nio_scheduler::thread_pool worker 0 stopping\nio_scheduler::thread_pool worker 1 stopping\nio_scheduler::process event thread stop\n```\n\n### tcp_echo_server\nSee [examples/coro_tcp_echo_server.cpp](./examples/coro_tcp_echo_server.cpp) for a 
basic TCP echo server implementation.  You can use tools like `ab` to benchmark against this echo server.\n\nUsing an `Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz`:\n\n```bash\n$ ab -n 10000000 -c 1000 -k http://127.0.0.1:8888/\nThis is ApacheBench, Version 2.3 \u003c$Revision: 1879490 $\u003e\nCopyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/\nLicensed to The Apache Software Foundation, http://www.apache.org/\n\nBenchmarking 127.0.0.1 (be patient)\nCompleted 1000000 requests\nCompleted 2000000 requests\nCompleted 3000000 requests\nCompleted 4000000 requests\nCompleted 5000000 requests\nCompleted 6000000 requests\nCompleted 7000000 requests\nCompleted 8000000 requests\nCompleted 9000000 requests\nCompleted 10000000 requests\nFinished 10000000 requests\n\n\nServer Software:\nServer Hostname:        127.0.0.1\nServer Port:            8888\n\nDocument Path:          /\nDocument Length:        0 bytes\n\nConcurrency Level:      1000\nTime taken for tests:   90.290 seconds\nComplete requests:      10000000\nFailed requests:        0\nNon-2xx responses:      10000000\nKeep-Alive requests:    10000000\nTotal transferred:      1060000000 bytes\nHTML transferred:       0 bytes\nRequests per second:    110753.80 [#/sec] (mean)\nTime per request:       9.029 [ms] (mean)\nTime per request:       0.009 [ms] (mean, across all concurrent requests)\nTransfer rate:          11464.75 [Kbytes/sec] received\n\nConnection Times (ms)\n              min  mean[+/-sd] median   max\nConnect:        0    0   0.2      0      24\nProcessing:     2    9   1.6      9      77\nWaiting:        0    9   1.6      9      77\nTotal:          2    9   1.6      9      88\n\nPercentage of the requests served within a certain time (ms)\n  50%      9\n  66%      9\n  75%     10\n  80%     10\n  90%     11\n  95%     12\n  98%     14\n  99%     15\n 100%     88 (longest request)\n```\n\n### http_200_ok_server\nSee 
[examples/coro_http_200_ok_server.cpp](./examples/coro_http_200_ok_server.cpp) for a basic HTTP 200 OK response server implementation.  You can use tools like `wrk` or `autocannon` to benchmark against this HTTP 200 OK server.\n\nUsing an `Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz`:\n\n```bash\n$ ./wrk -c 1000 -d 60s -t 6 http://127.0.0.1:8888/\nRunning 1m test @ http://127.0.0.1:8888/\n  6 threads and 1000 connections\n  Thread Stats   Avg      Stdev     Max   +/- Stdev\n    Latency     2.96ms    3.61ms  82.22ms   90.06%\n    Req/Sec    54.69k     6.75k   70.51k    80.88%\n  19569177 requests in 1.00m, 1.08GB read\nRequests/sec: 325778.99\nTransfer/sec:     18.33MB\n```\n\n### Requirements\n    C++20 Compiler with coroutine support\n        g++ [10.2.1, 10.3.1, 11, 12, 13]\n        clang++ [16, 17]\n            No networking/TLS support on MacOS.\n        MSVC Windows 2022 CL\n            No networking/TLS support.\n    CMake\n    make or ninja\n    pthreads\n    openssl\n    gcov/lcov (For generating coverage only)\n\n### Build Instructions\n\n#### Tested Operating Systems\n\n * ubuntu:20.04, 22.04\n * fedora:32-40\n * openSUSE/leap:15.6\n * Windows 2022\n * Emscripten 3.1.45\n * MacOS 12\n\n#### Cloning the project\nThis project uses git submodules; to properly check out this project use:\n\n    git clone --recurse-submodules \u003clibcoro-url\u003e\n\nThis project depends on the following libraries:\n * [libc-ares](https://github.com/c-ares/c-ares) For the async DNS resolver; this is a git submodule.\n * [catch2](https://github.com/catchorg/Catch2) For testing; this is embedded in the `test/` directory.\n\n#### Building\n    mkdir Release \u0026\u0026 cd Release\n    cmake -DCMAKE_BUILD_TYPE=Release ..\n    cmake --build .\n\nCMake Options:\n\n| Name                          | Default | Description                                                                                        
|\n|:------------------------------|:--------|:---------------------------------------------------------------------------------------------------|\n| LIBCORO_EXTERNAL_DEPENDENCIES | OFF     | Use CMake find_package to resolve dependencies instead of embedded libraries.                      |\n| LIBCORO_BUILD_TESTS           | ON      | Should the tests be built? Note this is only default ON if libcoro is the root CMakeLists.txt      |\n| LIBCORO_CODE_COVERAGE         | OFF     | Should code coverage be enabled? Requires tests to be enabled.                                     |\n| LIBCORO_BUILD_EXAMPLES        | ON      | Should the examples be built? Note this is only default ON if libcoro is the root CMakeLists.txt   |\n| LIBCORO_FEATURE_NETWORKING    | ON      | Include networking features. Requires Linux platform. MSVC/MacOS not supported.                    |\n| LIBCORO_FEATURE_TLS           | ON      | Include TLS features. Requires networking to be enabled. MSVC/MacOS not supported.                 
|\n\n#### Adding to your project\n\n##### add_subdirectory()\n\n```cmake\n# Include the checked out libcoro code in your CMakeLists.txt file\nadd_subdirectory(path/to/libcoro)\n\n# Link the libcoro cmake target to your project(s).\ntarget_link_libraries(${PROJECT_NAME} PUBLIC libcoro)\n\n```\n\n##### FetchContent\nCMake can include the project directly by downloading the source and compiling and linking it into your project via FetchContent; below is an example of how you might do this within your project.\n\n```cmake\ncmake_minimum_required(VERSION 3.11)\n\n# Fetch the project and make it available for use.\ninclude(FetchContent)\nFetchContent_Declare(\n    libcoro\n    GIT_REPOSITORY https://github.com/jbaldwin/libcoro.git\n    GIT_TAG        \u003cTAG_OR_GIT_HASH\u003e\n)\nFetchContent_MakeAvailable(libcoro)\n\n# Link the libcoro cmake target to your project(s).\ntarget_link_libraries(${PROJECT_NAME} PUBLIC libcoro)\n\n```\n\n##### Package managers\nlibcoro is available via the package managers [Conan](https://conan.io/center/libcoro) and [vcpkg](https://vcpkg.io/).\n\n### Contributing\n\nContributions are welcome; if you have ideas or bugs please open an issue. PRs are also welcome; if you are adding a bugfix or a feature please include tests to verify that the feature or bugfix works properly. If they aren't included I will ask you to add some!\n\n#### Tests\nThe tests will automatically be run by github actions on creating a pull request. 
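The tests use catch2 (embedded in `test/`); a new test typically follows its tag-per-group pattern. A hypothetical sketch (the include path depends on the embedded catch2 setup, and the test name and tag are illustrative, not from the repository):

```C++
#include <catch2/catch_test_macros.hpp> // path may differ for the embedded catch2
#include <coro/coro.hpp>

// Hypothetical test tagged with its group, mirroring the "[tcp_server]" style tags
// so it can be selected by ./test/libcoro_test "[sync_wait]".
TEST_CASE("sync_wait returns the task's value", "[sync_wait]")
{
    auto make_task = []() -> coro::task<int> { co_return 42; };
    REQUIRE(coro::sync_wait(make_task()) == 42);
}
```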
They can also be run locally after building from the build directory:\n\n    # Invoke via cmake with all output from the tests displayed to console:\n    ctest -VV\n\n    # Or invoke directly; you can pass the names of tests to execute. The framework used is catch2.\n    # Tests are tagged with their group, below is how to run all of the coro::net::tcp::server tests:\n    ./test/libcoro_test \"[tcp_server]\"\n\nIf you open a PR for a bugfix or new feature please include tests to verify that the change is working as intended. If your PR doesn't include tests I will ask you to add them and won't merge until they are added and working properly. Tests are found in the `/test` directory and are organized by object type.\n\n### Support\n\nFile bug reports, feature requests and questions using [GitHub libcoro Issues](https://github.com/jbaldwin/libcoro/issues)\n\nCopyright © 2020-2025 Josh Baldwin\n\n[badge.language]: https://img.shields.io/badge/language-C%2B%2B20-yellow.svg\n[badge.license]: https://img.shields.io/badge/license-Apache--2.0-blue\n\n[language]: https://en.wikipedia.org/wiki/C%2B%2B17\n[license]: https://en.wikipedia.org/wiki/Apache_License\n","funding_links":[],"categories":["HarmonyOS","Parallel and Async Library"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjbaldwin%2Flibcoro","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjbaldwin%2Flibcoro","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjbaldwin%2Flibcoro/lists"}