{"id":23940454,"url":"https://github.com/pockerman/cuberl","last_synced_at":"2026-02-04T22:03:57.619Z","repository":{"id":40464566,"uuid":"425203995","full_name":"pockerman/cuberl","owner":"pockerman","description":"Reinforcement learning algorithms  with c++","archived":false,"fork":false,"pushed_at":"2026-02-01T14:26:29.000Z","size":4620,"stargazers_count":7,"open_issues_count":27,"forks_count":1,"subscribers_count":1,"default_branch":"master","last_synced_at":"2026-02-01T20:34:42.354Z","etag":null,"topics":["cpp","pytorch","reinforcement-learning","reinforcement-learning-algorithms"],"latest_commit_sha":null,"homepage":"https://pockerman.github.io/bitrl_cuberl_docs/","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pockerman.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2021-11-06T09:28:52.000Z","updated_at":"2026-02-01T14:26:32.000Z","dependencies_parsed_at":"2024-02-24T11:26:33.553Z","dependency_job_id":"c52a0819-99fd-46b3-aaf1-e331d5ced760","html_url":"https://github.com/pockerman/cuberl","commit_stats":null,"previous_names":[],"tags_count":30,"template":false,"template_full_name":null,"purl":"pkg:github/pockerman/cuberl","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pockerman%2Fcuberl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pockerman%2Fcuberl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pockerman%2Fcuberl/releases","m
anifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pockerman%2Fcuberl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/pockerman","download_url":"https://codeload.github.com/pockerman/cuberl/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pockerman%2Fcuberl/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29097231,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-04T21:05:08.033Z","status":"ssl_error","status_checked_at":"2026-02-04T21:04:53.031Z","response_time":62,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cpp","pytorch","reinforcement-learning","reinforcement-learning-algorithms"],"created_at":"2025-01-06T03:17:14.085Z","updated_at":"2026-02-04T22:03:57.613Z","avatar_url":"https://github.com/pockerman.png","language":"C++","readme":"# cuberl\n\n_cuberl_ is a C++ library containing implementations of various reinforcement learning, filtering and planning algorithms.\nThe following is an indicative list of examples. 
\n \n\n## Examples\n\n### Introductory\n\n- \u003ca href=\"examples/intro/intro_example_1/intro_example_1.md\"\u003eMonte Carlo integration\u003c/a\u003e\n- \u003ca href=\"examples/intro/intro_example_2/intro_example_2.md\"\u003eUsing PyTorch C++ API Part 1\u003c/a\u003e\n- \u003ca href=\"examples/intro/intro_example_3/intro_example_3.md\"\u003eUsing PyTorch C++ API Part 2\u003c/a\u003e\n- \u003ca href=\"examples/intro/intro_example_4/intro_example_4.md\"\u003eUsing PyTorch C++ API Part 3\u003c/a\u003e\n- \u003ca href=\"examples/intro/intro_example_6/intro_example_6.md\"\u003eToy Markov chain\u003c/a\u003e\n- \u003ca href=\"examples/intro/intro_example_7/intro_example_7.md\"\u003eImportance sampling\u003c/a\u003e\n- \u003ca href=\"examples/intro/intro_example_8/intro_example_8.md\"\u003eVector addition with CUDA\u003c/a\u003e\n\n### Reinforcement learning\n\n- \u003ca href=\"https://pockerman-py-cubeai.readthedocs.io/en/latest/ExamplesCpp/rl/rl_example_0.html\"\u003eDummyAgent on  ```MountainCar-v0```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_2/rl_example_2.md\"\u003eMulti-armed bandits\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_6/rl_example_6.cpp\"\u003eIterative policy evaluation on ```FrozenLake```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_7/rl_example_7.cpp\"\u003ePolicy iteration on ```FrozenLake```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_8/rl_example_8.md\"\u003eValue iteration on ```FrozenLake```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_9/rl_example_9.md\"\u003eSARSA on ```CliffWalking```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_10/rl_example_10.md\"\u003eQ-learning on ```CliffWalking```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_11/rl_example_11.md\"\u003eA2C on ```CartPole```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_12/rl_example_12.md\"\u003eDQN on ```Gridworld```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_15/rl_example_15.md\"\u003eDQN 
on ```Gridworld``` with experience replay\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_13/rl_example_13.md\"\u003eREINFORCE algorithm on ```CartPole```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_14/rl_example_14.cpp\"\u003eExpected SARSA on ```CliffWalking```\u003c/a\u003e\n- \u003ca href=\"examples/example_15/example_15.cpp\"\u003eApproximate Monte Carlo on ```MountainCar```\u003c/a\u003e\n- \u003ca href=\"examples/rl_example_16/rl_example_16.md\"\u003eMonte Carlo tree search on ```Taxi```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_18.cpp\"\u003eDouble Q-learning on ```CartPole```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_19/rl_example_19.cpp\"\u003eFirst-visit Monte Carlo on ```FrozenLake```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_20/rl_example_20.md\"\u003eREINFORCE algorithm with baseline on ```CartPole```\u003c/a\u003e\n- \u003ca href=\"examples/rl/rl_example_22/rl_example_22.md\"\u003ePPO on ```LunarLander```\u003c/a\u003e\n\n## Installation\n\nThe cuberl library has a number of dependencies:\n\n- A compiler that supports C++20, e.g. g++-11\n- \u003ca href=\"https://www.boost.org/\"\u003eBoost C++\u003c/a\u003e\n- \u003ca href=\"https://cmake.org/\"\u003eCMake\u003c/a\u003e \u003e= 3.6\n- \u003ca href=\"https://eigen.tuxfamily.org/index.php?title=Main_Page\"\u003eEigen\u003c/a\u003e\n- \u003ca href=\"https://github.com/google/googletest\"\u003eGtest\u003c/a\u003e (if configured with tests)\n\nIn addition, the library bundles the following libraries under ```include/cubeai/extern``` (you don't need to install these):\n\n- \u003ca href=\"https://github.com/elnormous/HTTPRequest\"\u003eHTTPRequest\u003c/a\u003e\n- \u003ca href=\"https://github.com/nlohmann/json\"\u003enlohmann/json\u003c/a\u003e\n\n### Enabling PyTorch and CUDA\n\n_cuberl_ can be compiled with CUDA and/or PyTorch support. If PyTorch has been compiled with CUDA support, then\nyou need to enable CUDA as well. 
In order to do so, set the flag _USE_CUDA_ in _CMakeLists.txt_ to _ON_.\n_cuberl_ assumes that PyTorch is compiled with the C++11 ABI.\n\n### Documentation dependencies\n\nThere are extra dependencies if you want to generate the documentation, namely:\n\n- \u003ca href=\"https://www.doxygen.nl/\"\u003eDoxygen\u003c/a\u003e\n- \u003ca href=\"https://www.sphinx-doc.org/en/master/\"\u003eSphinx\u003c/a\u003e\n- \u003ca href=\"https://github.com/readthedocs/sphinx_rtd_theme\"\u003esphinx_rtd_theme\u003c/a\u003e\n- \u003ca href=\"https://github.com/breathe-doc/breathe\"\u003ebreathe\u003c/a\u003e\n- \u003ca href=\"https://github.com/crossnox/m2r2\"\u003em2r2\u003c/a\u003e\n\nNote that if Doxygen is not found on your system, CMake will skip building the documentation. On an Ubuntu/Debian-based machine, you can install\nDoxygen using\n\n```bash\nsudo apt-get install doxygen\n```\n\nSimilarly, install ```sphinx_rtd_theme``` using\n\n```bash\npip install sphinx_rtd_theme\n```\n\nInstall ```breathe``` using\n\n```bash\npip install breathe\n```\n\nInstall ```m2r2``` using\n\n```bash\npip install m2r2\n```\n\n## Issues\n\n#### undefined reference to ```cudaLaunchKernelExC@libcudart.so.11.0```\n\nCheck your CUDA version with ```nvidia-smi``` and make sure it is compatible with the PyTorch library you are linking against.\n\n#### TypeError: Descriptors cannot be created directly.\n\nThis issue may occur when using the TensorBoardServer in _cuberl_.\nIt is related to a known _protobuf_ issue; see https://stackoverflow.com/questions/72441758/typeerror-descriptors-cannot-not-be-created-directly for possible solutions.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpockerman%2Fcuberl","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpockerman%2Fcuberl","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpockerman%2Fcuberl/lists"}