{"id":13492866,"url":"https://github.com/google-deepmind/acme","last_synced_at":"2026-03-11T03:02:54.618Z","repository":{"id":37727345,"uuid":"260420185","full_name":"google-deepmind/acme","owner":"google-deepmind","description":"A library of reinforcement learning components and agents","archived":false,"fork":false,"pushed_at":"2026-02-16T15:43:58.000Z","size":6632,"stargazers_count":3919,"open_issues_count":94,"forks_count":527,"subscribers_count":75,"default_branch":"master","last_synced_at":"2026-02-16T21:43:00.491Z","etag":null,"topics":["agents","reinforcement-learning","research"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-deepmind.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2020-05-01T09:18:12.000Z","updated_at":"2026-02-16T15:44:02.000Z","dependencies_parsed_at":"2023-09-07T20:34:46.011Z","dependency_job_id":"2965a8bf-d4e6-4d35-bbe6-5d8cac55be1c","html_url":"https://github.com/google-deepmind/acme","commit_stats":{"total_commits":1176,"total_committers":84,"mean_commits":14.0,"dds":0.7848639455782314,"last_synced_commit":"1177501df180edadd9f125cf5ee960f74bff64af"},"previous_names":["google-deepmind/acme","deepmind/acme"],"tags_count":12,"template":false,"template_full_name":null,"purl":"pkg:github/google-deepmind/acme","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Facme","tags_url":"
https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Facme/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Facme/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Facme/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-deepmind","download_url":"https://codeload.github.com/google-deepmind/acme/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Facme/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30368580,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-10T21:41:54.280Z","status":"online","status_checked_at":"2026-03-11T02:00:07.027Z","response_time":84,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","reinforcement-learning","research"],"created_at":"2024-07-31T19:01:09.993Z","updated_at":"2026-03-11T03:02:54.591Z","avatar_url":"https://github.com/google-deepmind.png","language":"Python","readme":"\u003cimg src=\"docs/imgs/acme.png\" width=\"50%\"\u003e\n\n# Acme: a research framework for reinforcement learning\n\n[![PyPI Python Version][pypi-versions-badge]][pypi]\n[![PyPI version][pypi-badge]][pypi]\n[![acme-tests][tests-badge]][tests]\n[![Documentation Status][rtd-badge]][documentation]\n\n[pypi-versions-badge]: 
https://img.shields.io/pypi/pyversions/dm-acme\n[pypi-badge]: https://badge.fury.io/py/dm-acme.svg\n[pypi]: https://pypi.org/project/dm-acme/\n[tests-badge]: https://github.com/deepmind/acme/workflows/acme-tests/badge.svg\n[tests]: https://github.com/deepmind/acme/actions/workflows/ci.yml\n[rtd-badge]: https://readthedocs.org/projects/dm-acme/badge/?version=latest\n\nAcme is a library of reinforcement learning (RL) building blocks that strives to\nexpose simple, efficient, and readable agents. These agents first and foremost\nserve as reference implementations and provide strong baselines\nfor algorithm performance. However, the baseline agents exposed by Acme should\nalso provide enough flexibility and simplicity that they can be used as a\nstarting point for novel research. Finally, the building blocks of Acme are\ndesigned in such a way that the agents can be run at multiple scales (e.g.\nsingle-stream vs. distributed agents).\n\n## Getting started\n\nThe quickest way to get started is to take a look at the detailed working code\nexamples found in the [examples] subdirectory. These show how to instantiate a\nnumber of different agents and run them within a variety of environments. See\nthe [quickstart notebook][Quickstart] for an even quicker dive into using a\nsingle agent. Even more detail on the internal construction of an agent can be\nfound inside our [tutorial notebook][Tutorial]. Finally, a full description of Acme\nand its underlying components can be found by referring to the [documentation].\nMore background information and details behind the design decisions can be found\nin our [technical report][Paper].\n\n\u003e NOTE: Acme is first and foremost a framework for RL research written by\n\u003e researchers, for researchers. We use it for our own work on a daily basis. So\n\u003e with that in mind, while we will make every attempt to keep everything in good\n\u003e working order, things may break occasionally. 
If they do, we will make our best\n\u003e effort to fix them as quickly as possible!\n\n[examples]: examples/\n[tutorial]: https://colab.research.google.com/github/deepmind/acme/blob/master/examples/tutorial.ipynb\n[quickstart]: https://colab.research.google.com/github/deepmind/acme/blob/master/examples/quickstart.ipynb\n[documentation]: https://dm-acme.readthedocs.io/\n[paper]: https://arxiv.org/abs/2006.00979\n\n## Installation\n\nTo get up and running quickly, just follow the steps below:\n\n1.  While you can install Acme in your standard Python environment, we\n    *strongly* recommend using a\n    [Python virtual environment](https://docs.python.org/3/tutorial/venv.html)\n    to manage your dependencies. This should help to avoid version conflicts and\n    just generally make the installation process easier.\n\n    ```bash\n    python3 -m venv acme\n    source acme/bin/activate\n    pip install --upgrade pip setuptools wheel\n    ```\n\n1.  While the core `dm-acme` library can be pip installed directly, the set of\n    dependencies included for installation is minimal. In particular, to run any\n    of the included agents you will also need either [JAX] or [TensorFlow]\n    depending on the agent. As a result we recommend installing these components\n    as well, i.e.\n\n    ```bash\n    pip install dm-acme[jax,tf]\n    ```\n\n1.  Finally, to install a few example environments (including [gym],\n    [dm_control], and [bsuite]):\n\n    ```bash\n    pip install dm-acme[envs]\n    ```\n\n1.  
**Installing from GitHub**: if you're interested in running the\n    bleeding-edge version of Acme, you can do so by cloning the Acme GitHub\n    repository and then executing the following command from the main directory\n    (where `setup.py` is located):\n\n    ```bash\n    pip install .[jax,tf,testing,envs]\n    ```\n\n## Citing Acme\n\nIf you use Acme in your work, please cite the updated accompanying\n[technical report][paper]:\n\n```bibtex\n@article{hoffman2020acme,\n    title={Acme: A Research Framework for Distributed Reinforcement Learning},\n    author={\n        Matthew W. Hoffman and Bobak Shahriari and John Aslanides and\n        Gabriel Barth-Maron and Nikola Momchev and Danila Sinopalnikov and\n        Piotr Sta\'nczyk and Sabela Ramos and Anton Raichuk and\n        Damien Vincent and L\'eonard Hussenot and Robert Dadashi and\n        Gabriel Dulac-Arnold and Manu Orsini and Alexis Jacq and\n        Johan Ferret and Nino Vieillard and Seyed Kamyar Seyed Ghasemipour and\n        Sertan Girgin and Olivier Pietquin and Feryal Behbahani and\n        Tamara Norman and Abbas Abdolmaleki and Albin Cassirer and\n        Fan Yang and Kate Baumli and Sarah Henderson and Abe Friesen and\n        Ruba Haroun and Alex Novikov and Sergio G\'omez Colmenarejo and\n        Serkan Cabi and Caglar Gulcehre and Tom Le Paine and\n        Srivatsan Srinivasan and Andrew Cowie and Ziyu Wang and Bilal Piot and\n        Nando de Freitas\n    },\n    year={2020},\n    journal={arXiv preprint arXiv:2006.00979},\n    url={https://arxiv.org/abs/2006.00979},\n}\n```\n\n[JAX]: https://github.com/google/jax\n[TensorFlow]: https://tensorflow.org\n[gym]: https://github.com/openai/gym\n[dm_control]: https://github.com/deepmind/dm_control\n[dm_env]: https://github.com/deepmind/dm_env\n[bsuite]: https://github.com/deepmind/bsuite\n","funding_links":[],"categories":["Uncategorized","Python","Reinforcement Learning","Industry Strength Reinforcement Learning","Simulation \u0026 Benchmarking 
Environments"],"sub_categories":["Uncategorized","Others","Multimodal Model Benchmarks"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-deepmind%2Facme","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgoogle-deepmind%2Facme","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgoogle-deepmind%2Facme/lists"}