{"id":19705745,"url":"https://github.com/llnl/abmarl","last_synced_at":"2025-09-08T07:31:08.930Z","repository":{"id":38325956,"uuid":"310139094","full_name":"LLNL/Abmarl","owner":"LLNL","description":"Agent Based Modeling and Reinforcement Learning","archived":false,"fork":false,"pushed_at":"2024-05-13T15:49:41.000Z","size":15033,"stargazers_count":71,"open_issues_count":67,"forks_count":19,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-09-04T12:00:10.696Z","etag":null,"topics":["machine-learning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/LLNL.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-11-04T23:13:34.000Z","updated_at":"2025-07-17T06:57:05.000Z","dependencies_parsed_at":"2023-10-05T03:11:57.305Z","dependency_job_id":"316dba39-8b9a-4457-a921-3ac77a169d9e","html_url":"https://github.com/LLNL/Abmarl","commit_stats":{"total_commits":1183,"total_committers":6,"mean_commits":"197.16666666666666","dds":0.1301775147928994,"last_synced_commit":"4d27ac5e8e310676b69081f1f7c9f78409f2cb12"},"previous_names":[],"tags_count":12,"template":false,"template_full_name":null,"purl":"pkg:github/LLNL/Abmarl","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LLNL%2FAbmarl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LLNL%2FAbmarl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LLNL%2FAbmarl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/repositories/LLNL%2FAbmarl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/LLNL","download_url":"https://codeload.github.com/LLNL/Abmarl/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LLNL%2FAbmarl/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":274152132,"owners_count":25231285,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-08T02:00:09.813Z","response_time":121,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["machine-learning"],"created_at":"2024-11-11T21:29:56.801Z","updated_at":"2025-09-08T07:31:08.108Z","avatar_url":"https://github.com/LLNL.png","language":"Python","funding_links":[],"categories":[],"sub_categories":[],"readme":"# Abmarl\n\nAbmarl is a package for developing Agent-Based Simulations and training them with\nMultiAgent Reinforcement Learning (MARL). We provide an intuitive command line\ninterface for engaging with the full workflow of MARL experimentation: training,\nvisualizing, and analyzing agent behavior. We define an Agent-Based Simulation\nInterface and Simulation Manager, which control which agents interact with the\nsimulation at each step. We support integration with popular reinforcement learning\nsimulation interfaces, including gym.Env, MultiAgentEnv, and OpenSpiel. 
We define\nour own GridWorld Simulation Framework for creating custom grid-based Agent-Based\nSimulations.\n\nAbmarl leverages RLlib’s framework for reinforcement learning and extends it to\nmore easily support custom simulations, algorithms, and policies. We enable researchers to rapidly\nprototype MARL experiments and simulation design and lower the barrier for pre-existing\nprojects to prototype RL as a potential solution.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/LLNL/Abmarl/actions/workflows/build-and-test.yml/badge.svg\" alt=\"Build and Test Badge\" /\u003e\n  \u003cimg src=\"https://github.com/LLNL/Abmarl/actions/workflows/build-docs.yml/badge.svg\" alt=\"Sphinx docs Badge\" /\u003e\n  \u003cimg src=\"https://github.com/LLNL/Abmarl/actions/workflows/lint.yml/badge.svg\" alt=\"Lint Badge\" /\u003e\n\u003c/p\u003e\n\n\n## Quickstart\n\nTo use Abmarl, install via pip: `pip install abmarl`\n\nTo develop Abmarl, clone the repository and install via pip's development mode.\n\n```\ngit clone git@github.com:LLNL/Abmarl.git\ncd Abmarl\npip install -r requirements/requirements_all.txt\npip install -e . 
--no-deps\n```\n\nTrain agents in a multicorridor simulation:\n```\nabmarl train examples/multi_corridor_example.py\n```\n\nVisualize trained behavior:\n```\nabmarl visualize ~/abmarl_results/MultiCorridor-2020-08-25_09-30/ -n 5 --record\n```\n\nNote: If you install with `conda`, then you must also include `ffmpeg` in your\nvirtual environment.\n\n## Documentation\n\nYou can find the latest Abmarl documentation on\n[our ReadTheDocs page](https://abmarl.readthedocs.io/en/latest/index.html).\n\n[![Documentation Status](https://readthedocs.org/projects/abmarl/badge/?version=latest)](https://abmarl.readthedocs.io/en/latest/?badge=latest)\n\n\n## Community\n\n### Citation\n\n[![DOI](https://joss.theoj.org/papers/10.21105/joss.03424/status.svg)](https://doi.org/10.21105/joss.03424)\n\nAbmarl has been published in the Journal of Open Source Software (JOSS). It can\nbe cited using the following BibTeX entry:\n\n```\n@article{Rusu2021,\n  doi = {10.21105/joss.03424},\n  url = {https://doi.org/10.21105/joss.03424},\n  year = {2021},\n  publisher = {The Open Journal},\n  volume = {6},\n  number = {64},\n  pages = {3424},\n  author = {Edward Rusu and Ruben Glatt},\n  title = {Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning},\n  journal = {Journal of Open Source Software}\n}\n```\n\n### Reporting Issues\n\nPlease use our issue tracker to report any bugs or submit feature requests. Great\nbug reports tend to have:\n- A quick summary and/or background\n- Steps to reproduce; sample code is best\n- What you expected would happen\n- What actually happens\n\n### Contributing\n\nPlease submit contributions via pull requests from a forked repository. 
Find out\nmore about this process [here](https://guides.github.com/introduction/flow/index.html).\nAll contributions are under the BSD 3-Clause License that covers the project.\n\n## Release\n\nLLNL-CODE-815883\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fllnl%2Fabmarl","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fllnl%2Fabmarl","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fllnl%2Fabmarl/lists"}