Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/sjtu-marl/malib
A parallel framework for population-based multi-agent reinforcement learning.
- Host: GitHub
- URL: https://github.com/sjtu-marl/malib
- Owner: sjtu-marl
- License: mit
- Created: 2021-05-07T01:08:37.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2023-12-14T09:46:35.000Z (about 1 year ago)
- Last Synced: 2024-08-09T13:19:32.198Z (4 months ago)
- Topics: distributed, games, multiagent, parallel, python, ray, reinforcement-learning
- Language: Python
- Homepage: https://malib.io
- Size: 9.18 MB
- Stars: 481
- Watchers: 9
- Forks: 59
- Open Issues: 6
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- StarryDivineSky - sjtu-marl/malib - An open-source large-scale parallel training framework for MARL. MALib supports a rich set of population-based training schemes (e.g., self-play, PSRO, league training) and implements and optimizes common multi-agent deep reinforcement learning algorithms, reducing the parallelization burden on researchers while substantially improving training efficiency. In addition, built on Ray's distributed runtime, MALib implements a new centralized task-dispatching model; compared with common multi-agent reinforcement learning training frameworks (RLlib, PyMARL, OpenSpiel), it achieves several-fold higher throughput and training speed on the same hardware. MALib currently supports common multi-agent environments (StarCraft, Google Football, card games, multi-player Atari, etc.), with support for scenarios such as autonomous driving and smart grids planned. (Time series / Other web services)
- awesome-production-machine-learning - MALib - MALib is a parallel framework of population-based learning nested with reinforcement learning methods. MALib provides higher-level abstractions of MARL training paradigms, which enables efficient code reuse and flexible deployments on different distributed computing paradigms. (Industry Strength RL)
README
# MALib: A parallel framework for population-based reinforcement learning
[![GitHub license](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/sjtu-marl/malib/blob/main/LICENSE)
[![Documentation Status](https://readthedocs.org/projects/malib/badge/?version=latest)](https://malib.readthedocs.io/en/latest/?badge=latest)
[![Build Status](https://app.travis-ci.com/sjtu-marl/malib.svg?branch=main)](https://app.travis-ci.com/sjtu-marl/malib.svg?branch=main)
[![codecov](https://codecov.io/gh/sjtu-marl/malib/branch/main/graph/badge.svg?token=CJX14B2AJG)](https://codecov.io/gh/sjtu-marl/malib)

MALib is a parallel framework of population-based learning nested with reinforcement learning methods, such as Policy Space Response Oracle (PSRO), Self-Play, and Neural Fictitious Self-Play. MALib provides higher-level abstractions of MARL training paradigms, enabling efficient code reuse and flexible deployment on different distributed computing paradigms.
![architecture](docs/imgs/architecture3.png)
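The population-based loop mentioned above (e.g., PSRO) can be illustrated with a minimal, framework-free sketch. This is a toy example for intuition only, not MALib code: it grows a strategy population for Rock-Paper-Scissors by repeatedly solving the empirical meta-game with fictitious play and adding a best response.

```python
# Toy sketch of the PSRO loop (illustrative only, not MALib's API):
# grow a population of pure strategies for Rock-Paper-Scissors by
# alternating meta-game solving and best-response computation.

# Row player's payoff for pure strategies (0=rock, 1=paper, 2=scissors).
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def meta_payoff(population):
    """Empirical payoff matrix restricted to the current population."""
    return [[PAYOFF[i][j] for j in population] for i in population]

def fictitious_play(matrix, iters=2000):
    """Approximate a symmetric equilibrium of the meta-game."""
    n = len(matrix)
    counts = [1.0] * n
    for _ in range(iters):
        total = sum(counts)
        mix = [c / total for c in counts]
        # best respond to the opponent's empirical mixture
        values = [sum(matrix[i][j] * mix[j] for j in range(n)) for i in range(n)]
        counts[max(range(n), key=lambda i: values[i])] += 1.0
    total = sum(counts)
    return [c / total for c in counts]

def best_response(population, meta_strategy):
    """Exact best response (over all pure strategies) to the population mixture."""
    values = [sum(PAYOFF[a][population[j]] * meta_strategy[j]
                  for j in range(len(population))) for a in range(3)]
    return max(range(3), key=lambda a: values[a])

population = [0]  # start with "rock" only
for _ in range(4):  # a few PSRO iterations
    meta = fictitious_play(meta_payoff(population))
    br = best_response(population, meta)
    if br not in population:
        population.append(br)

print(sorted(population))  # prints [0, 1, 2]: the population covers all strategies
```

MALib's contribution is parallelizing exactly these two nested stages (meta-game evaluation and best-response training) across distributed workers.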
## Installation
MALib is easy to install. It has been tested on Python 3.8 and above. This guide assumes Ubuntu 18.04 or newer (currently, MALib runs only on Linux). We strongly recommend using [conda](https://docs.conda.io/en/latest/miniconda.html) to manage dependencies and avoid version conflicts. The example below builds a Python 3.8 conda environment.
```bash
conda create -n malib python==3.8 -y
conda activate malib

# install dependencies
./install.sh
```

## Environments
MALib integrates many popular reinforcement learning environments; some of them are listed below.
- [x] [OpenSpiel](https://github.com/deepmind/open_spiel): A framework for Reinforcement Learning in games, it provides plenty of environments for the research of game theory.
- [x] [Gym](https://github.com/openai/gym): An open-source collection of environments for developing and comparing reinforcement learning algorithms.
- [x] [Google Research Football](https://github.com/google-research/football): An RL environment based on the open-source game Gameplay Football.
- [x] [SMAC](https://github.com/oxwhirl/smac): An environment for research in the field of collaborative multi-agent reinforcement learning (MARL) based on Blizzard's StarCraft II RTS game.
- [x] [PettingZoo](https://github.com/Farama-Foundation/PettingZoo): A Python library for conducting research in multi-agent reinforcement learning, akin to a multi-agent version of [Gymnasium](https://github.com/Farama-Foundation/Gymnasium).
- [ ] [DexterousHands](https://github.com/PKU-MARL/DexterousHands): An environment collection of bimanual dexterous manipulation tasks.

See [malib/envs](/malib/envs/) for more details. In addition, users can customize environments with MALib's environment interfaces; please refer to our documentation.
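To give a feel for what a custom multi-agent environment involves, here is a hypothetical sketch. The method names (`reset`, `step`) and the dict-per-agent convention are illustrative assumptions, not MALib's actual interface; see [malib/envs](/malib/envs/) for the real one.

```python
# Hypothetical custom environment sketch (names are illustrative, not
# MALib's actual interface). Each step maps a dict of per-agent actions
# to per-agent observations and rewards.
import random

class MatchingPenniesEnv:
    """Two agents simultaneously pick 0 or 1; agent_0 wins on a match."""

    def __init__(self):
        self.possible_agents = ["agent_0", "agent_1"]

    def reset(self):
        # A constant dummy observation: the game is stateless.
        return {agent: 0 for agent in self.possible_agents}

    def step(self, actions):
        match = actions["agent_0"] == actions["agent_1"]
        rewards = {"agent_0": 1 if match else -1,
                   "agent_1": -1 if match else 1}
        observations = {agent: 0 for agent in self.possible_agents}
        done = True  # one-shot game: the episode ends after a single step
        return observations, rewards, done

env = MatchingPenniesEnv()
obs = env.reset()
obs, rewards, done = env.step({"agent_0": random.randint(0, 1),
                               "agent_1": random.randint(0, 1)})
print(rewards["agent_0"] + rewards["agent_1"])  # zero-sum: always prints 0
```

The dict-keyed-by-agent pattern is also what libraries like PettingZoo use, which is why such environments slot naturally into population-based training.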
## Algorithms and Scenarios
MALib integrates population-based reinforcement learning with popular deep reinforcement learning algorithms. See the algorithms table [here](/algorithms.md). The supported learning scenarios are listed as follows:
- [x] Single-stream PSRO scenario: for single-stream population-based reinforcement learning algorithms, combined with empirical game-theoretic analysis methods. See [scenarios/psro_scenario.py](/malib/scenarios/psro_scenario.py)
- [ ] Multi-stream PSRO scenario: for multi-stream population-based reinforcement learning algorithms, combined with empirical game-theoretic analysis methods. See [scenarios/p2sro_scenario.py](/malib/scenarios/p2sro_scenario.py)
- [x] Multi-agent Reinforcement Learning scenario: for multi-/single-agent reinforcement learning, with distributed techniques. See [scenarios/marl_scenario.py](/malib/scenarios/marl_scenario.py)

## Quick Start
Before running the examples, please make sure the Python path is set as follows:
```bash
cd malib

# if you installed malib with `pip install -e .`, you can skip the path export
export PYTHONPATH=./
```

- Run the PSRO example to start training on the Kuhn Poker game: `python examples/run_psro.py`
- Run the RL example to start training on the CartPole-v1 game: `python examples/run_gym.py`

## Documentation
See the online documentation at [MALib Docs](https://malib.readthedocs.io/), or compile a local version as follows:
```bash
pip install -e .[dev]
make docs-compile
```

Then start a web server to serve the docs:
```bash
# execute following command, then the server will start at: http://localhost:8000
make docs-view
```

## Contributing
Read [CONTRIBUTING.md](/CONTRIBUTING.md) for more details.
## Citing MALib
If you use MALib in your work, please cite the accompanying [paper](https://www.jmlr.org/papers/v24/22-0169.html).
```bibtex
@article{JMLR:v24:22-0169,
author = {Ming Zhou and Ziyu Wan and Hanjing Wang and Muning Wen and Runzhe Wu and Ying Wen and Yaodong Yang and Yong Yu and Jun Wang and Weinan Zhang},
title = {MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning},
journal = {Journal of Machine Learning Research},
year = {2023},
volume = {24},
number = {150},
pages = {1--12},
url = {http://jmlr.org/papers/v24/22-0169.html}
}
```