https://github.com/ffelten/masac
Jax and Torch Multi-Agent SAC on PettingZoo API
- Host: GitHub
- URL: https://github.com/ffelten/masac
- Owner: ffelten
- License: mit
- Created: 2023-02-14T08:23:00.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-11-23T15:16:17.000Z (11 months ago)
- Last Synced: 2025-04-02T20:05:53.222Z (6 months ago)
- Topics: multi-agent-reinforcement-learning, pettingzoo, reinforcement-learning
- Language: Python
- Homepage:
- Size: 237 KB
- Stars: 75
- Watchers: 1
- Forks: 7
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Citation: CITATION.bib
README
# MASAC
:warning: Work in progress: the algorithms have not been extensively tested (especially the Jax version). :warning:
Simple, yet useful [Jax](https://github.com/google/jax) and [Torch](https://pytorch.org/) Multi-Agent SAC for Parallel [PettingZoo](https://pettingzoo.farama.org/) environments.
It is assumed that the agents are homogeneous (same action and observation spaces) and that they all receive the same global reward. The implementation is based on the SAC implementation from the excellent [cleanRL](https://github.com/vwxyzjn/cleanrl) repo.
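To illustrate this setting, here is a minimal sketch of a parallel PettingZoo loop in which all agents share observation/action spaces and a single team reward. `simple_spread_v3` is only an assumed example environment, not necessarily the one used by MASAC, and the random actions stand in for the learned actor.

```python
# Sketch of the assumed setting: a parallel PettingZoo environment where all
# agents are homogeneous and share a common team reward.
# simple_spread_v3 is an illustrative choice, not necessarily MASAC's env.
import numpy as np
from pettingzoo.mpe import simple_spread_v3

env = simple_spread_v3.parallel_env()
observations, infos = env.reset(seed=42)

# Homogeneity assumption: every agent has the same observation space shape.
assert len({env.observation_space(a).shape for a in env.agents}) == 1

while env.agents:
    # Random actions stand in for the (shared) MASAC actor.
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    # Shared global reward assumption: collapse per-agent rewards into one scalar.
    team_reward = float(np.mean(list(rewards.values())))
env.close()
```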
## Multi-Agent features
Shared parameters:
* Shared critic between all agents;
* Shared actor (conditioned on agent ID); a sketch of the ID-conditioning idea follows.
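The snippet below is a minimal Torch sketch of this parameter-sharing idea, assuming the agent ID is appended to the observation as a one-hot vector before it enters the shared network. The class and argument names (`SharedActor`, `obs_dim`, `n_agents`, ...) are hypothetical and do not come from the repo.

```python
# Minimal sketch (not the repo's exact code) of a shared actor conditioned on a
# one-hot agent ID: one network serves every agent, the ID disambiguates them.
import torch
import torch.nn as nn


class SharedActor(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, obs_dim: int, act_dim: int, n_agents: int, hidden: int = 256):
        super().__init__()
        self.n_agents = n_agents
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_agents, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor, agent_id: torch.Tensor):
        # agent_id: integer tensor of shape (batch,), turned into a one-hot vector
        one_hot = nn.functional.one_hot(agent_id, self.n_agents).float()
        h = self.net(torch.cat([obs, one_hot], dim=-1))
        return self.mean(h), self.log_std(h)


# A shared critic would similarly take observations and actions (plus agent IDs)
# through a single network, so its parameters are reused across all agents.
```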
## Install & run
```shell
poetry install
poetry run python masac/masac.py
```
## Citation
If you use this code for your research, please cite it using:

```bibtex
@misc{masac,
  author = {Florian Felten},
  title = {MASAC: A Multi-Agent Soft-Actor-Critic implementation for PettingZoo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/ffelten/MASAC}},
}
```