https://github.com/zombie-einstein/esquilax
JAX Multi-Agent RL, Neuro-Evolution, and A-Life Library
- Host: GitHub
- URL: https://github.com/zombie-einstein/esquilax
- Owner: zombie-einstein
- License: MIT
- Created: 2024-09-09T20:21:39.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-07-02T15:45:42.000Z (7 months ago)
- Last Synced: 2025-07-02T16:42:33.782Z (7 months ago)
- Topics: alife, jax, multi-agent, multi-agent-reinforcement-learning, multi-agent-simulation, multi-agent-systems, neuroevolution, reinforcement-learning, reinforcement-learning-environments, simulation
- Language: Python
- Homepage: https://zombie-einstein.github.io/esquilax/
- Size: 639 KB
- Stars: 10
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
JAX Multi-Agent RL, A-Life, and Simulation Framework
Esquilax is a set of transformations and utilities
intended to allow developers and researchers to
quickly implement models of multi-agent systems
for RL training, evolutionary methods, and A-life.
It is intended for systems involving large numbers of
agents, and is designed to work alongside other JAX packages
like [Flax](https://github.com/google/flax) and
[Evosax](https://github.com/RobertTLange/evosax).
**Full documentation can be found
[here](https://zombie-einstein.github.io/esquilax/)**
## Features
- ***Built on top of JAX***
  Esquilax inherits the benefits of JAX (high performance,
  built-in GPU support, etc.) and can interoperate
  with existing JAX ML and RL libraries.
- ***Interaction Algorithm Implementations***
Implements common agent interaction patterns. This
allows users to concentrate on model design instead of low-level
algorithm implementation details.
- ***Scale and Performance***
  JIT compilation and GPU support enable simulations and multi-agent
  systems containing large numbers of agents whilst maintaining
  performance and training throughput.
- ***Functional Patterns***
  Esquilax is designed around functional patterns, ensuring models
  can be readily parallelised, while also aiding composition
  and readability.
- ***Built-in RL and Evolutionary Training***
Esquilax provides functionality for running multi-agent RL
and multi-strategy neuro-evolution training, within Esquilax
simulations.
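As an illustration of the functional, data-parallel style these features enable, here is a minimal sketch in plain JAX. Note this is not Esquilax's own API (see the documentation linked above for that); it only shows the general pattern of a jitted function advancing every agent in parallel.

```python
import jax
import jax.numpy as jnp

# Sketch (not Esquilax's actual API): agent state is an array,
# and one jitted function advances all agents in parallel from
# the previous global state.
@jax.jit
def step(positions):
    # Each agent observes the population mean and moves halfway
    # toward it; the arithmetic is elementwise, so every agent
    # updates simultaneously on CPU or GPU.
    target = jnp.mean(positions)
    return positions + 0.5 * (target - positions)

positions = jnp.array([0.0, 1.0, 2.0, 9.0])
positions = step(positions)  # -> [1.5, 2.0, 2.5, 6.0]
```

Because `step` is a pure function of the state array, it composes cleanly with `jax.jit`, `jax.vmap`, and optimisation loops from libraries like Flax or Evosax.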
## Should I Use Esquilax?
Esquilax is intended for time-stepped models of large-scale systems
with fixed numbers of entities, where state is updated in parallel.
As such you should probably *not* use Esquilax if:
- You want something other than stepped updates, e.g.
  continuous-time or event-driven models, or models where agents
  update in sequence.
- You need variable numbers of entities or temporary entities, e.g.
message passing.
- You need a high-fidelity physics/robotics simulation.
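To make the supported pattern concrete: a time-stepped parallel update computes every agent's next state from the *previous* global state, rather than updating agents one after another. A plain-Python sketch, where `interact` and `update` are hypothetical helpers standing in for a model's observation and state-transition logic:

```python
def step(states, interact, update):
    """Advance a fixed population one step with parallel semantics:
    all observations are taken from the old state before any agent
    is updated, so the result is order-independent."""
    observations = [interact(i, states) for i in range(len(states))]
    return [update(s, o) for s, o in zip(states, observations)]

# Example: each agent moves halfway toward the mean of the others.
def mean_of_others(i, states):
    others = [s for j, s in enumerate(states) if j != i]
    return sum(others) / len(others)

def move_toward(state, target):
    return state + 0.5 * (target - state)

positions = [0.0, 1.0, 2.0, 9.0]
positions = step(positions, mean_of_others, move_toward)
```

A sequential update (mutating `states` in place while iterating) would give a different, order-dependent result; that is the style of model Esquilax does *not* target.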
## Getting Started
Esquilax can be installed from PyPI using pip:
``` bash
pip install esquilax
```
The requirements for evolutionary and RL training are
not installed by default. They can be installed using the `evo` and `rl`
extras respectively, e.g.:
```bash
pip install esquilax[evo]
```
You may need to install `jaxlib` manually, in particular for GPU support.
Installation instructions for JAX can be found
[here](https://github.com/google/jax?tab=readme-ov-file#installation).
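Once installed, a quick way to confirm that JAX found a usable backend (these are standard JAX calls, not Esquilax-specific):

```python
import jax

# Lists the devices JAX will run on; with a correctly installed
# GPU build of jaxlib this should include GPU devices.
print(jax.devices())
print(jax.default_backend())  # e.g. "cpu", "gpu", or "tpu"
```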
## Examples
Example models and multi-agent policy training implemented using Esquilax
can be found [here](https://github.com/zombie-einstein/esquilax/tree/main/examples). A virtual environment with additional
dependencies for the examples can be set up using [poetry](https://python-poetry.org/)
with
```bash
poetry install --extras all --with examples
```
For projects using Esquilax, see
[Floxs](https://github.com/zombie-einstein/floxs), a collection of multi-agent
RL flock/swarm environments, or
[this](https://github.com/instadeepai/jumanji/tree/main/jumanji/environments/swarms/search_and_rescue)
multi-agent RL environment, part of the [Jumanji](https://github.com/instadeepai/jumanji)
RL environment library.
## Contributing
### Issues
Please report any issues or feature suggestions
[here](https://github.com/zombie-einstein/esquilax/issues).
### Developers
Developer notes can be found
[here](https://github.com/zombie-einstein/esquilax/blob/main/.github/docs/developers.md).
Esquilax is under active development and contributions are very welcome!