[1]: https://github.com/google/jax
[2]: https://arxiv.org/abs/2104.06272
# Dopamax
Dopamax is a library containing pure [JAX][1] implementations of common reinforcement learning algorithms. _Everything_
is implemented in JAX, including the environments. This allows for extremely fast training and evaluation of agents,
because the entire loop of environment simulation, agent interaction, and policy updates can be compiled as a single
XLA program and executed on CPUs, GPUs, or TPUs. More specifically, the implementations in Dopamax follow the
Anakin Podracer architecture -- see [this paper][2] for more details.
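To make the single-XLA-program idea concrete, here is a minimal, hypothetical sketch (not Dopamax's actual API; the toy environment and policy below are stand-ins) of how an entire rollout can be compiled end to end with `jax.jit` and `jax.lax.scan`:

```python
import jax
import jax.numpy as jnp

NUM_STEPS = 100  # episode length, fixed so the loop can be compiled statically

def env_step(state, action):
    # Hypothetical toy environment: the state drifts toward the action.
    next_state = state + 0.1 * (action - state)
    reward = -jnp.abs(next_state)
    return next_state, reward

def policy(params, state):
    # Hypothetical toy linear policy.
    return params * state

@jax.jit
def rollout_return(params, init_state):
    def step(state, _):
        action = policy(params, state)
        next_state, reward = env_step(state, action)
        return next_state, reward

    # lax.scan keeps the whole interaction loop inside one XLA program,
    # so there is no Python overhead per environment step.
    _, rewards = jax.lax.scan(step, init_state, None, length=NUM_STEPS)
    return rewards.sum()

# Because everything is a pure, jitted function, the whole loop can also be
# differentiated and the resulting train step compiled as one program.
grad_fn = jax.jit(jax.grad(rollout_return))
print(grad_fn(jnp.array(1.0), jnp.array(0.5)))
```

## Supported Algorithms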
- [Proximal Policy Optimization (PPO)](src/dopamax/agents/anakin/ppo.py)
- [Deep Q-Network (DQN)](src/dopamax/agents/anakin/dqn.py)
- [Deep Deterministic Policy Gradients (DDPG)](src/dopamax/agents/anakin/ddpg.py)
- [Twin Delayed DDPG (TD3)](src/dopamax/agents/anakin/ddpg.py)
- [Soft Actor-Critic (SAC)](src/dopamax/agents/anakin/sac.py)
- [AlphaZero](src/dopamax/agents/anakin/alphazero.py)

## Installation
Dopamax can be installed with:
```bash
pip install dopamax
```

This will install the `dopamax` Python package, as well as a command-line interface (CLI) for training and evaluation.
Note that only the CPU version of JAX is installed by default. If you would like to use a GPU or TPU, you will need to
install the appropriate version of JAX. See the
[JAX installation instructions](https://github.com/google/jax#installation).
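If you are unsure which backend your installation will use, plain JAX (independent of Dopamax) can tell you:

```python
import jax

# Lists the devices JAX can see; with only the default CPU wheel installed,
# this will show CPU devices, while a GPU/TPU install shows accelerators.
print(jax.devices())

# The default backend that computations will run on: "cpu", "gpu", or "tpu".
print(jax.default_backend())
```

> [!NOTE]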
> The above command will install the latest "release" of Dopamax, which may not necessarily align with the latest
> commit in the main branch. To install the version found in the main branch of this repository, you can use:
> ```bash
> pip install git+https://github.com/rystrauss/dopamax.git
> ```

## Usage
After installation, the Dopamax CLI can be used to train and evaluate agents:
```bash
dopamax --help
```

Dopamax uses [Weights and Biases (W&B)](https://wandb.ai/site) for logging and artifact management. Before using the CLI
for training and evaluation, you must first make sure you have a W&B account (it's free) and have authenticated
with `wandb login`.

### Training
Agents can be trained with the `dopamax train` command, which requires a configuration file. The
configuration file is a YAML file that specifies the agent, environment, and training hyperparameters. You can find
examples in the [examples](examples) directory. For example, to train a PPO agent on the CartPole environment, you would
run:

```bash
dopamax train --config examples/ppo-cartpole/config.yaml
```

Note that all of the example config files have a random seed specified, so you will get the same result every time you
run the command. The seeds provided in the examples are known to result in a successful run (with the given
hyperparameters). To get different results on each run, you can remove the seed from the config file.
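This reproducibility is what JAX's explicit, functional random-number handling gives you: the same integer seed (presumably fed to `jax.random.PRNGKey` internally) always produces the same stream of random numbers. A minimal illustration in plain JAX, independent of Dopamax:

```python
import jax

# Two keys built from the same seed produce bitwise-identical random streams.
key_a = jax.random.PRNGKey(42)
key_b = jax.random.PRNGKey(42)
print(jax.random.normal(key_a, (3,)))  # identical output...
print(jax.random.normal(key_b, (3,)))  # ...to this one

# Splitting a key is how independent streams are derived, deterministically.
key, subkey = jax.random.split(key_a)
print(jax.random.uniform(subkey))
```

### Evaluation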
Once you have trained some agents, you can evaluate them using the `dopamax evaluate` command. This will allow you to
specify a W&B agent artifact that you'd like to evaluate (these artifacts are produced by the training runs and
contain the agent hyperparameters and weights from the end of training). For example, to evaluate a PPO agent trained
on CartPole, you might use a command like:

```bash
dopamax evaluate --agent_artifact CartPole-PPO-agent:v0 --num_episodes 100
```

where `--num_episodes 100` signals that you would like to roll out the agent's policy for 100 episodes. The minimum,
mean, and maximum episode reward will be logged back to W&B. If you would additionally like to render the episodes and
have them logged back to W&B, you can provide the `--render` flag. Note that this will usually slow evaluation down
significantly, since environment rendering is not a pure JAX function and requires callbacks to the host. You should
usually only use the `--render` flag with a small number of episodes.
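The slowdown is the cost of leaving the compiled program: each render forces a round trip from the device back to Python. A hypothetical sketch of the mechanism, using `jax.debug.callback` rather than Dopamax's actual rendering code:

```python
import jax
import jax.numpy as jnp

def render_on_host(state):
    # Stand-in for a real renderer: runs as ordinary Python on the host,
    # outside the compiled XLA program.
    print(f"rendering state: {state}")

@jax.jit
def eval_step(state):
    next_state = state * 0.99  # stand-in for one environment step
    # Each callback interrupts the device computation to call back into
    # Python, which is why rendering slows evaluation down.
    jax.debug.callback(render_on_host, next_state)
    return next_state

state = jnp.array(1.0)
for _ in range(3):
    state = eval_step(state)
```

## See Also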
Some of the JAX-native packages that Dopamax relies on:
- [sotetsuk/pgx](https://github.com/sotetsuk/pgx)
- [deepmind/mctx](https://github.com/deepmind/mctx)
- [deepmind/rlax](https://github.com/deepmind/rlax)
- [google/brax](https://github.com/google/brax)