https://github.com/openai/phasic-policy-gradient
Code for the paper "Phasic Policy Gradient"
- Host: GitHub
- URL: https://github.com/openai/phasic-policy-gradient
- Owner: openai
- License: MIT
- Created: 2020-09-02T00:10:21.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2023-04-02T11:44:12.000Z (about 2 years ago)
- Last Synced: 2025-04-02T11:56:21.102Z (2 months ago)
- Language: Python
- Size: 3.29 MB
- Stars: 260
- Watchers: 6
- Forks: 57
- Open Issues: 5
Metadata Files:
- Readme: README.md
- License: LICENSE
README
**Status:** Archive (code is provided as-is, no updates expected)
# Phasic Policy Gradient
#### [[Paper]](https://arxiv.org/abs/2009.04416)
This is code for training agents using [Phasic Policy Gradient](https://arxiv.org/abs/2009.04416) [(citation)](#citation).
Supported platforms:
- macOS 10.14 (Mojave)
- Ubuntu 16.04

Supported Pythons:
- 3.7 64-bit
## Install
You can get miniconda from https://docs.conda.io/en/latest/miniconda.html if you don't have it, or install the dependencies from [`environment.yml`](environment.yml) manually.
```
git clone https://github.com/openai/phasic-policy-gradient.git
conda env update --name phasic-policy-gradient --file phasic-policy-gradient/environment.yml
conda activate phasic-policy-gradient
pip install -e phasic-policy-gradient
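# optional sanity check (assumes the phasic-policy-gradient env is active): the package should import cleanly
python -c "import phasic_policy_gradient"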
```

## Reproduce and Visualize Results
PPG with default hyperparameters (results/ppg-runN):
```
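# launch training across 4 MPI workers (adjust -np to your machine), then graph the resulting runs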
mpiexec -np 4 python -m phasic_policy_gradient.train
python -m phasic_policy_gradient.graph --experiment_name ppg
```

PPO baseline (results/ppo-runN):
```
mpiexec -np 4 python -m phasic_policy_gradient.train --n_epoch_pi 3 --n_epoch_vf 3 --n_aux_epochs 0 --arch shared
python -m phasic_policy_gradient.graph --experiment_name ppo
```

PPG, varying E_pi (results/e-pi-N):
```
mpiexec -np 4 python -m phasic_policy_gradient.train --n_epoch_pi N
python -m phasic_policy_gradient.graph --experiment_name e_pi
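# N is a placeholder; for example, with a single policy epoch (illustrative value):
mpiexec -np 4 python -m phasic_policy_gradient.train --n_epoch_pi 1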
```

PPG, varying E_aux (results/e-aux-N):
```
mpiexec -np 4 python -m phasic_policy_gradient.train --n_aux_epochs N
python -m phasic_policy_gradient.graph --experiment_name e_aux
```

PPG, varying N_pi (results/n-pi-N):
```
mpiexec -np 4 python -m phasic_policy_gradient.train --n_pi N
python -m phasic_policy_gradient.graph --experiment_name n_pi
```

PPG, using L_KL instead of L_clip (results/ppgkl-runN):
```
mpiexec -np 4 python -m phasic_policy_gradient.train --clip_param 0 --kl_penalty 1
python -m phasic_policy_gradient.graph --experiment_name ppgkl
```

PPG, single network variant (results/ppgsingle-runN):
```
mpiexec -np 4 python -m phasic_policy_gradient.train --arch detach
python -m phasic_policy_gradient.graph --experiment_name ppg_single_network
```

Pass `--normalize_and_reduce` to compute and visualize the mean normalized return with `phasic_policy_gradient.graph`.
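For example, to plot the mean normalized return for the default PPG runs (a minimal sketch, assuming the `ppg` runs from above already exist under `results/`):
```
python -m phasic_policy_gradient.graph --experiment_name ppg --normalize_and_reduce
```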
## Citation

Please cite using the following BibTeX entry:
```
@article{cobbe2020ppg,
  title={Phasic Policy Gradient},
  author={Cobbe, Karl and Hilton, Jacob and Klimov, Oleg and Schulman, John},
  journal={arXiv preprint arXiv:2009.04416},
  year={2020}
}
```