https://github.com/geyang/rl-playground
Collection of RL algorithm implementations and baselines
- Host: GitHub
- URL: https://github.com/geyang/rl-playground
- Owner: geyang
- License: MIT
- Created: 2020-07-31T19:08:10.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2022-02-16T05:45:09.000Z (over 3 years ago)
- Last Synced: 2025-01-10T12:58:29.115Z (5 months ago)
- Language: Jupyter Notebook
- Size: 53.3 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Welcome to RL-Playground!
This repo contains implementations of, and baselines for, the following algorithms:
- [x] Deep Q-Network (DQN)
- [x] Proximal Policy Optimization (PPO)
- [x] Soft Actor-Critic (SAC)
- [x] Twin Delayed DDPG (TD3)

## Missing Pieces
- [ ] Hindsight Experience Replay (HER)
- [ ] pixel input

## Citation
If RL Playground has helped accelerate the publication of your paper,
please consider citing this repo with the following BibTeX entry:

```bibtex
@misc{yang2020playground,
  author = {Ge Yang},
  title  = {Playground},
  url    = {https://github.com/geyang/rl_playground},
  year   = {2019}
}
```
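As a quick orientation to the DQN baseline listed above, here is a minimal sketch of the TD-target computation at the heart of DQN and Double DQN. This is an illustrative NumPy sketch, not the repo's actual API; all function and argument names here are hypothetical:

```python
import numpy as np

def dqn_targets(rewards, next_q_online, next_q_target, dones,
                gamma=0.99, double=True):
    """Compute TD targets for DQN / Double DQN (illustrative sketch).

    rewards:       (B,)   rewards from the replay batch
    next_q_online: (B, A) online-net Q-values at the next states
    next_q_target: (B, A) target-net Q-values at the next states
    dones:         (B,)   1.0 where the episode terminated, else 0.0
    """
    if double:
        # Double DQN: pick the argmax action with the online net,
        # but evaluate its value with the target net.
        a_star = next_q_online.argmax(axis=1)
        next_v = next_q_target[np.arange(len(a_star)), a_star]
    else:
        # Vanilla DQN: max over the target net's Q-values.
        next_v = next_q_target.max(axis=1)
    # Bootstrap only for non-terminal transitions.
    return rewards + gamma * (1.0 - dones) * next_v
```

The `(1.0 - dones)` mask zeroes out bootstrapping at episode boundaries, which is the standard way to handle terminal states in replay-based Q-learning.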