Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/rmst/ddpg
TensorFlow implementation of the DDPG algorithm from the paper Continuous Control with Deep Reinforcement Learning (ICLR 2016)
deep-learning reinforcement-learning tensorflow
- Host: GitHub
- URL: https://github.com/rmst/ddpg
- Owner: rmst
- License: mit
- Created: 2016-05-16T14:02:59.000Z (over 8 years ago)
- Default Branch: master
- Last Pushed: 2018-02-16T16:57:33.000Z (over 6 years ago)
- Last Synced: 2024-08-08T23:20:12.742Z (3 months ago)
- Topics: deep-learning, reinforcement-learning, tensorflow
- Language: Jupyter Notebook
- Homepage:
- Size: 13.5 MB
- Stars: 208
- Watchers: 16
- Forks: 64
- Open Issues: 4
- Metadata Files:
  - Readme: README.md
  - License: LICENSE.md
README
# Deep Deterministic Policy Gradient
__Warning: This repo is no longer maintained. For a more recent (and improved) implementation of DDPG see https://github.com/openai/baselines/tree/master/baselines/ddpg .__
Paper: ["Continuous control with deep reinforcement learning" - TP Lillicrap, JJ Hunt et al., 2015](http://arxiv.org/abs/1509.02971)
### Installation
Install [Gym](https://github.com/openai/gym#installation) and [TensorFlow](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html). Then:
```bash
pip install pyglet  # required for gym rendering
pip install jupyter # required only for visualization (see below)
git clone https://github.com/SimonRamstedt/ddpg.git # get ddpg
```
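A quick sanity check that the installation worked, assuming the era-appropriate APIs this repo was written against (old Gym and TF 0.x-style imports); `Pendulum-v0` is used here only because it needs no MuJoCo setup.

```python
# Hedged sanity check for the installation (old Gym / TF APIs assumed).
import gym
import tensorflow as tf

print("TensorFlow", tf.__version__)

env = gym.make("Pendulum-v0")          # simple env, no MuJoCo required
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print("observation shape:", obs.shape, "reward:", reward)
env.close()
```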
### Usage
Example:
```bash
python run.py --outdir ../ddpg-results/experiment1 --env InvertedDoublePendulum-v1
```
Enter `python run.py -h` to get a complete overview. If you want to run in the cloud or on a university cluster, [this](https://github.com/SimonRamstedt/ddpg-darmstadt) might contain additional information.
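If you want to queue several experiments, a small launcher can call `run.py` in a loop. This is only a sketch and not a script shipped with the repo; it relies solely on the `--outdir` and `--env` flags shown above, and the environment list is illustrative.

```python
# Hypothetical launcher for sequential experiments (not part of this repo).
import subprocess

ENVS = ["InvertedDoublePendulum-v1", "InvertedPendulum-v1"]  # illustrative

for i, env_id in enumerate(ENVS):
    outdir = "../ddpg-results/experiment{}".format(i + 1)
    cmd = ["python", "run.py", "--outdir", outdir, "--env", env_id]
    print("launching:", " ".join(cmd))
    subprocess.check_call(cmd)  # blocks until each run finishes
```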
### Visualization
Example:
```bash
python dashboard.py --exdir ../ddpg-results/+
```
Enter `python dashboard.py -h` to get a complete overview.
### Known issues
- No batch normalization yet
- No conv nets yet (i.e. only learning from low dimensional states)
- No proper seeding for reproducibility (a hedged seeding sketch follows this list)

*Please write me or open a GitHub issue if you encounter problems! Contributions are welcome!*
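Regarding the seeding issue, here is a hedged sketch of how reproducible seeding is typically wired in; it is not code from this repo, and the environment id and `SEED` constant are placeholders.

```python
# Sketch of reproducible seeding (not from this repo); seeds Python, NumPy,
# TensorFlow and, where the Gym version supports it, the environment.
import random

import numpy as np
import tensorflow as tf
import gym

SEED = 0
random.seed(SEED)
np.random.seed(SEED)
tf.set_random_seed(SEED)              # graph-level seed in the TF 0.x/1.x API

env = gym.make("Pendulum-v0")         # placeholder environment
if hasattr(env, "seed"):              # seed() only exists in newer Gym releases
    env.seed(SEED)
```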
### Improvements beyond the original paper
- [Output normalization](http://www0.cs.ucl.ac.uk/staff/d.silver/web/Publications_files/popart.pdf) – the main reason for divergence is variation in return scales; output normalization would probably solve this.
- [Prioritized experience replay](http://arxiv.org/abs/1511.05952) – faster learning, better performance especially with sparse rewards – *Please write if you have/know of an implementation!* (a sketch of the idea follows this list)
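For reference, a minimal sketch of proportional prioritized experience replay in the spirit of Schaul et al.; it is not taken from this repo, omits the importance-sampling correction from the paper, and every name and constant here is illustrative.

```python
# Minimal proportional prioritized replay sketch (illustrative only).
import numpy as np

class PrioritizedReplay(object):
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha              # how strongly priorities skew sampling
        self.eps = eps                  # keeps every priority strictly positive
        self.data, self.priorities = [], []

    def add(self, transition):
        # New transitions get the current max priority so they are replayed soon.
        p = max(self.priorities) if self.priorities else 1.0
        self.data.append(transition)
        self.priorities.append(p)
        if len(self.data) > self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)

    def sample(self, batch_size):
        p = np.asarray(self.priorities) ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=probs)
        return idx, [self.data[i] for i in idx]

    def update(self, idx, td_errors):
        # Priority proportional to the magnitude of the TD error.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(float(err)) + self.eps

# Usage sketch: buf.add((s, a, r, s2)); idx, batch = buf.sample(64); buf.update(idx, errs)
# A production version would use a sum-tree and a ring buffer instead of lists.
```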
### Advanced Usage
Remote execution:
```bash
python run.py --outdir [email protected]:/some/remote/directory/+ --env InvertedDoublePendulum-v1
```