Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
[ICML 2019] TensorFlow Code for Self-Supervised Exploration via Disagreement
https://github.com/pathak22/exploration-by-disagreement
artificial-curiosity artificial-intelligence curiosity deep-learning deep-reinforcement-learning exploration rl self-supervised tensorflow
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/pathak22/exploration-by-disagreement
- Owner: pathak22
- Created: 2019-05-20T02:00:21.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2019-06-11T19:32:50.000Z (over 5 years ago)
- Last Synced: 2024-05-22T00:03:25.993Z (9 months ago)
- Topics: artificial-curiosity, artificial-intelligence, curiosity, deep-learning, deep-reinforcement-learning, exploration, rl, self-supervised, tensorflow
- Language: Python
- Homepage:
- Size: 4.82 MB
- Stars: 121
- Watchers: 4
- Forks: 22
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
## Self-Supervised Exploration via Disagreement ##
#### [[Project Website]](https://pathak22.github.io/exploration-by-disagreement/) [[Demo Video]](https://youtu.be/POlrWt32_ec)

[Deepak Pathak*](https://people.eecs.berkeley.edu/~pathak/), [Dhiraj Gandhi*](http://www.cs.cmu.edu/~dgandhi/), [Abhinav Gupta](http://www.cs.cmu.edu/~abhinavg/)
(* equal contribution)

UC Berkeley, CMU, Facebook AI Research

This is a TensorFlow-based implementation of our [paper on self-supervised exploration via disagreement](https://pathak22.github.io/exploration-by-disagreement/). In this paper, we propose a formulation for exploration inspired by the active learning literature. Specifically, we train an ensemble of dynamics models and incentivize the agent to explore such that the disagreement among those models is maximized. This allows the agent to learn skills by exploring in a self-supervised manner, without any external reward. Notably, we further leverage the disagreement objective to optimize the agent's policy in a differentiable manner, without using reinforcement learning, which results in sample-efficient exploration (a minimal sketch of the disagreement reward follows the citation below). We demonstrate the efficacy of this formulation across a variety of benchmark environments, including stochastic Atari, MuJoCo, Unity, and a real robotic arm. If you find this work useful in your research, please cite:
```bibtex
@inproceedings{pathak19disagreement,
  Author = {Pathak, Deepak and Gandhi, Dhiraj and Gupta, Abhinav},
  Title = {Self-Supervised Exploration via Disagreement},
  Booktitle = {ICML},
  Year = {2019}
}
```
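The disagreement bonus itself is compact. Below is a minimal, illustrative NumPy sketch (the function name and array shapes are our assumptions, not the repository's API) of computing the intrinsic reward as the variance across the ensemble's next-state predictions; because this quantity is a differentiable function of the models' outputs, the policy can be optimized on it directly:

```python
import numpy as np

def disagreement_reward(ensemble_preds):
    """Illustrative only, not the repository's code.

    ensemble_preds: array of shape (num_models, feature_dim), one
    predicted next-state embedding per dynamics model for a single
    transition.
    """
    # Disagreement = variance over the ensemble axis, averaged over
    # feature dimensions; high variance means the models disagree,
    # so the state is novel and worth exploring.
    return np.var(ensemble_preds, axis=0).mean()

# Toy example: 5 dynamics models, 16-dimensional feature embeddings.
rng = np.random.default_rng(0)
preds = rng.normal(size=(5, 16))
print(disagreement_reward(preds))  # higher disagreement => larger bonus
```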
### Installation and Usage

The following command should train a pure exploration agent on Breakout with default experiment parameters.
```bash
python run.py
```
To use more than one GPU or machine, use MPI; e.g., `mpiexec -n 8 python run.py` on an 8-GPU machine should use 1024 parallel environments to collect experience instead of the default 128.
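For example, assuming the default of 128 environments per worker stated above, an 8-worker launch scales experience collection like so:

```bash
# Each MPI worker runs the default 128 parallel environments, so
# 8 workers collect from 8 * 128 = 1024 environments at once.
mpiexec -n 8 python run.py
```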
### Other helpful pointers

- [Paper](https://pathak22.github.io/exploration-by-disagreement/resources/icml19.pdf)
- [Project Website](https://pathak22.github.io/exploration-by-disagreement/)
- [Demo Video](https://youtu.be/POlrWt32_ec)

### Acknowledgement
This repository is built off the publicly released code of [Large-Scale Study of Curiosity-driven Learning, ICLR 2019](https://github.com/openai/large-scale-curiosity).