{"id":13936187,"url":"https://github.com/pathak22/noreward-rl","last_synced_at":"2025-05-16T03:04:55.740Z","repository":{"id":20956310,"uuid":"91322593","full_name":"pathak22/noreward-rl","owner":"pathak22","description":"[ICML 2017] TensorFlow code for Curiosity-driven Exploration for Deep Reinforcement Learning","archived":false,"fork":false,"pushed_at":"2022-12-07T23:59:41.000Z","size":4870,"stargazers_count":1428,"open_issues_count":35,"forks_count":301,"subscribers_count":62,"default_branch":"master","last_synced_at":"2025-04-08T13:11:20.078Z","etag":null,"topics":["curiosity","deep-learning","deep-neural-networks","deep-reinforcement-learning","doom","exploration","mario","openai-gym","rl","self-supervised","tensorflow"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pathak22.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-05-15T09:57:27.000Z","updated_at":"2025-04-08T07:38:42.000Z","dependencies_parsed_at":"2023-01-12T03:31:07.090Z","dependency_job_id":null,"html_url":"https://github.com/pathak22/noreward-rl","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pathak22%2Fnoreward-rl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pathak22%2Fnoreward-rl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pathak22%2Fnoreward-rl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pathak22%2Fnoreward-rl/manifests","owner_url":"https://repos.ecosyst
e.ms/api/v1/hosts/GitHub/owners/pathak22","download_url":"https://codeload.github.com/pathak22/noreward-rl/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254459087,"owners_count":22074605,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["curiosity","deep-learning","deep-neural-networks","deep-reinforcement-learning","doom","exploration","mario","openai-gym","rl","self-supervised","tensorflow"],"created_at":"2024-08-07T23:02:27.209Z","updated_at":"2025-05-16T03:04:50.731Z","avatar_url":"https://github.com/pathak22.png","language":"Python","readme":"## Curiosity-driven Exploration by Self-supervised Prediction ##\n#### In ICML 2017 [[Project Website]](http://pathak22.github.io/noreward-rl/) [[Demo Video]](http://pathak22.github.io/noreward-rl/index.html#demoVideo)\n\n[Deepak Pathak](https://people.eecs.berkeley.edu/~pathak/), [Pulkit Agrawal](https://people.eecs.berkeley.edu/~pulkitag/), [Alexei A. Efros](https://people.eecs.berkeley.edu/~efros/), [Trevor Darrell](https://people.eecs.berkeley.edu/~trevor/)\u003cbr/\u003e\nUniversity of California, Berkeley\u003cbr/\u003e\n\n\u003cimg src=\"images/mario1.gif\" width=\"300\"\u003e    \u003cimg src=\"images/vizdoom.gif\" width=\"351\"\u003e\n\nThis is a TensorFlow-based implementation of our [ICML 2017 paper on curiosity-driven exploration for reinforcement learning](http://pathak22.github.io/noreward-rl/). The idea is to train an agent with an intrinsic curiosity-based motivation (ICM) when external rewards from the environment are sparse. 
Surprisingly, ICM can also be used when no rewards at all are available from the environment, in which case the agent learns to explore purely out of curiosity: 'RL without rewards'. If you find this work useful in your research, please cite:\n\n    @inproceedings{pathakICMl17curiosity,\n        Author = {Pathak, Deepak and Agrawal, Pulkit and\n                  Efros, Alexei A. and Darrell, Trevor},\n        Title = {Curiosity-driven Exploration by Self-supervised Prediction},\n        Booktitle = {International Conference on Machine Learning ({ICML})},\n        Year = {2017}\n    }\n\n### 1) Installation and Usage\n1.  This code is based on [TensorFlow](https://www.tensorflow.org/). To install, run these commands:\n  ```Shell\n  # you might not need many of these, e.g., fceux is only for mario\n  sudo apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb \\\n  libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig python3-dev \\\n  python3-venv make golang libjpeg-turbo8-dev gcc wget unzip git fceux virtualenv \\\n  tmux\n\n  # install the code\n  git clone -b master --single-branch https://github.com/pathak22/noreward-rl.git\n  cd noreward-rl/\n  virtualenv curiosity\n  source $PWD/curiosity/bin/activate\n  pip install numpy\n  pip install -r src/requirements.txt\n  python curiosity/src/go-vncdriver/build.py\n\n  # download models\n  bash models/download_models.sh\n\n  # setup customized doom environment\n  cd doomFiles/\n  # then follow commands in doomFiles/README.md\n  ```\n\n2. Running the demo\n  ```Shell\n  cd noreward-rl/src/\n  python demo.py --ckpt ../models/doom/doom_ICM\n  python demo.py --env-id SuperMarioBros-1-1-v0 --ckpt ../models/mario/mario_ICM\n  ```\n\n3. 
Training code\n  ```Shell\n  cd noreward-rl/src/\n  # For Doom: doom or doomSparse or doomVerySparse\n  python train.py --default --env-id doom\n\n  # For Mario, change src/constants.py as follows:\n  # PREDICTION_BETA = 0.2\n  # ENTROPY_BETA = 0.0005\n  python train.py --default --env-id mario --noReward\n\n  xvfb-run -s \"-screen 0 1400x900x24\" bash  # only for remote desktops\n  # useful xvfb link: http://stackoverflow.com/a/30336424\n  python inference.py --default --env-id doom --record\n  ```\n\n### 2) Other helpful pointers\n- [Paper](https://pathak22.github.io/noreward-rl/resources/icml17.pdf)\n- [Project Website](http://pathak22.github.io/noreward-rl/)\n- [Demo Video](http://pathak22.github.io/noreward-rl/index.html#demoVideo)\n- [Reddit Discussion](https://redd.it/6bc8ul)\n- [Media Articles (New Scientist, MIT Tech Review and others)](http://pathak22.github.io/noreward-rl/index.html#media)\n\n### 3) Acknowledgement\nVanilla A3C code is based on the open source implementation of [universe-starter-agent](https://github.com/openai/universe-starter-agent).\n","funding_links":[],"categories":["Python","Codes"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpathak22%2Fnoreward-rl","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpathak22%2Fnoreward-rl","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpathak22%2Fnoreward-rl/lists"}