{"id":13837120,"url":"https://github.com/coreylynch/async-rl","last_synced_at":"2025-04-12T22:22:55.330Z","repository":{"id":142674530,"uuid":"56199835","full_name":"coreylynch/async-rl","owner":"coreylynch","description":"Tensorflow + Keras + OpenAI Gym implementation of 1-step Q Learning from  \"Asynchronous Methods for Deep Reinforcement Learning\"","archived":false,"fork":false,"pushed_at":"2018-03-18T04:55:56.000Z","size":1096,"stargazers_count":1012,"open_issues_count":22,"forks_count":172,"subscribers_count":67,"default_branch":"master","last_synced_at":"2025-04-04T01:24:50.650Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/coreylynch.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2016-04-14T02:05:24.000Z","updated_at":"2025-04-03T23:26:47.000Z","dependencies_parsed_at":null,"dependency_job_id":"7a4c4286-3dac-44fc-b230-5808c5a31895","html_url":"https://github.com/coreylynch/async-rl","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/coreylynch%2Fasync-rl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/coreylynch%2Fasync-rl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/coreylynch%2Fasync-rl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/coreylynch%2Fasync-rl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/coreylynch","download_url":"https://codeload.github.com/coreylynch/async-rl/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248638608,"owners_count":21137690,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-04T15:01:01.394Z","updated_at":"2025-04-12T22:22:55.306Z","avatar_url":"https://github.com/coreylynch.png","language":"Python","readme":"# Asyncronous RL in Tensorflow + Keras + OpenAI's Gym\n\n![](http://g.recordit.co/BeiqC9l70B.gif)\n\nThis is a Tensorflow + Keras implementation of asyncronous 1-step Q learning as described in [\"Asynchronous Methods for Deep Reinforcement Learning\"](http://arxiv.org/pdf/1602.01783v1.pdf).\n\nSince we're using multiple actor-learner threads to stabilize learning in place of experience replay (which is super memory intensive), this runs comfortably on a macbook w/ 4g of ram.\n\nIt uses Keras to define the deep q network (see model.py), OpenAI's gym library to interact with the Atari Learning Environment (see atari_environment.py), and Tensorflow for optimization/execution (see async_dqn.py).\n\n## 
## Requirements
* [tensorflow](https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html)
* [gym](https://github.com/openai/gym#installation)
* [gym's Atari environment](https://github.com/openai/gym#atari)
* skimage
* Keras

## Usage
### Training
To kick off training, run:
```
python async_dqn.py --experiment breakout --game "Breakout-v0" --num_concurrent 8
```
Here we're organizing the outputs for the current experiment under a folder called 'breakout', choosing "Breakout-v0" as our gym environment, and running 8 actor-learner threads concurrently. See [this](https://gym.openai.com/envs#atari) for a full list of game names you can hand to --game.

### Visualizing training with tensorboard
We collect per-episode reward stats and max Q-values, which can be visualized with tensorboard by running:
```
tensorboard --logdir /tmp/summaries/breakout
```
This is what my per-episode reward and average max Q-value curves looked like over the training period:
![](https://github.com/coreylynch/async-rl/blob/master/resources/episode_reward.png)
![](https://github.com/coreylynch/async-rl/blob/master/resources/max_q_value.png)

### Evaluation
To run a gym evaluation, set the --testing flag to True and hand in a checkpoint file:
```
python async_dqn.py --experiment breakout --testing True --checkpoint_path /tmp/breakout.ckpt-2690000 --num_eval_episodes 100
```
After the eval completes, we can upload the results to OpenAI's site as follows:
```python
import gym
gym.upload('/tmp/breakout/eval', api_key='YOUR_API_KEY')
```
Now we can find the eval at https://gym.openai.com/evaluations/eval_uwwAN0U3SKSkocC0PJEwQ

### Next Steps
See a3c.py for a work-in-progress asynchronous advantage actor-critic (A3C) implementation.

## Resources
I found these super helpful as general background material for deep RL:

* [David Silver's "Deep Reinforcement Learning" lecture](http://videolectures.net/rldm2015_silver_reinforcement_learning/)
* [Nervana's Demystifying Deep Reinforcement Learning blog post](http://www.nervanasys.com/demystifying-deep-reinforcement-learning/)

## Important notes
* In the paper the authors mention that "for asynchronous methods we average over the best 5 models from **50 experiments**". I overlooked this point when I was writing this, but I think it's important. These async methods seem to vary a lot in performance from run to run (at least in my implementation of them!). It's a good idea to run multiple seeded versions at the same time and average over their performance to get a clear picture of whether an architectural change actually helps. Equivalently, don't get discouraged if you don't see good performance on your task right away; try rerunning the same code a few more times with different seeds.
* This repo has no affiliation with DeepMind or the authors; it was just a simple project I was using to learn TensorFlow. Feedback is highly appreciated.
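## Aside: the 1-step Q-learning target

For anyone reading async_dqn.py, the math at its heart is the standard 1-step Q-learning target from the paper: y = r + gamma * max_a' Q(s', a'; theta^-), or simply y = r when s' is terminal, where theta^- are the periodically-copied target network parameters. Here's a minimal NumPy sketch of that target computation (the names are illustrative only; the repo builds the equivalent graph with TensorFlow ops):

```python
import numpy as np

def one_step_q_targets(rewards, terminals, next_q, gamma=0.99):
    """Targets y_i = r_i for terminal transitions, else r_i + gamma * max_a Q_target(s'_i, a).

    rewards:   (batch,) one-step rewards
    terminals: (batch,) 1.0 where the episode ended at this transition, else 0.0
    next_q:    (batch, num_actions) target-network Q-values at the next states
    """
    return rewards + gamma * (1.0 - terminals) * next_q.max(axis=1)
```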