{"id":30318219,"url":"https://github.com/deepmind/scalable_agent","last_synced_at":"2025-08-17T20:09:41.024Z","repository":{"id":66141143,"uuid":"136356756","full_name":"google-deepmind/scalable_agent","owner":"google-deepmind","description":"A TensorFlow implementation of Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures.","archived":false,"fork":false,"pushed_at":"2019-03-13T04:43:25.000Z","size":54,"stargazers_count":1007,"open_issues_count":14,"forks_count":160,"subscribers_count":32,"default_branch":"master","last_synced_at":"2025-08-11T10:26:46.758Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-deepmind.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2018-06-06T16:31:31.000Z","updated_at":"2025-08-09T15:47:44.000Z","dependencies_parsed_at":"2023-06-27T01:32:49.736Z","dependency_job_id":null,"html_url":"https://github.com/google-deepmind/scalable_agent","commit_stats":null,"previous_names":["google-deepmind/scalable_agent","deepmind/scalable_agent"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/google-deepmind/scalable_agent","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fscalable_agent","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fscalable_agent/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fscalable_agent/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposit
ories/google-deepmind%2Fscalable_agent/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-deepmind","download_url":"https://codeload.github.com/google-deepmind/scalable_agent/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fscalable_agent/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":270899582,"owners_count":24664720,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-17T02:00:09.016Z","response_time":129,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-08-17T20:04:13.276Z","updated_at":"2025-08-17T20:09:41.008Z","avatar_url":"https://github.com/google-deepmind.png","language":"Python","funding_links":[],"categories":["Python"],"sub_categories":[],"readme":"# Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures\n\nThis repository contains an implementation of \"Importance Weighted Actor-Learner\nArchitectures\", along with a *dynamic batching* module. 
This is not an\nofficially supported Google product.\n\nFor a detailed description of the architecture, please read [our paper][arxiv].\nPlease cite the paper if you use the code from this repository in your work.\n\n### BibTeX\n\n```\n@inproceedings{impala2018,\n  title={IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures},\n  author={Espeholt, Lasse and Soyer, Hubert and Munos, Remi and Simonyan, Karen and Mnih, Volodymyr and Ward, Tom and Doron, Yotam and Firoiu, Vlad and Harley, Tim and Dunning, Iain and others},\n  booktitle={Proceedings of the International Conference on Machine Learning (ICML)},\n  year={2018}\n}\n```\n\n## Running the Code\n\n### Prerequisites\n\n[TensorFlow][tensorflow] \u003e=1.9.0-dev20180530, the environment\n[DeepMind Lab][deepmind_lab] and the neural network library\n[DeepMind Sonnet][sonnet]. Although we use [DeepMind Lab][deepmind_lab] in this\nrelease, the agent has been successfully applied to other domains such as\n[Atari][arxiv] and [Street View][learning_nav], and has been modified to\n[generate images][generate_images].\n\nWe include a [Dockerfile][dockerfile] that serves as a reference for the\nprerequisites and commands needed to run the code.\n\n### Single Machine Training on a Single Level\n\nTraining on `explore_goal_locations_small`. Most runs should end up with average\nepisode returns around 200 or around 250 after 1B frames.\n\n```sh\npython experiment.py --num_actors=48 --batch_size=32\n```\n\nAdjust the number of actors (i.e. number of environments) and batch size to\nmatch the size of the machine it runs on. A single actor, including DeepMind\nLab, requires a few hundred MB of RAM.\n\n### Distributed Training on DMLab-30\n\nTraining on the full [DMLab-30][dmlab30]. Across 10 runs with different seeds\n(`--seed=[seed]`) but identical hyperparameters, we observed capped\nhuman-normalized training scores between 45 and 50. Test scores are usually\n~2% lower in absolute terms.\n\n
#### Learner\n\n```sh\npython experiment.py --job_name=learner --task=0 --num_actors=150 \\\n    --level_name=dmlab30 --batch_size=32 --entropy_cost=0.0033391318945337044 \\\n    --learning_rate=0.00031866995608948655 \\\n    --total_environment_frames=10000000000 --reward_clipping=soft_asymmetric\n```\n\n#### Actor(s)\n\n```sh\nfor i in $(seq 0 149); do\n  python experiment.py --job_name=actor --task=$i \\\n      --num_actors=150 --level_name=dmlab30 --dataset_path=[...] \u0026\ndone;\nwait\n```\n\n#### Test Score\n\n```sh\npython experiment.py --mode=test --level_name=dmlab30 --dataset_path=[...] \\\n    --test_num_episodes=10\n```\n\n[arxiv]: https://arxiv.org/abs/1802.01561\n[deepmind_lab]: https://github.com/deepmind/lab\n[sonnet]: https://github.com/deepmind/sonnet\n[learning_nav]: https://arxiv.org/abs/1804.00168\n[generate_images]: https://deepmind.com/blog/learning-to-generate-images/\n[tensorflow]: https://github.com/tensorflow/tensorflow\n[dockerfile]: Dockerfile\n[dmlab30]: https://github.com/deepmind/lab/tree/master/game_scripts/levels/contributed/dmlab30\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepmind%2Fscalable_agent","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeepmind%2Fscalable_agent","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepmind%2Fscalable_agent/lists"}