{"id":47251415,"url":"https://google-deepmind.github.io/disco_rl/","last_synced_at":"2026-03-28T20:00:43.208Z","repository":{"id":327262264,"uuid":"1025674319","full_name":"google-deepmind/disco_rl","owner":"google-deepmind","description":"Accompanying code for \"Discovering State-of-the-art Reinforcement Algorithms\" Nature publication","archived":false,"fork":false,"pushed_at":"2025-12-02T16:58:41.000Z","size":15668,"stargazers_count":370,"open_issues_count":1,"forks_count":26,"subscribers_count":5,"default_branch":"main","last_synced_at":"2025-12-05T15:30:17.029Z","etag":null,"topics":["ai","meta-learning","nature","reinforcement-learning","reinforcement-learning-algorithms"],"latest_commit_sha":null,"homepage":"https://google-deepmind.github.io/disco_rl","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-deepmind.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-07-24T16:06:21.000Z","updated_at":"2025-12-05T07:38:15.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/google-deepmind/disco_rl","commit_stats":null,"previous_names":["google-deepmind/disco_rl"],"tags_count":null,"template":false,"template_full_name":null,"purl":"pkg:github/google-deepmind/disco_rl","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fdisco_rl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-dee
pmind%2Fdisco_rl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fdisco_rl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fdisco_rl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-deepmind","download_url":"https://codeload.github.com/google-deepmind/disco_rl/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Fdisco_rl/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31120886,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-28T17:50:59.904Z","status":"ssl_error","status_checked_at":"2026-03-28T17:50:59.435Z","response_time":79,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","meta-learning","nature","reinforcement-learning","reinforcement-learning-algorithms"],"created_at":"2026-03-14T15:00:23.312Z","updated_at":"2026-03-28T20:00:43.196Z","avatar_url":"https://github.com/google-deepmind.png","language":"Python","funding_links":[],"categories":["Reinforcement Learning"],"sub_categories":["1.Basic for RL"],"readme":"# DiscoRL: Discovering State-of-the-art Reinforcement Learning Algorithms\n\nThis repository contains accompanying code for the *\"Discovering\n State-of-the-art 
Reinforcement Learning Algorithms\"* Nature publication.\n\nIt provides a minimal JAX harness for the DiscoRL setup together with the\n original meta-learned weights for the *Disco103* discovered update rule.\n\nThe harness supports both:\n\n-   **Meta-evaluation**: training an agent using the *Disco103* discovered RL\n    update rule, using the `colabs/eval.ipynb` notebook [![Open In](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/disco_rl/blob/master/colabs/eval.ipynb) and\n\n-   **Meta-training**: meta-learning an RL update rule from scratch or from a\n    pre-existing checkpoint, using the `colabs/meta_train.ipynb` notebook [![Open In](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google-deepmind/disco_rl/blob/master/colabs/meta_train.ipynb)\n\nNote that this repository will not be actively maintained moving forward.\n\n## Installation\n\nSet up a Python virtual environment and install the package:\n\n```bash\npython3 -m venv disco_rl_venv\nsource disco_rl_venv/bin/activate\npip install git+https://github.com/google-deepmind/disco_rl.git\n```\n\nThe package can also be installed from within a Colab notebook:\n\n```bash\n!pip install git+https://github.com/google-deepmind/disco_rl.git\n```\n\n## Usage\n\nThe code is structured as follows:\n\n* `environments/` contains the general interface for the environments that can\n  be used with the provided harness, and two implementations of `Catch`:\n  a CPU-based one and a jittable one;\n\n* `networks/` includes a simple MLP network and LSTM-based components of the\n  DiscoRL models, all implemented in Haiku;\n\n* `update_rules/` has implementations of the discovered rules, actor-critic, and\n  policy gradient;\n\n* `value_fns/` contains value-function related utilities;\n\n* `types.py`, `utils.py`, and `optimizers.py` implement basic functionality for\n  the harness;\n\n* `agent.py` is a generic implementation of an RL agent which uses the update\n  rule's API for training and is therefore compatible with all the rules in\n  `update_rules/`.\n\nDetailed examples of usage can be found in the Colab notebooks above.\n\n## Citation\n\nPlease cite the original Nature paper:\n\n```\n@Article{DiscoRL2025,\n  author  = {Oh, Junhyuk and Farquhar, Greg and Kemaev, Iurii and Calian, Dan A. and Hessel, Matteo and Zintgraf, Luisa and Singh, Satinder and van Hasselt, Hado and Silver, David},\n  journal = {Nature},\n  title   = {Discovering State-of-the-art Reinforcement Learning Algorithms},\n  year    = {2025},\n  doi     = {10.1038/s41586-025-09761-x}\n}\n```\n\n## License and disclaimer\n\nCopyright 2025 Google LLC\n\nAll software is licensed under the Apache License, Version 2.0 (Apache 2.0);\nyou may not use this file except in compliance with the Apache 2.0 license.\nYou may obtain a copy of the Apache 2.0 license at:\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nAll other materials are licensed under the Creative Commons Attribution 4.0\nInternational License (CC-BY). You may obtain a copy of the CC-BY license at:\nhttps://creativecommons.org/licenses/by/4.0/legalcode\n\nUnless required by applicable law or agreed to in writing, all software and\nmaterials distributed here under the Apache 2.0 or CC-BY licenses are\ndistributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,\neither express or implied. See the licenses for the specific language governing\npermissions and limitations under those licenses.\n\nThis is not an official Google product.\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/google-deepmind.github.io%2Fdisco_rl%2F","html_url":"https://awesome.ecosyste.ms/projects/google-deepmind.github.io%2Fdisco_rl%2F","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/google-deepmind.github.io%2Fdisco_rl%2F/lists"}