{"id":13720099,"url":"https://github.com/rll/rllab","last_synced_at":"2025-05-14T21:09:44.588Z","repository":{"id":43230362,"uuid":"56792704","full_name":"rll/rllab","owner":"rll","description":"rllab is a framework for developing and evaluating reinforcement learning algorithms, fully compatible with OpenAI Gym.","archived":false,"fork":false,"pushed_at":"2023-06-10T20:57:52.000Z","size":1557,"stargazers_count":2957,"open_issues_count":117,"forks_count":800,"subscribers_count":162,"default_branch":"master","last_synced_at":"2025-04-06T14:06:16.331Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/rll.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2016-04-21T17:21:22.000Z","updated_at":"2025-04-02T07:27:01.000Z","dependencies_parsed_at":"2022-08-21T10:40:24.926Z","dependency_job_id":"7f2024da-289b-494d-80a6-1e62c15eccb4","html_url":"https://github.com/rll/rllab","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rll%2Frllab","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rll%2Frllab/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rll%2Frllab/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rll%2Frllab/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/rll","download_url":"https://codeload.github.com/rll/rllab/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https:
//github.com","kind":"github","repositories_count":248710437,"owners_count":21149189,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-03T01:00:59.764Z","updated_at":"2025-04-13T16:54:02.164Z","avatar_url":"https://github.com/rll.png","language":"Python","readme":"rllab is no longer under active development, but an [alliance of researchers](https://github.com/rlworkgroup/) from several universities has adopted it, and now maintains it under the name [**garage**](https://github.com/rlworkgroup/garage).\n\nWe recommend you develop new projects, and rebase old ones, onto the actively-maintained [garage](https://github.com/rlworkgroup/garage) codebase, to promote reproducibility and code-sharing in RL research. The new codebase shares almost all of its code with rllab, so most conversions only need to edit package import paths and perhaps update some renamed functions. \n\n[garage](https://github.com/rlworkgroup/garage) is always looking for new users and contributors, so please consider contributing your rllab-based projects and improvements to the new codebase! 
Recent improvements include first-class support for TensorFlow, TensorBoard integration, new algorithms including PPO and DDPG, updated Docker images, new environment wrappers, many updated dependencies, and stability improvements.\n\n[![Docs](https://readthedocs.org/projects/rllab/badge)](http://rllab.readthedocs.org/en/latest/)\n[![Circle CI](https://circleci.com/gh/rllab/rllab.svg?style=shield)](https://circleci.com/gh/rllab/rllab)\n[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/rllab/rllab/blob/master/LICENSE)\n[![Join the chat at https://gitter.im/rllab/rllab](https://badges.gitter.im/rllab/rllab.svg)](https://gitter.im/rllab/rllab?utm_source=badge\u0026utm_medium=badge\u0026utm_campaign=pr-badge\u0026utm_content=badge)\n\n# rllab\n\nrllab is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks plus implementations of the following algorithms:\n\n- [REINFORCE](https://github.com/rllab/rllab/blob/master/rllab/algos/vpg.py)\n- [Truncated Natural Policy Gradient](https://github.com/rllab/rllab/blob/master/rllab/algos/tnpg.py)\n- [Reward-Weighted Regression](https://github.com/rllab/rllab/blob/master/rllab/algos/erwr.py)\n- [Relative Entropy Policy Search](https://github.com/rllab/rllab/blob/master/rllab/algos/reps.py)\n- [Trust Region Policy Optimization](https://github.com/rllab/rllab/blob/master/rllab/algos/trpo.py)\n- [Cross Entropy Method](https://github.com/rllab/rllab/blob/master/rllab/algos/cem.py)\n- [Covariance Matrix Adaptation Evolution Strategy](https://github.com/rllab/rllab/blob/master/rllab/algos/cma_es.py)\n- [Deep Deterministic Policy Gradient](https://github.com/rllab/rllab/blob/master/rllab/algos/ddpg.py)\n\nrllab is fully compatible with [OpenAI Gym](https://gym.openai.com/). See [here](http://rllab.readthedocs.io/en/latest/user/gym_integration.html) for instructions and examples.\n\nrllab only officially supports Python 3.5+. 
For an older snapshot of rllab on Python 2, please use the [py2 branch](https://github.com/rllab/rllab/tree/py2).\n\nrllab comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the [documentation](https://rllab.readthedocs.io/en/latest/user/cluster.html) for details.\n\nThe main modules use [Theano](http://deeplearning.net/software/theano/) as the underlying framework, and we have support for TensorFlow under [sandbox/rocky/tf](https://github.com/openai/rllab/tree/master/sandbox/rocky/tf).\n\n# Documentation\n\nDocumentation is available online: [https://rllab.readthedocs.org/en/latest/](https://rllab.readthedocs.org/en/latest/).\n\n# Citing rllab\n\nIf you use rllab for academic research, you are highly encouraged to cite the following paper:\n\n- Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel. \"[Benchmarking Deep Reinforcement Learning for Continuous Control](http://arxiv.org/abs/1604.06778)\". _Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016._\n\n# Credits\n\nrllab was originally developed by Rocky Duan (UC Berkeley / OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley / OpenAI), John Schulman (UC Berkeley / OpenAI), and Pieter Abbeel (UC Berkeley / OpenAI). 
The library continues to be jointly developed by people at OpenAI and UC Berkeley.\n\n# Slides\n\nSlides presented at ICML 2016: https://www.dropbox.com/s/rqtpp1jv2jtzxeg/ICML2016_benchmarking_slides.pdf?dl=0\n","funding_links":[],"categories":["Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL)","Papers","Algorithm Repos","Python","Table of Contents"],"sub_categories":["RL/DRL Algorithm Implementations and Software Frameworks","Classic Exploration RL Papers","Libraries"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frll%2Frllab","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Frll%2Frllab","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Frll%2Frllab/lists"}