{"id":13400398,"url":"https://github.com/openai/gym","last_synced_at":"2025-05-12T03:44:32.606Z","repository":{"id":37359335,"uuid":"57222302","full_name":"openai/gym","owner":"openai","description":"A toolkit for developing and comparing reinforcement learning algorithms.","archived":false,"fork":false,"pushed_at":"2024-10-11T20:07:05.000Z","size":7123,"stargazers_count":35896,"open_issues_count":124,"forks_count":8659,"subscribers_count":1057,"default_branch":"master","last_synced_at":"2025-05-01T13:51:52.865Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://www.gymlibrary.dev","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/openai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE.md","code_of_conduct":"CODE_OF_CONDUCT.rst","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2016-04-27T14:59:16.000Z","updated_at":"2025-05-01T08:36:48.000Z","dependencies_parsed_at":"2023-02-16T10:16:03.169Z","dependency_job_id":"3dcb015d-32d9-4c83-9b1f-fe360c155911","html_url":"https://github.com/openai/gym","commit_stats":null,"previous_names":[],"tags_count":56,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openai%2Fgym","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openai%2Fgym/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openai%2Fgym/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openai%2Fgym/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/openai","download_url":"htt
ps://codeload.github.com/openai/gym/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253671518,"owners_count":21945440,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-30T19:00:51.603Z","updated_at":"2025-05-12T03:44:32.560Z","avatar_url":"https://github.com/openai.png","language":"Python","readme":"[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit\u0026logoColor=white)](https://pre-commit.com/) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n\n## Important Notice\n\n### The team that has been maintaining Gym since 2021 has moved all future development to [Gymnasium](https://github.com/Farama-Foundation/Gymnasium), a drop-in replacement for Gym (`import gymnasium as gym`), and Gym will not be receiving any future updates. Please switch over to Gymnasium as soon as you're able to do so. If you'd like to read more about the story behind this switch, please check out [this blog post](https://farama.org/Announcing-The-Farama-Foundation).\n\n## Gym\n\nGym is an open source Python library for developing and comparing reinforcement learning algorithms. It provides a standard API to communicate between learning algorithms and environments, as well as a standard set of environments compliant with that API. 
Since its release, Gym's API has become the field standard for this purpose.\n\nThe Gym documentation website is at [https://www.gymlibrary.dev/](https://www.gymlibrary.dev/), and you can propose fixes and changes to it [here](https://github.com/Farama-Foundation/gym-docs).\n\nGym also has a Discord server for development purposes that you can join here: https://discord.gg/nHg2JRN489\n\n## Installation\n\nTo install the base Gym library, use `pip install gym`.\n\nThis does not include dependencies for all families of environments (there's a massive number, and some can be problematic to install on certain systems). You can install the dependencies for one family with, e.g., `pip install gym[atari]`, or use `pip install gym[all]` to install all dependencies.\n\nWe support Python 3.7, 3.8, 3.9, and 3.10 on Linux and macOS. We will accept PRs related to Windows, but do not officially support it.\n\n## API\n\nThe Gym API models environments as simple Python `env` classes. Creating environment instances and interacting with them is very simple. Here's an example using the \"CartPole-v1\" environment:\n\n```python\nimport gym\nenv = gym.make(\"CartPole-v1\")\nobservation, info = env.reset(seed=42)\n\nfor _ in range(1000):\n    action = env.action_space.sample()\n    observation, reward, terminated, truncated, info = env.step(action)\n\n    if terminated or truncated:\n        observation, info = env.reset()\nenv.close()\n```\n\n## Notable Related Libraries\n\nPlease note that this is an incomplete list that just includes the libraries the maintainers most commonly point newcomers to when asked for recommendations.\n\n* [CleanRL](https://github.com/vwxyzjn/cleanrl) is a learning library based on the Gym API. 
It is designed to cater to newer people in the field and provides very good reference implementations.\n* [Tianshou](https://github.com/thu-ml/tianshou) is a learning library that's geared towards very experienced users and is designed to allow for easy modification of complex algorithms.\n* [RLlib](https://docs.ray.io/en/latest/rllib/index.html) is a learning library that allows for distributed training and inference and supports an extraordinarily large number of features throughout the reinforcement learning space.\n* [PettingZoo](https://github.com/Farama-Foundation/PettingZoo) is like Gym, but for environments with multiple agents.\n\n## Environment Versioning\n\nGym keeps strict versioning for reproducibility reasons. All environments end in a suffix like \"\\_v0\". When changes are made to environments that might impact learning results, the number is increased by one to prevent potential confusion.\n\n## MuJoCo Environments\n\nThe latest \"\\_v4\" and future versions of the MuJoCo environments will no longer depend on `mujoco-py`. Instead, `mujoco` will be the required dependency for future gym MuJoCo environment versions. Old gym MuJoCo environment versions that depend on `mujoco-py` will still be kept but unmaintained.\nTo install the dependencies for the latest gym MuJoCo environments, use `pip install gym[mujoco]`. Dependencies for old MuJoCo environments can still be installed with `pip install gym[mujoco_py]`.\n\n## Citation\n\nA whitepaper from when Gym was first released is available at https://arxiv.org/pdf/1606.01540 and can be cited with the following BibTeX entry:\n\n```\n@misc{1606.01540,\n  Author = {Greg Brockman and Vicki Cheung and Ludwig Pettersson and Jonas Schneider and John Schulman and Jie Tang and Wojciech Zaremba},\n  Title = {OpenAI Gym},\n  Year = {2016},\n  Eprint = {arXiv:1606.01540},\n}\n```\n\n## Release Notes\n\nThere used to be release notes for all the new Gym versions here. 
New release notes are being moved to [releases page](https://github.com/openai/gym/releases) on GitHub, like most other libraries do. Old notes can be viewed [here](https://github.com/openai/gym/blob/31be35ecd460f670f0c4b653a14c9996b7facc6c/README.rst).\n","funding_links":[],"categories":["Python","资源列表","Reinforcement Learning","AI Framework","Environments","Machine Learning","Resources and Frameworks","Libraries","Sensor Processing","Python (144)","Table of Contents","机器学习","强化学习","时间序列","Papers","3) Lenguajes y Librerias :clipboard:","Datasets","Machine Learning [🔝](#readme)","Open Source Reinforcement Learning Platforms","Machine Learning Libraries","Miscellaneous","Uncategorized","[Open AI Gym](https://gym.openai.com/)"],"sub_categories":["机器学习","NLP","Quant","Reinforcement Learning","Simulation","Machine Learning","Others","网络服务_其他","NeurIPS 2022","General-Purpose Machine Learning","Classic Exploration RL Papers","Python","ICML 2024","Automatic Plotting","ICML 2021","Meta Lifelong Reinforcement Learning","Human Computer Interaction","Frameworks","Drone Frames","Related","Uncategorized"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenai%2Fgym","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopenai%2Fgym","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenai%2Fgym/lists"}