{"id":13958521,"url":"https://github.com/daochenzha/rapid","last_synced_at":"2025-08-16T10:31:01.879Z","repository":{"id":61744624,"uuid":"330797687","full_name":"daochenzha/rapid","owner":"daochenzha","description":"[ICLR 2021] Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments.","archived":false,"fork":false,"pushed_at":"2023-03-24T06:44:16.000Z","size":1276,"stargazers_count":56,"open_issues_count":1,"forks_count":9,"subscribers_count":3,"default_branch":"main","last_synced_at":"2024-11-28T02:34:46.693Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/daochenzha.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-01-18T22:04:33.000Z","updated_at":"2024-09-09T11:11:31.000Z","dependencies_parsed_at":"2024-11-28T02:32:20.004Z","dependency_job_id":"7cb4de9b-9459-4897-a7d9-13f3c0382c68","html_url":"https://github.com/daochenzha/rapid","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daochenzha%2Frapid","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daochenzha%2Frapid/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daochenzha%2Frapid/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daochenzha%2Frapid/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/daochenzha","downloa
d_url":"https://codeload.github.com/daochenzha/rapid/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":230028646,"owners_count":18161960,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-08T13:01:41.702Z","updated_at":"2024-12-16T21:34:21.776Z","avatar_url":"https://github.com/daochenzha.png","language":"Python","readme":"# [ICLR 2021] RAPID: A Simple Approach for Exploration in Reinforcement Learning\nThis is the TensorFlow implementation of the ICLR 2021 paper [Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments](https://openreview.net/forum?id=MtEE0CktZht). We propose RAPID, a simple exploration method that scores previous episodes and reproduces good exploration behaviors with imitation learning.\n\u003cimg width=\"800\" src=\"./imgs/overview.png\" alt=\"overview\" /\u003e\n\nThe implementation is based on [OpenAI baselines](https://github.com/openai/baselines). For all experiments, add the option `--disable_rapid` to see the baseline result. RAPID achieves better performance and sample efficiency than state-of-the-art exploration methods on [MiniGrid environments](https://github.com/maximecb/gym-minigrid).\n\u003cimg width=\"800\" src=\"./imgs/rendering.png\" alt=\"rendering\" /\u003e\n\u003cimg width=\"800\" src=\"./imgs/performance.png\" alt=\"performance\" /\u003e\n\nMiscellaneous Resources: Have you heard of data-centric AI? 
Please check out our [data-centric AI survey](https://arxiv.org/abs/2303.10158) and [awesome data-centric AI resources](https://github.com/daochenzha/data-centric-AI)!\n\n## Cite This Work\n```\n@inproceedings{zha2021rank,\ntitle={Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments},\nauthor={Daochen Zha and Wenye Ma and Lei Yuan and Xia Hu and Ji Liu},\nbooktitle={International Conference on Learning Representations},\nyear={2021},\nurl={https://openreview.net/forum?id=MtEE0CktZht}\n}\n```\n\n## Installation\nPlease make sure that you have **Python 3.5+** installed. First, clone the repo with\n```\ngit clone https://github.com/daochenzha/rapid.git\ncd rapid\n```\nThen install the dependencies with **pip**:\n```\npip install -r requirements.txt\npip install -e .\n```\nTo run MuJoCo experiments, you need a MuJoCo license. Install `mujoco-py` with\n```\npip install mujoco-py==1.50.1.68\n```\n\n## How to run the code\nThe entry point is `main.py`. Some important hyperparameters are as follows.\n*   `--env`: the environment to use\n*   `--num_timesteps`: the number of timesteps to run\n*   `--w0`: the weight of the extrinsic reward score\n*   `--w1`: the weight of the local score\n*   `--w2`: the weight of the global score\n*   `--sl_until`: perform the RAPID update until this timestep\n*   `--disable_rapid`: disable RAPID to compare with the PPO baseline\n*   `--log_dir`: the directory to save logs\n\n## Reproducing the results on MiniGrid environments\nFor MiniGrid-KeyCorridorS3R2, run\n```\npython main.py --env MiniGrid-KeyCorridorS3R2-v0 --sl_until 1200000\n```\nFor MiniGrid-KeyCorridorS3R3, run\n```\npython main.py --env MiniGrid-KeyCorridorS3R3-v0 --sl_until 3000000\n```\nFor other environments, run\n```\npython main.py --env $ENV\n```\nwhere `$ENV` is the environment name.\n\n## Run MiniWorld Maze environment\n1. 
Clone the latest master branch of MiniWorld and install it\n```\ngit clone -b master --single-branch --depth=1 https://github.com/maximecb/gym-miniworld.git\ncd gym-miniworld\npip install -e .\ncd ..\n```\n2. Start training with\n```\npython main.py --env MiniWorld-MazeS5-v0 --num_timesteps 5000000 --nsteps 512 --w1 0.00001 --w2 0.0 --log_dir results/MiniWorld-MazeS5-v0\n```\nFor servers without a display, you may install `xvfb` with\n```\napt-get install xvfb\n```\nThen start training with\n```\nxvfb-run -a -s \"-screen 0 1024x768x24 -ac +extension GLX +render -noreset\" python main.py --env MiniWorld-MazeS5-v0 --num_timesteps 5000000 --nsteps 512 --w1 0.00001 --w2 0.0 --log_dir results/MiniWorld-MazeS5-v0\n```\n\n## Run MuJoCo experiments\nRun\n```\npython main.py --seed 0 --env $ENV --num_timesteps 5000000 --lr 5e-4 --w1 0.001 --w2 0.0 --log_dir logs/$ENV/rapid\n```\nwhere `$ENV` can be `EpisodeSwimmer-v2`, `EpisodeHopper-v2`, `EpisodeWalker2d-v2`, `EpisodeInvertedPendulum-v2`, `DensityEpisodeSwimmer-v2`, or `ViscosityEpisodeSwimmer-v2`.\n","funding_links":[],"categories":["Time Series"],"sub_categories":["Web Services_Other"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdaochenzha%2Frapid","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdaochenzha%2Frapid","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdaochenzha%2Frapid/lists"}