{"id":13958533,"url":"https://github.com/wangshusen/DRL","last_synced_at":"2025-07-21T00:31:16.377Z","repository":{"id":37391995,"uuid":"280908823","full_name":"wangshusen/DRL","owner":"wangshusen","description":"Deep Reinforcement Learning","archived":false,"fork":false,"pushed_at":"2022-12-10T13:25:34.000Z","size":201119,"stargazers_count":3658,"open_issues_count":41,"forks_count":615,"subscribers_count":41,"default_branch":"master","last_synced_at":"2025-03-11T20:37:37.121Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/wangshusen.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-07-19T16:48:21.000Z","updated_at":"2025-03-11T15:09:53.000Z","dependencies_parsed_at":"2022-07-10T14:17:16.777Z","dependency_job_id":null,"html_url":"https://github.com/wangshusen/DRL","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/wangshusen/DRL","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wangshusen%2FDRL","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wangshusen%2FDRL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wangshusen%2FDRL/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wangshusen%2FDRL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/wangshusen","download_url":"https://codeload.github.com/wangshusen/DRL/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wangshusen%2FDRL/sbom","h
ost":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":266221259,"owners_count":23894965,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-08T13:01:42.304Z","updated_at":"2025-07-21T00:31:11.368Z","avatar_url":"https://github.com/wangshusen.png","language":null,"readme":"# Deep Reinforcement Learning\n\n\n\n\n1. **Overview.**\n\n\n    * Reinforcement Learning \n    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/1_Basics_1.pdf)] \n    [[lecture note](https://github.com/wangshusen/DeepLearning/blob/master/LectureNotes/DRL/DRL.pdf)] \n    [[Video (in Chinese)](https://youtu.be/vmkRMvhCW5c)].\n\n    * Value-Based Learning \n    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/1_Basics_2.pdf)] \n    [[Video (in Chinese)](https://youtu.be/jflq6vNcZyA)].\n\n    * Policy-Based Learning \n    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/1_Basics_3.pdf)] \n    [[Video (in Chinese)](https://youtu.be/qI0vyfR2_Rc)].\n\n    * Actor-Critic Methods \n    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/1_Basics_4.pdf)] \n    [[Video (in Chinese)](https://youtu.be/xjd7Jq9wPQY)].\n\n    * AlphaGo \n    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/1_Basics_5.pdf)] \n    [[Video (in Chinese)](https://youtu.be/zHojAp5vkRE)].\n    \n    \n\n\n\n2. 
2. **TD Learning.**

    * Sarsa
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/2_TD_1.pdf)]
    [[Video (in Chinese)](https://youtu.be/-cYWdUubB6Q)].

    * Q-learning
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/2_TD_2.pdf)]
    [[Video (in Chinese)](https://youtu.be/Ymy2w3DGn2U)].

    * Multi-Step TD Target
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/2_TD_3.pdf)]
    [[Video (in Chinese)](https://youtu.be/UqTP138IATc)].

3. **Advanced Topics on Value-Based Learning.**

    * Experience Replay (ER) & Prioritized ER
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/3_DQN_1.pdf)]
    [[Video (in Chinese)](https://youtu.be/rhslMPmj7SY)].

    * Overestimation, Target Network, & Double DQN
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/3_DQN_2.pdf)]
    [[Video (in Chinese)](https://youtu.be/X2-56QN79zc)].

    * Dueling Networks
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/3_DQN_3.pdf)]
    [[Video (in Chinese)](https://youtu.be/DBux6cA0EoM)].

4. **Policy Gradient with Baseline.**

    * Policy Gradient with Baseline
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/4_Policy_1.pdf)]
    [[Video (in Chinese)](https://youtu.be/yNEqbptitZs)].

    * REINFORCE with Baseline
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/4_Policy_2.pdf)]
    [[Video (in Chinese)](https://youtu.be/Ob78ADXTQNo)].

    * Advantage Actor-Critic (A2C)
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/4_Policy_3.pdf)]
    [[Video (in Chinese)](https://youtu.be/mtT4TSGSon8)].

    * REINFORCE versus A2C
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/4_Policy_4.pdf)]
    [[Video (in Chinese)](https://youtu.be/hN9WMIMMeAI)].
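The TD methods in section 2 share the update form Q(s,a) ← Q(s,a) + α·(target − Q(s,a)); they differ only in the target. A hedged sketch of the tabular Q-learning variant, where the dict-backed table and hard-coded two-action space are illustrative assumptions, not code from this repository:

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step; Q maps (state, action) -> value.

    Assumes a two-action space {0, 1} for illustration.
    """
    # Q-learning target: bootstrap from the greedy action in s_next.
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in (0, 1))
    td_target = r + gamma * best_next
    td_error = td_target - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return Q[(s, a)]

Q = {}
q_learning_update(Q, 0, 1, 1.0, 0)   # first visit: value moves from 0 toward 1.0
```

Sarsa differs only in the target: it bootstraps from `Q(s_next, a_next)` for the action actually taken next, rather than the max over actions, which makes it on-policy.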
5. **Advanced Topics on Policy-Based Learning.**

    * Trust-Region Policy Optimization (TRPO)
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/5_Policy_1.pdf)]
    [[Video (in Chinese)](https://youtu.be/fcSYiyvPjm4)].

    * Partial Observation and RNNs.

6. **Dealing with Continuous Action Space.**

    * Discrete versus Continuous Control
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/6_Continuous_1.pdf)]
    [[Video (in Chinese)](https://youtu.be/rRIjgdxSvg8)].

    * Deterministic Policy Gradient (DPG) for Continuous Control
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/6_Continuous_2.pdf)]
    [[Video (in Chinese)](https://youtu.be/cmWejKRWLA8)].

    * Stochastic Policy Gradient for Continuous Control
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/6_Continuous_3.pdf)]
    [[Video (in Chinese)](https://youtu.be/McqFyl_W5Wc)].

7. **Multi-Agent Reinforcement Learning.**

    * Basics and Challenges
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/7_MARL_1.pdf)]
    [[Video (in Chinese)](https://youtu.be/KN-XMQFTD0o)].

    * Centralized versus Decentralized
    [[slides](https://github.com/wangshusen/DRL/blob/master/Slides/7_MARL_2.pdf)]
    [[Video (in Chinese)](https://youtu.be/0HV1hsjd1y8)].

8. **Imitation Learning.**

    * Inverse Reinforcement Learning.

    * Generative Adversarial Imitation Learning (GAIL).
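The policy-gradient-with-baseline idea from section 4 can be sketched on a toy two-armed bandit: a softmax policy, the score-function gradient ∇log π(a), and a running-mean reward as the baseline that reduces variance. The bandit payoffs, learning rate, and step count below are made up for illustration:

```python
import math
import random

def softmax(prefs):
    """Numerically stable softmax over preference parameters."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_bandit(steps=2000, lr=0.1, seed=0):
    """REINFORCE with a running-mean baseline on a toy 2-armed bandit."""
    rng = random.Random(seed)
    prefs = [0.0, 0.0]   # policy parameters (softmax preferences)
    baseline = 0.0       # running mean reward
    for t in range(1, steps + 1):
        probs = softmax(prefs)
        a = 0 if rng.random() < probs[0] else 1
        r = rng.gauss(1.0 if a == 1 else 0.0, 0.1)   # arm 1 pays more
        baseline += (r - baseline) / t
        advantage = r - baseline
        # grad of log pi(a) w.r.t. prefs[k] is 1{k == a} - probs[k]
        for k in range(2):
            grad = (1.0 if k == a else 0.0) - probs[k]
            prefs[k] += lr * advantage * grad
    return softmax(prefs)

probs = reinforce_bandit()   # the learned policy should strongly prefer arm 1
```

A2C replaces the running-mean baseline with a learned state-value function, which is what makes the advantage state-dependent.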