{"id":13415758,"url":"https://github.com/yandexdataschool/Practical_RL","last_synced_at":"2025-03-14T23:31:05.940Z","repository":{"id":16370098,"uuid":"79821807","full_name":"yandexdataschool/Practical_RL","owner":"yandexdataschool","description":"A course in reinforcement learning in the wild","archived":false,"fork":false,"pushed_at":"2025-03-02T20:41:32.000Z","size":47966,"stargazers_count":6055,"open_issues_count":39,"forks_count":1715,"subscribers_count":208,"default_branch":"master","last_synced_at":"2025-03-11T18:17:24.289Z","etag":null,"topics":["course-materials","deep-learning","deep-reinforcement-learning","git-course","hacktoberfest","keras","mooc","pytorch","pytorch-tutorials","reinforcement-learning","tensorflow"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"unlicense","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/yandexdataschool.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-01-23T16:18:00.000Z","updated_at":"2025-03-11T13:00:49.000Z","dependencies_parsed_at":"2023-02-17T07:16:08.979Z","dependency_job_id":"5ee2f50f-b85d-4ae3-b81e-617a39de81ef","html_url":"https://github.com/yandexdataschool/Practical_RL","commit_stats":{"total_commits":1205,"total_committers":96,"mean_commits":"12.552083333333334","dds":0.4970954356846473,"last_synced_commit":"50381358a03a9196ebb6fd39be2751aa4167ad78"},"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yandexdataschool%2FPractical_RL","tags_url":"http
s://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yandexdataschool%2FPractical_RL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yandexdataschool%2FPractical_RL/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yandexdataschool%2FPractical_RL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/yandexdataschool","download_url":"https://codeload.github.com/yandexdataschool/Practical_RL/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243663450,"owners_count":20327299,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["course-materials","deep-learning","deep-reinforcement-learning","git-course","hacktoberfest","keras","mooc","pytorch","pytorch-tutorials","reinforcement-learning","tensorflow"],"created_at":"2024-07-30T21:00:51.896Z","updated_at":"2025-03-14T23:31:05.933Z","avatar_url":"https://github.com/yandexdataschool.png","language":"Jupyter Notebook","readme":"# Practical_RL\n\nAn open course on reinforcement learning in the wild.\nTaught on-campus at [HSE](https://cs.hse.ru) and [YSDA](https://yandexdataschool.com/) and maintained to be friendly to online students (both English and Russian).\n\n\n#### Manifesto:\n* __Optimize for the curious.__ For all the materials that aren’t covered in detail, there are links to more information and related materials (D.Silver/Sutton/blogs/whatever). 
Assignments will have bonus sections if you want to dig deeper.\n* __Practicality first.__ Everything essential to solving reinforcement learning problems is worth mentioning. We won't shy away from covering tricks and heuristics. For every major idea there should be a lab that makes you “feel” it on a practical problem.\n* __Git-course.__ Know a way to make the course better? Noticed a typo in a formula? Found a useful link? Made the code more readable? Made a version for an alternative framework? You're awesome! [Pull-request](https://help.github.com/articles/about-pull-requests/) it!\n[![Github contributors](https://img.shields.io/github/contributors/yandexdataschool/Practical_RL.svg?logo=github\u0026logoColor=white)](https://github.com/yandexdataschool/Practical_RL/graphs/contributors)\n\n# Course info\n\n* __FAQ:__ [About the course](https://github.com/yandexdataschool/Practical_RL/wiki/Practical-RL), [Technical issues thread](https://github.com/yandexdataschool/Practical_RL/issues/1), [Lecture Slides](https://yadi.sk/d/loPpY45J3EAYfU), [Online Student Survival Guide](https://github.com/yandexdataschool/Practical_RL/wiki/Online-student's-survival-guide)\n\n* Anonymous [feedback form](https://docs.google.com/forms/d/e/1FAIpQLSdurWw97Sm9xCyYwC8g3iB5EibITnoPJW2IkOVQYE_kcXPh6Q/viewform).\n\n* Virtual course environment:\n    * [__Google Colab__](https://colab.research.google.com/) - choose Open -\u003e GitHub -\u003e yandexdataschool/Practical_RL -\u003e {branch name} and select any notebook you want.\n    * [Installing dependencies](https://github.com/yandexdataschool/Practical_RL/issues/1) on your local machine (recommended).\n    * Alternative: [Azure Notebooks](https://notebooks.azure.com/).\n\n\n# Additional materials\n* [RL reading group](https://github.com/yandexdataschool/Practical_RL/wiki/RL-reading-group)\n\n\n# Syllabus\n\nThe syllabus is approximate: the lectures may occur in a slightly different order and some topics may end up taking two weeks.\n\n* 
[__week01_intro__](./week01_intro) Introduction\n  * Lecture: RL problems around us. Decision processes. Stochastic optimization, Crossentropy method. Parameter space search vs action space search.\n  * Seminar: Welcome to OpenAI Gym. Tabular CEM for Taxi-v0, deep CEM for box2d environments.\n  * Homework description - see week1/README.md.\n\n* [__week02_value_based__](./week02_value_based) Value-based methods\n  * Lecture: Discounted reward MDP. Value-based approach. Value iteration. Policy iteration. Discounted reward fails.\n  * Seminar: Value iteration.\n  * Homework description - see week2/README.md.\n\n* [__week03_model_free__](./week03_model_free) Model-free reinforcement learning\n  * Lecture: Q-learning. SARSA. Off-policy vs on-policy algorithms. N-step algorithms. TD(Lambda).\n  * Seminar: Q-learning vs SARSA vs Expected Value SARSA\n  * Homework description - see week3/README.md.\n\n* [__recap_deep_learning__](./week04_\\[recap\\]_deep_learning) - deep learning recap\n  * Lecture: Deep learning 101\n  * Seminar: Intro to PyTorch/TensorFlow, simple image classification with convnets\n\n* [__week04_approx_rl__](./week04_approx_rl) Approximate (deep) RL\n  * Lecture: Infinite/continuous state space. Value function approximation. Convergence conditions. Multiple agents trick; experience replay, target networks, double/dueling/bootstrap DQN, etc.\n  * Seminar: Approximate Q-learning with experience replay. (CartPole, Atari)\n\n* [__week05_explore__](./week05_explore) Exploration\n  * Lecture: Contextual bandits. Thompson Sampling, UCB, Bayesian UCB. Exploration in model-based RL, MCTS. \"Deep\" heuristics for exploration.\n  * Seminar: Bayesian exploration for contextual bandits. UCB for MCTS.\n\n* [__week06_policy_based__](./week06_policy_based) Policy Gradient methods\n  * Lecture: Motivation for policy-based, policy gradient, log-derivative trick, REINFORCE/crossentropy method, variance reduction (baseline), advantage actor-critic (incl. 
GAE)\n  * Seminar: REINFORCE, advantage actor-critic\n\n* [__week07_seq2seq__](./week07_seq2seq) Reinforcement Learning for Sequence Models\n  * Lecture: Problems with sequential data. Recurrent neural networks. Backprop through time. Vanishing \u0026 exploding gradients. LSTM, GRU. Gradient clipping\n  * Seminar: character-level RNN language model\n\n* [__week08_pomdp__](./week08_pomdp) Partially Observed MDP\n  * Lecture: POMDP intro. POMDP learning (agents with memory). POMDP planning (POMCP, etc.)\n  * Seminar: Deep Kung-Fu \u0026 Doom with recurrent A3C and DRQN\n\n* [__week09_policy_II__](./week09_policy_II) Advanced policy-based methods\n  * Lecture: Trust region policy optimization. NPO/PPO. Deterministic policy gradient. DDPG\n  * Seminar: Approximate TRPO for simple robot control.\n\n* [__week10_planning__](./week10_planning) Model-based RL \u0026 Co\n  * Lecture: Model-Based RL, Planning in General, Imitation Learning and Inverse Reinforcement Learning\n  * Seminar: MCTS for toy tasks\n\n* [__yet_another_week__](./yet_another_week) Inverse RL and Imitation Learning\n  * All that cool RL stuff that you won't learn from this course :)\n\n\n# Course staff\nCourse materials and teaching by: _[unordered]_\n- [Pavel Shvechikov](https://github.com/pshvechikov) - lectures, seminars, hw checkups, reading group\n- [Nikita Putintsev](https://github.com/qwasser) - seminars, hw checkups, organizing our hot mess\n- [Alexander Fritsler](https://github.com/Fritz449) - lectures, seminars, hw checkups\n- [Oleg Vasilev](https://github.com/Omrigan) - seminars, hw checkups, technical support\n- [Dmitry Nikulin](https://github.com/pastafarianist) - tons of fixes, far and wide\n- [Mikhail Konobeev](https://github.com/MichaelKonobeev) - seminars, hw checkups\n- [Ivan Kharitonov](https://github.com/neer201) - seminars, hw checkups\n- [Ravil Khisamov](https://github.com/zshrav) - seminars, hw checkups\n- [Anna Klepova](https://github.com/q0o0p) - hw checkups\n- [Fedor 
Ratnikov](https://github.com/justheuristic) - admin stuff\n\n# Contributions\n* Using pictures from [Berkeley AI course](http://ai.berkeley.edu/home.html)\n* Massively referring to [CS294](http://rll.berkeley.edu/deeprlcourse/)\n* Several TensorFlow assignments by [Scitator](https://github.com/Scitator)\n* A lot of fixes from [arogozhnikov](https://github.com/arogozhnikov)\n* Other awesome people: see GitHub [contributors](https://github.com/yandexdataschool/Practical_RL/graphs/contributors)\n* [Alexey Umnov](https://github.com/alexeyum) helped us a lot during spring 2018\n\n","funding_links":[],"categories":["Jupyter Notebook","Uncategorized","时间序列","Machine Learning","Tutorials / Websites","📋 Contents","Table of Contents","Образование"],"sub_categories":["Uncategorized","网络服务_其他","Introduction to CS","Human Computer Interaction","📚 14. Resources \u0026 Learning","Courses"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fyandexdataschool%2FPractical_RL","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fyandexdataschool%2FPractical_RL","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fyandexdataschool%2FPractical_RL/lists"}