# Awesome-Imitation-Learning: [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
A curated list of awesome imitation learning (including inverse reinforcement learning and behavior cloning) resources, inspired by [awesome-php](https://github.com/ziadoz/awesome-php).
## Contribution
Please feel free to send me a [pull request](https://github.com/kristery/Awesome-Imitation-Learning/pulls) or an email ([email protected]) to add links.

## Table of Contents
- [Papers](#papers)
- [Tutorials and Talks](#tutorials-and-talks)
- [Blogs](#blogs)

## Papers
### General settings
* [How Resilient Are Imitation Learning Methods to Sub-optimal Experts?](https://link.springer.com/chapter/10.1007/978-3-031-21689-3_32), Gavenski et al., BRACIS 2023
* [IQ-Learn: Inverse soft-Q Learning for Imitation](https://arxiv.org/abs/2106.12142), D. Garg et al., NeurIPS 2021
* [Learning from Imperfect Demonstrations from Agents with Varying Dynamics](https://arxiv.org/abs/2103.05910), Z. Cao et al., ICRA 2021
* [Robust Imitation Learning from Noisy Demonstrations](https://proceedings.mlr.press/v130/tangkaratt21a.html), V. Tangkaratt et al., AISTATS 2021
* [Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate](https://arxiv.org/abs/2003.03709), Y. Zhang et al., ICML 2020
* [Provable Representation Learning for Imitation Learning via Bi-level Optimization](https://arxiv.org/pdf/2002.10544), S. Arora et al., ICML 2020
* [Domain Adaptive Imitation Learning](https://arxiv.org/pdf/1910.00105), K. Kim et al., ICML 2020
* [VILD: Variational Imitation Learning with Diverse-quality Demonstrations](https://proceedings.mlr.press/v119/tangkaratt20a.html), V. Tangkaratt et al., ICML 2020
* [Imitation Learning from Imperfect Demonstration](http://proceedings.mlr.press/v97/wu19a/wu19a.pdf), Y. Wu et al., ICML 2019
* [A Divergence Minimization Perspective on Imitation Learning Methods](https://arxiv.org/abs/1911.02256), S. Ghasemipour et al., CoRL 2019
* [Sample-Efficient Imitation Learning via Generative Adversarial Nets](https://arxiv.org/abs/1809.02064), L. Blonde et al., AISTATS 2019
* [Sample Efficient Imitation Learning for Continuous Control](https://openreview.net/pdf?id=BkN5UoAqF7), F. Sasaki et al., ICLR 2019
* [Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation](http://proceedings.mlr.press/v97/wang19d.html), R. Wang et al., ICML 2019
* [Uncertainty-Aware Data Aggregation for Deep Imitation Learning](https://arxiv.org/abs/1905.02780), Y. Cui et al., ICRA 2019
* [Goal-conditioned Imitation Learning](https://openreview.net/pdf?id=HkglHcSj2N), Y. Ding et al., ICML Workshop 2019
* [Adversarial Imitation Learning from Incomplete Demonstrations](https://arxiv.org/pdf/1905.12310.pdf), M. Sun et al., 2019
* [Generative Adversarial Self-Imitation Learning](https://openreview.net/forum?id=HJeABnCqKQ), J. Oh et al., 2019
* [Wasserstein Adversarial Imitation Learning](https://arxiv.org/pdf/1906.08113.pdf), H. Xiao et al., 2019
* [Learning Plannable Representations with Causal InfoGAN](http://papers.nips.cc/paper/8090-learning-plannable-representations-with-causal-infogan.pdf), T. Kurutach et al., NeurIPS 2018
* [Self-Imitation Learning](https://arxiv.org/abs/1806.05635), J. Oh et al., ICML 2018
* [Deep Q-learning from Demonstrations](https://arxiv.org/abs/1704.03732), T. Hester et al., AAAI 2018
* [An Algorithmic Perspective on Imitation Learning](https://www.nowpublishers.com/article/Details/ROB-053), T. Osa et al., 2018
* [Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning](https://arxiv.org/pdf/1809.02925.pdf), I. Kostrikov et al., 2018
* [Universal Planning Networks](https://arxiv.org/pdf/1804.00645.pdf), A. Srinivas et al., 2018
* [Learning to Search via Retrospective Imitation](https://authors.library.caltech.edu/92668/1/1804.00846.pdf), J. Song et al., 2018
* [Third-Person Imitation Learning](https://arxiv.org/abs/1703.01703), B. Stadie et al., ICLR 2017
* [RAIL: Risk-Averse Imitation Learning](https://arxiv.org/abs/1707.06658), A. Santara et al., NIPS 2017
* [Generative Adversarial Imitation Learning](https://arxiv.org/abs/1606.03476), J. Ho et al., NIPS 2016
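Several of the entries above build on GAIL (Ho et al., 2016), which trains a discriminator to tell expert (state, action) pairs from the policy's and feeds the discriminator's output back to the policy as a reward. A minimal sketch of that reward signal, using a hand-rolled logistic discriminator on synthetic data (all data and names here are illustrative, not from any of the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy (state, action) pairs: the expert's action tracks the state,
# the untrained policy's action is random noise.
states = rng.uniform(0.5, 1.5, size=(256, 1))
expert = np.hstack([states, 2.0 * states])               # expert: a = 2s
policy = np.hstack([states, rng.normal(size=(256, 1))])  # policy: a ~ N(0, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic discriminator D(s, a) = sigmoid(w . [s, a] + b), trained by
# gradient ascent to output 1 on expert pairs and 0 on policy pairs.
w, b = np.zeros(2), 0.0
for _ in range(500):
    d_exp = sigmoid(expert @ w + b)
    d_pol = sigmoid(policy @ w + b)
    # Gradient of  sum log D(expert) + sum log(1 - D(policy)).
    w += 1e-3 * (expert.T @ (1.0 - d_exp) - policy.T @ d_pol)
    b += 1e-3 * (np.sum(1.0 - d_exp) - np.sum(d_pol))

def surrogate_reward(pairs):
    """GAIL-style reward r(s, a) = -log(1 - D(s, a)) handed to the policy."""
    return -np.log(1.0 - sigmoid(pairs @ w + b) + 1e-8)

# Expert-like behaviour should now earn more reward than random behaviour.
print(surrogate_reward(expert).mean(), surrogate_reward(policy).mean())
```

In the full algorithm the policy is then updated with an RL step (TRPO in the original paper) against this reward, and the two networks are trained in alternation.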
### Applications
* [Model Imitation for Model-Based Reinforcement Learning](https://arxiv.org/abs/1909.11821), Y. Wu et al., 2019
* [Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations](https://arxiv.org/pdf/1907.03976.pdf), D. Brown et al., CoRL 2019
* [Task-Relevant Adversarial Imitation Learning](https://arxiv.org/abs/1910.01077), K. Zolna et al., 2019
* [Multi-Task Hierarchical Imitation Learning for Home Automation](http://ronberenstein.com/papers/CASE19_Multi-Task%20Hierarchical%20Imitation%20Learning%20for%20Home%20Automation%20%20.pdf), R. Fox et al., 2019
* [Imitation Learning for Human Pose Prediction](https://arxiv.org/pdf/1909.03449.pdf), B. Wang et al., 2019
* [Making Efficient Use of Demonstrations to Solve Hard Exploration Problems](https://arxiv.org/abs/1909.11821), C. Gulcehre et al., 2019
* [Imitation Learning from Video by Leveraging Proprioception](https://arxiv.org/pdf/1905.09335.pdf), F. Torabi et al., IJCAI 2019
* [Adversarial Imitation Learning from Incomplete Demonstrations](https://arxiv.org/abs/1905.12310), M. Sun et al., 2019
* [End-to-end Driving via Conditional Imitation Learning](https://arxiv.org/abs/1710.02410), F. Codevilla et al., ICRA 2018
* [R2P2: A ReparameteRized Pushforward Policy for Diverse, Precise Generative Path Forecasting](https://link.springer.com/chapter/10.1007/978-3-030-01261-8_47), N. Rhinehart et al., ECCV 2018 [[blog]](http://www.cs.cmu.edu/~nrhineha/R2P2.html)
* [End-to-End Learning Driver Policy using Moments Deep Neural Network](https://ieeexplore.ieee.org/abstract/document/8664869), D. Qian et al., ROBIO 2018
* [Learning Montezuma’s Revenge from a Single Demonstration](https://arxiv.org/pdf/1812.03381.pdf), T. Salimans et al., 2018
* [ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst](https://arxiv.org/pdf/1812.03079.pdf), M. Bansal et al., 2018
* [Video Imitation GAN: Learning control policies by imitating raw videos using generative adversarial reward estimation](https://arxiv.org/pdf/1810.01108.pdf), S. Chaudhury et al., 2018
* [Query-Efficient Imitation Learning for End-to-End Autonomous Driving](https://arxiv.org/abs/1605.06450), J. Zhang et al., 2016
### Survey papers
* [Imitation Learning: Progress, Taxonomies and Challenges](https://ieeexplore.ieee.org/document/9927439), Zheng et al., 2022
* [Deep Reinforcement Learning: An Overview](https://arxiv.org/abs/1701.07274), Y. Li, 2018
* [A Brief Survey of Deep Reinforcement Learning](https://arxiv.org/abs/1708.05866), K. Arulkumaran et al., 2017
* [Imitation Learning: A Survey of Learning Methods](http://www.open-access.bcu.ac.uk/5045/1/Imitation%20Learning%20A%20Survey%20of%20Learning%20Methods.pdf), A. Hussein et al.

### Robotics and Vision
* [Graph-Structured Visual Imitation](https://arxiv.org/abs/1907.05518), M. Sieb et al., CoRL 2019
* [On-Policy Robot Imitation Learning from a Converging Supervisor](https://arxiv.org/abs/1907.03423), A. Balakrishna et al., CoRL 2019
* [Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Reward](https://pdfs.semanticscholar.org/8186/04245973bb30ad021728149a89157b3b2780.pdf), M. Vecerik et al., 2017
### Cold-start methods
* [Zero-Shot Visual Imitation](http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w40/Pathak_Zero-Shot_Visual_Imitation_CVPR_2018_paper.pdf), D. Pathak et al., ICLR 2018
* [One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks](https://arxiv.org/pdf/1810.11043.pdf), T. Yu et al., 2018
* [One-Shot Imitation Learning](http://papers.nips.cc/paper/6709-one-shot-imitation-learning), Y. Duan et al., NIPS 2017
### Learning multi-modal behaviors
* [Learning a Multi-Modal Policy via Imitating Demonstrations with Mixed Behaviors](https://arxiv.org/pdf/1903.10304), F. Hsiao et al., 2019
* [Watch, Try, Learn: Meta-Learning from Demonstrations and Reward](https://arxiv.org/pdf/1906.03352), A. Zhou et al., 2019
* [Shared Multi-Task Imitation Learning for Indoor Self-Navigation](https://arxiv.org/pdf/1808.04503.pdf), J. Xu et al., 2018
* [Robust Imitation of Diverse Behaviors](http://papers.nips.cc/paper/7116-robust-imitation-of-diverse-behaviors), Z. Wang et al., NIPS 2017
* [Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets](http://papers.nips.cc/paper/6723-multi-modal-imitation-learning-from-unstructured-demonstrations-using-generative-adversarial-nets), K. Hausman et al., NIPS 2017
* [InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations](http://papers.nips.cc/paper/6971-infogail-interpretable-imitation-learning-from-visual-demonstrations), Y. Li et al., NIPS 2017

### Hierarchical approaches
* [Learning Compound Tasks without Task-specific Knowledge via Imitation and Self-supervised Learning](https://proceedings.icml.cc/static/paper_files/icml/2020/4283-Paper.pdf), S. Lee et al., ICML 2020
* [CompILE: Compositional Imitation Learning and Execution](https://arxiv.org/pdf/1812.01483.pdf), T. Kipf et al., ICML 2019
* [Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information](https://openreview.net/pdf?id=BJeWUs05KQ), M. Sharma et al., ICLR 2019
* [Hierarchical Imitation and Reinforcement Learning](https://arxiv.org/abs/1803.00590), H. Le et al., ICML 2018
* [OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning](https://arxiv.org/pdf/1709.06683.pdf), P. Henderson et al., AAAI 2018
### Learning from human preference
* [Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences](https://arxiv.org/abs/2002.09089), D. Brown et al., ICML 2020
* [A Low-Cost Ethics Shaping Approach for Designing Reinforcement Learning Agents](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16195/15869), Y. Wu et al., AAAI 2018
* [Deep Reinforcement Learning from Human Preferences](http://papers.nips.cc/paper/7017-deep-reinforcement-learning-from-human-preferences), P. Christiano et al., NIPS 2017
### Learning from observations
* [Self-Supervised Adversarial Imitation Learning](https://arxiv.org/abs/2304.10914), M. Juarez et al., IJCNN 2023
* [MobILE: Model-Based Imitation Learning From Observation Alone](https://proceedings.neurips.cc/paper_files/paper/2021/hash/f06048518ff8de2035363e00710c6a1d-Abstract.html), Kidambi et al., NeurIPS 2021
* [Off-Policy Imitation Learning from Observations](https://arxiv.org/abs/2102.13185), Zhu et al., NeurIPS 2020
* [Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement](https://arxiv.org/abs/1910.04417), C. Yang et al., NeurIPS 2019
* [To Follow or not to Follow: Selective Imitation Learning from Observations](https://arxiv.org/abs/1912.07670), Y. Lee et al., CoRL 2019
* [Provably Efficient Imitation Learning from Observation Alone](http://proceedings.mlr.press/v97/sun19b.html), W. Sun et al., ICML 2019
* [Recent Advances in Imitation Learning from Observation](https://arxiv.org/pdf/1905.13566.pdf), F. Torabi et al., IJCAI 2019
* [Adversarial Imitation Learning from State-only Demonstrations](https://dl.acm.org/citation.cfm?id=3332067), F. Torabi et al., AAMAS 2019
* [Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation](https://arxiv.org/abs/1707.03374), Y. Liu et al., 2018
* [Observational Learning by Reinforcement Learning](https://arxiv.org/abs/1706.06617), D. Borsa et al., 2017
### Model-based approaches
* [Safe end-to-end imitation learning for model predictive control](https://arxiv.org/pdf/1803.10231.pdf), K. Lee et al., ICRA 2019
* [Deep Imitative Models for Flexible Inference, Planning, and Control](https://arxiv.org/abs/1810.06544), N. Rhinehart et al., 2019 [[blog]](https://sites.google.com/view/imitative-models)
* [Model-based imitation learning from state trajectories](https://openreview.net/forum?id=S1GDXzb0b&noteId=S1GDXzb0b), S. Chaudhury et al., 2018
* [End-to-End Differentiable Adversarial Imitation Learning](http://proceedings.mlr.press/v70/baram17a/baram17a.pdf), N. Baram et al., ICML 2017
### Behavior cloning
* [Imitating Unknown Policies via Exploration](https://arxiv.org/abs/2008.05660), N. Gavenski et al., BMVC 2020
* [Augmented Behavioral Cloning from Observation](https://arxiv.org/pdf/2004.13529.pdf), M. Juarez et al., IJCNN 2020
* [Truly Batch Apprenticeship Learning with Deep Successor Features](https://arxiv.org/pdf/1903.10077), D. Lee et al., 2019
* [SQIL: Imitation Learning via Regularized Behavioral Cloning](https://arxiv.org/pdf/1905.11108), S. Reddy et al., 2019
* [Behavioral Cloning from Observation](https://arxiv.org/abs/1805.01954), F. Torabi et al., IJCAI 2018
* [Causal Confusion in Imitation Learning](https://people.eecs.berkeley.edu/~dineshjayaraman/projects/causal_confusion_nips18.pdf), P. Haan et al., NeurIPS 2018
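Behavioral cloning, the baseline that the entries above extend, reduces imitation to supervised learning on expert (state, action) pairs. A minimal least-squares sketch on synthetic demonstrations (the data and the linear policy class are illustrative assumptions; in practice a neural network regressor replaces `lstsq`):

```python
import numpy as np

rng = np.random.default_rng(1)

# Expert demonstrations: states and the actions the expert took in them.
# The (hidden) expert policy here is a = 3*s1 - 2*s2 plus a little noise.
demo_states = rng.normal(size=(500, 2))
demo_actions = demo_states @ np.array([3.0, -2.0]) + 0.01 * rng.normal(size=500)

# Behavioral cloning = regress actions on states over the demonstration set.
theta, *_ = np.linalg.lstsq(demo_states, demo_actions, rcond=None)

def cloned_policy(state):
    """Predict the expert's action in a given state."""
    return state @ theta

# On a held-out state the clone should reproduce the expert's action.
print(cloned_policy(np.array([1.0, 1.0])))  # ≈ 3*1 - 2*1 = 1.0
```

The well-known weakness motivating many papers in this list (e.g. DAgger-style aggregation and the causal-confusion work above) is that nothing constrains the clone on states outside the demonstration distribution, so small errors compound at test time.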
### Imitation with rewards
* [Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning](https://arxiv.org/abs/1910.11956), A. Gupta et al., CoRL 2019
* [Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Generative Model](https://arxiv.org/pdf/1907.02140.pdf), A. Kinose et al., 2019
* [Reinforced Imitation in Heterogeneous Action Space](https://arxiv.org/pdf/1904.03438), K. Zolna et al., 2019
* [Reinforcement and Imitation Learning for Diverse Visuomotor Skills](https://arxiv.org/abs/1802.09564), Y. Zhu et al., RSS 2018
* [Policy Optimization with Demonstrations](http://proceedings.mlr.press/v80/kang18a.html), B. Kang et al., ICML 2018
* [Reinforcement Learning from Imperfect Demonstrations](https://arxiv.org/pdf/1802.05313.pdf), Y. Gao et al., ICML Workshop 2018
* [Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning](https://arxiv.org/pdf/1812.08904), G. Cruz Jr et al., 2018
* [Sparse Reward Based Manipulator Motion Planning by Using High Speed Learning from Demonstrations](https://ieeexplore.ieee.org/abstract/document/8665328), G. Zuo et al., ROBIO 2018
### Multi-agent systems
* [Independent Generative Adversarial Self-Imitation Learning in Cooperative Multiagent Systems](https://dl.acm.org/citation.cfm?id=3331837), X. Hao et al., AAMAS 2019
* [PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings](https://arxiv.org/abs/1905.01296), N. Rhinehart et al., 2019 [[blog]](https://sites.google.com/view/precog)

### Inverse reinforcement learning
* [Intrinsic Reward Driven Imitation Learning via Generative Model](https://arxiv.org/pdf/2006.15061), X. Yu et al., ICML 2020
* [Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning](https://proceedings.mlr.press/v100/park20a.html), D. Park et al., CoRL 2019
* [Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations](http://proceedings.mlr.press/v97/brown19a/brown19a.pdf), D. Brown et al., ICML 2019
* [Learning Reward Functions by Integrating Human Demonstrations and Preferences](https://arxiv.org/pdf/1906.08928), M. Palan et al., 2019
* [Learning Robust Rewards with Adversarial Inverse Reinforcement Learning](https://arxiv.org/abs/1710.11248), J. Fu et al., 2018
* [Model-Free Deep Inverse Reinforcement Learning by Logistic Regression](https://link.springer.com/article/10.1007/s11063-017-9702-7), E. Uchibe, 2018
* [Compatible Reward Inverse Reinforcement Learning](https://papers.nips.cc/paper/6800-compatible-reward-inverse-reinforcement-learning), A. Metelli et al., NIPS 2017
* [A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models](https://arxiv.org/pdf/1611.03852.pdf), C. Finn et al., NIPS Workshop 2016
* [Maximum Entropy Inverse Reinforcement Learning](https://www.aaai.org/Papers/AAAI/2008/AAAI08-227.pdf), B. Ziebart et al., AAAI 2008
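Maximum entropy IRL (Ziebart et al., above) models the expert as sampling trajectories with probability proportional to exp(θ·f(τ)), where f(τ) are the trajectory's feature counts, and fits θ by gradient ascent: expert feature counts minus the model's expected feature counts. A toy trajectory-level sketch of that gradient (the finite candidate-trajectory set and the data are illustrative simplifications; the full algorithm computes expected features with dynamic programming over an MDP):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 20 candidate trajectories, each summarized by a 3-dim feature
# vector f(tau).  Pretend the expert demonstrated the trajectory with the
# largest first feature.
features = rng.normal(size=(20, 3))
expert_idx = int(np.argmax(features[:, 0]))
expert_f = features[expert_idx]  # feature counts of the expert's demonstration

theta = np.zeros(3)  # linear reward weights: r(tau) = theta . f(tau)
for _ in range(200):
    # MaxEnt model: P(tau) proportional to exp(theta . f(tau)).
    logits = features @ theta
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # Log-likelihood gradient: expert features minus expected model features.
    theta += 0.1 * (expert_f - p @ features)

# The learned reward should rank the expert's trajectory highest.
print(int(np.argmax(features @ theta)))
```

The same expert-minus-expected-features gradient is what adversarial methods such as GAIL and AIRL estimate implicitly with a discriminator instead of an explicit partition function.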
### POMDP
* [Learning Belief Representations for Imitation Learning in POMDPs](https://arxiv.org/pdf/1906.09510.pdf), T. Gangwani et al., 2019

### Planning
* [Dyna-AIL: Adversarial Imitation Learning by Planning](https://arxiv.org/abs/1903.03234), V. Saxena et al., 2019

### Representation learning
* [Visual Adversarial Imitation Learning using Variational Models](https://proceedings.neurips.cc/paper/2021/hash/1796a48fa1968edd5c5d10d42c7b1813-Abstract.html), R. Rafailov et al., NeurIPS 2021
* [An Empirical Investigation of Representation Learning for Imitation](https://openreview.net/forum?id=kBNhgqXatI), X. Chen et al., NeurIPS 2021
* [Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning](https://ieeexplore.ieee.org/abstract/document/9636363?casa_token=h9sZkw2gl3gAAAAA:jvlP3duYMpTJRczH8uHwKvqxvQDccsplK8eF_Hd1mYEa0CC6Tso7z7iWSWiwoIQmQdhTLg7f), J. Shang et al., IROS 2021
* [The Surprising Effectiveness of Representation Learning for Visual Imitation](https://arxiv.org/abs/2112.01511), J. Pari et al., 2021
* [Provable Representation Learning for Imitation Learning via Bi-level Optimization](http://proceedings.mlr.press/v119/arora20a.html), S. Arora et al., ICML 2020
* [Causal Confusion in Imitation Learning](https://proceedings.neurips.cc/paper/2019/hash/947018640bf36a2bb609d3557a285329-Abstract.html), P. Haan et al., NeurIPS 2019

## Tutorials and Talks
* [2018 ICML](https://www.youtube.com/watch?v=6rZTaboSY4k) [(Slides)](https://sites.google.com/view/icml2018-imitation-learning/)
* [Imitation learning basic (National Taiwan University)](https://www.youtube.com/watch?v=rOho-2oJFeA)
* [New Frontiers in Imitation Learning (2017)](https://www.youtube.com/watch?v=4PnNlvPGbUQ)
* [Unity Course](https://www.youtube.com/watch?v=uiutRBXfEbg)

## Blogs
* [Introduction to Imitation Learning](https://blog.statsbot.co/introduction-to-imitation-learning-32334c3b1e7a)

### Materials
* [Imitation Learning](http://ciml.info/dl/v0_99/ciml-v0_99-ch18.pdf)
* [CMU Imitation Learning](https://katefvision.github.io/katefSlides/immitation_learning_I_katef.pdf)
* [Deep Reinforcement Learning via Imitation Learning](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=10&cad=rja&uact=8&ved=2ahUKEwi7tJ_gvI_eAhXGE7wKHV7WDoUQFjAJegQICRAC&url=https%3A%2F%2Fbcourses.berkeley.edu%2Fcourses%2F1453965%2Ffiles%2F69855652%2Fdownload%3Fverifier%3D5DYZT6niDXA1pc4fTuIndZ1tpIsJeCmcicRgcpY2%26wrap%3D1&usg=AOvVaw2HHdnf9uYaJalo6t9kv46s), S. Levine

## Licenses
[![CC0](http://i.creativecommons.org/p/zero/1.0/88x31.png)](http://creativecommons.org/publicdomain/zero/1.0/)
To the extent possible under law, [Yueh-Hua Wu](https://kristery.github.io/) has waived all copyright and related or neighboring rights to this work.