Awesome-Imitation-Learning
A curated list of awesome imitation learning resources and publications
https://github.com/kristery/Awesome-Imitation-Learning
- How Resilient Are Imitation Learning Methods to Sub-optimal Experts?
- IQ-Learn: Inverse soft-Q Learning for Imitation
- Learning from Imperfect Demonstrations from Agents with Varying Dynamics
- Robust Imitation Learning from Noisy Demonstrations
- Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate
- Provable Representation Learning for Imitation Learning via Bi-level Optimization
- Domain Adaptive Imitation Learning
- VILD: Variational Imitation Learning with Diverse-quality Demonstrations
- Imitation Learning from Imperfect Demonstration
- A Divergence Minimization Perspective on Imitation Learning Methods
- Sample-Efficient Imitation Learning via Generative Adversarial Nets
- Sample Efficient Imitation Learning for Continuous Control
- Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation
- Uncertainty-Aware Data Aggregation for Deep Imitation Learning
- Goal-conditioned Imitation Learning
- Adversarial Imitation Learning from Incomplete Demonstrations
- Generative Adversarial Self-Imitation Learning
- Wasserstein Adversarial Imitation Learning
- Learning Plannable Representations with Causal InfoGAN
- Self-Imitation Learning
- Deep Q-learning from Demonstrations
- An Algorithmic Perspective on Imitation Learning
- Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
- Universal Planning Networks
- Learning to Search via Retrospective Imitation
- Third-Person Imitation Learning
- RAIL: Risk-Averse Imitation Learning
- Generative Adversarial Imitation Learning
- Model Imitation for Model-Based Reinforcement Learning
- Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations
- Task-Relevant Adversarial Imitation Learning
- Multi-Task Hierarchical Imitation Learning for Home Automation
- Imitation Learning for Human Pose Prediction
- Making Efficient Use of Demonstrations to Solve Hard Exploration Problems
- Imitation Learning from Video by Leveraging Proprioception
- End-to-end Driving via Conditional Imitation Learning
- R2P2: A ReparameteRized Pushforward Policy for Diverse, Precise Generative Path Forecasting
- End-to-End Learning Driver Policy using Moments Deep Neural Network
- Learning Montezuma’s Revenge from a Single Demonstration
- ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
- Video Imitation GAN: Learning control policies by imitating raw videos using generative adversarial reward estimation
- Query-Efficient Imitation Learning for End-to-End Autonomous Driving
- Imitation Learning: Progress, Taxonomies and Challenges
- Deep Reinforcement Learning: An Overview
- A Brief Survey of Deep Reinforcement Learning
- Imitation Learning: A Survey of Learning Methods
- Graph-Structured Visual Imitation
- On-Policy Robot Imitation Learning from a Converging Supervisor
- Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Reward
- Zero-Shot Visual Imitation
- One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks
- One-Shot Imitation Learning
- Learning a Multi-Modal Policy via Imitating Demonstrations with Mixed Behaviors
- Watch, Try, Learn: Meta-Learning from Demonstrations and Reward
- Shared Multi-Task Imitation Learning for Indoor Self-Navigation
- Robust Imitation of Diverse Behaviors
- Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
- InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations
- Learning Compound Tasks without Task-specific Knowledge via Imitation and Self-supervised Learning
- CompILE: Compositional Imitation Learning and Execution
- Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information
- Hierarchical Imitation and Reinforcement Learning
- OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning
- Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences
- A Low-Cost Ethics Shaping Approach for Designing Reinforcement Learning Agents
- Deep Reinforcement Learning from Human Preferences
- Self-Supervised Adversarial Imitation Learning
- MobILE: Model-Based Imitation Learning From Observation Alone
- Off-Policy Imitation Learning from Observations
- Imitation Learning from Observations by Minimizing Inverse Dynamics Disagreement
- To Follow or not to Follow: Selective Imitation Learning from Observations
- Provably Efficient Imitation Learning from Observation Alone
- Recent Advances in Imitation Learning from Observation
- Adversarial Imitation Learning from State-only Demonstrations
- Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation
- Observational Learning by Reinforcement Learning
- Safe end-to-end imitation learning for model predictive control
- Deep Imitative Models for Flexible Inference, Planning, and Control
- Model-based imitation learning from state trajectories
- End-to-End Differentiable Adversarial Imitation Learning
- Imitating Unknown Policies via Exploration
- Augmented Behavioral Cloning from Observation
- Truly Batch Apprenticeship Learning with Deep Successor Features
- SQIL: Imitation Learning via Regularized Behavioral Cloning
- Behavioral Cloning from Observation
- Causal Confusion in Imitation Learning
- Relay Policy Learning: Solving Long-Horizon Tasks via Imitation and Reinforcement Learning
- Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Generative Model
- Reinforced Imitation in Heterogeneous Action Space
- Reinforcement and Imitation Learning for Diverse Visuomotor Skills
- Policy Optimization with Demonstrations
- Reinforcement Learning from Imperfect Demonstrations
- Pre-training with Non-expert Human Demonstration for Deep Reinforcement Learning
- Sparse Reward Based Manipulator Motion Planning by Using High Speed Learning from Demonstrations
- Independent Generative Adversarial Self-Imitation Learning in Cooperative Multiagent Systems
- PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings
- Intrinsic Reward Driven Imitation Learning via Generative Model
- Inferring Task Goals and Constraints using Bayesian Nonparametric Inverse Reinforcement Learning
- Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations
- Learning Reward Functions by Integrating Human Demonstrations and Preferences
- Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
- Model-Free Deep Inverse Reinforcement Learning by Logistic Regression
- Compatible Reward Inverse Reinforcement Learning
- A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models
- Maximum Entropy Inverse Reinforcement Learning
- Learning Belief Representations for Imitation Learning in POMDPs
- Dyna-AIL: Adversarial Imitation Learning by Planning
- Visual Adversarial Imitation Learning using Variational Models
- An Empirical Investigation of Representation Learning for Imitation
- Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning
- The Surprising Effectiveness of Representation Learning for Visual Imitation
- 2018 ICML tutorial on imitation learning
- Imitation learning basics (National Taiwan University)
- New Frontiers in Imitation Learning (2017)
- Unity Course
- Introduction to Imitation Learning
- Imitation Learning
- CMU Imitation Learning
- Deep Reinforcement Learning via Imitation Learning
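Many of the papers listed above build on behavioral cloning, the simplest imitation-learning baseline: reduce imitation to supervised learning on expert state-action pairs. A minimal sketch of that idea, using a hypothetical linear expert and toy data (the expert weights, noise level, and dataset size are all assumptions for illustration):

```python
import numpy as np

# Behavioral cloning: fit a policy that maps expert states to expert actions.
# Hypothetical toy setup: a noisy linear expert on a 2-D state/action space.
rng = np.random.default_rng(0)
W_expert = np.array([[1.0, -2.0],
                     [0.5, 0.3]])  # assumed "true" state->action map

states = rng.normal(size=(500, 2))                                # expert states
actions = states @ W_expert.T + 0.01 * rng.normal(size=(500, 2))  # expert actions

# Least-squares fit of a linear policy (the simplest possible BC "network").
X, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_bc = X.T  # recovered policy parameters

# With enough demonstrations and low noise, the clone matches the expert.
err = np.max(np.abs(W_bc - W_expert))
print(f"max parameter error: {err:.4f}")
```

In practice the linear map is replaced by a neural network and the least-squares solve by gradient descent; the covariate-shift failure mode of this naive approach is what methods such as DAgger and the GAIL family address.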
License: CC0
Maintained by Yueh-Hua Wu