# Awesome-Embodied-Learning
This repository collects papers on embodied learning, including robot learning, humanoid motion learning, and embodiment co-design. Conference acceptance information is updated occasionally.

## Embodiment Co-design & Unified Control
- (ICLR 2025) **_BodyGen_**: BodyGen: Advancing Towards Efficient Embodiment Co-Design.
[Paper](https://openreview.net/pdf?id=cTR17xl89h)
[Code](https://github.com/GenesisOrigin/BodyGen)
[Project](https://genesisorigin.github.io/)
- (CoRL 2024) One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion.
[Paper](https://www.ias.informatik.tu-darmstadt.de/uploads/Team/NicoBohlinger/one_policy_to_run_them_all.pdf)
[Code](https://github.com/nico-bohlinger/one_policy_to_run_them_all)
[Project](https://nico-bohlinger.github.io/one_policy_to_run_them_all_website/)
- (ICLR 2024) **_DittoGym_**: DittoGym: Learning to Control Soft Shape-Shifting Robots.
[Paper](https://arxiv.org/abs/2401.13231)
[Code](https://github.com/suninghuang19/dittogym)
- (CoRL 2023) **_PreCo_**: PreCo: Enhancing Generalization in Co-Design of Modular Soft Robots via Brain-Body Pre-Training.
[Paper](https://proceedings.mlr.press/v229/wang23b/wang23b.pdf)
- (ICLR 2023) **_CuCo_**: Curriculum-based Co-design of Morphology and Control of Voxel-based Soft Robots.
[Paper](https://openreview.net/pdf?id=r9fX833CsuN)
[Code](https://github.com/Yuxing-Wang-THU/ModularEvoGym)
- (ICML 2023) **_SARD_**: Symmetry-Aware Robot Design with Structured Subgroups.
[Paper](https://openreview.net/pdf?id=jeHP6aBCBu)
[Code](https://github.com/drdh/SARD)
[Project](https://sites.google.com/view/robot-design)
- (ICLR 2022) Structure-Aware Transformer Policy for Inhomogeneous Multi-Task Reinforcement Learning.
[Paper](https://openreview.net/pdf?id=fy_XRVHqly)
- (ICLR 2022) **_MetaMorph_**: MetaMorph: Learning Universal Controllers with Transformers.
[Paper](https://arxiv.org/abs/2203.11931)
- (ICRA 2021) Multi-Objective Graph Heuristic Search for Terrestrial Robot Design.
[Paper](https://people.csail.mit.edu/jiex/papers/MOGHS/paper.pdf)
[Project](https://people.csail.mit.edu/jiex/papers/MOGHS/index.html)
- (ICLR 2022) **_Transform2Act_**: Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design.
[Paper](https://openreview.net/forum?id=UcDUxjPYWSr)
[Code](https://github.com/Khrylx/Transform2Act)
[Project](https://sites.google.com/view/transform2act)
- (Nature Communications 2021) **_DERL_**: Embodied intelligence via learning and evolution.
[Paper](https://www.nature.com/articles/s41467-021-25874-z.pdf)
[Code](https://github.com/agrimgupta92/derl)
- (ICML 2020) One Policy to Control Them All: Shared Modular Policies for Agent-Agnostic Control. *(see the sketch after this list)*
[Paper](https://www.cs.cmu.edu/~dpathak/papers/modular-rl.pdf)
[Code](https://github.com/huangwl18/modular-rl)
[Project](https://wenlong.page/modular-rl/)
- (SIGGRAPH-Asia 2020) **_RoboGrammar_**: RoboGrammar: Graph Grammar for Terrain-Optimized Robot Design.
[Paper](https://people.csail.mit.edu/jiex/papers/robogrammar/paper.pdf)
[Code](https://github.com/allanzhao/RoboGrammar/)
[Project](https://people.csail.mit.edu/jiex/papers/robogrammar/index.html)
- (ICLR 2019) **_NGE_**: Neural Graph Evolution: Towards Efficient Automatic Robot Design.
[Paper](https://arxiv.org/abs/1906.05370)
[Code](https://github.com/WilsonWangTHU/neural_graph_evolution)
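
A minimal sketch of the shared-modular-policy idea from *One Policy to Control Them All* (ICML 2020) above: one network is reused for every joint, acting on that joint's local observation plus a message from its morphological parent, so the same parameters can drive agents of different shapes. The sizes, names, and single root-to-leaf message pass below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): one MLP shared by every joint.
# Each joint sees its local observation concatenated with a message from
# its parent joint; all sizes and the message scheme are assumptions.
import numpy as np

rng = np.random.default_rng(0)
OBS, MSG, HID = 8, 4, 32   # per-joint observation, message, hidden sizes

# A single parameter set, reused for every joint of every morphology.
W1 = rng.normal(0.0, 0.1, (OBS + MSG, HID))
W_act = rng.normal(0.0, 0.1, (HID, 1))    # torque head
W_msg = rng.normal(0.0, 0.1, (HID, MSG))  # outgoing-message head

def joint_policy(obs, msg_in):
    """Shared module: local obs + incoming message -> torque, outgoing message."""
    h = np.tanh(np.concatenate([obs, msg_in]) @ W1)
    return np.tanh(h @ W_act)[0], np.tanh(h @ W_msg)

def act(observations, parents):
    """Apply the shared module over a morphology tree, root to leaves.

    observations: (num_joints, OBS) array; parents[i] is the index of
    joint i's parent, or -1 for the root (parents must precede children).
    """
    torques = np.zeros(len(parents))
    messages = [np.zeros(MSG)] * len(parents)
    for i, p in enumerate(parents):
        msg_in = np.zeros(MSG) if p == -1 else messages[p]
        torques[i], messages[i] = joint_policy(observations[i], msg_in)
    return torques

# The same parameters control a 3-joint chain and a 5-joint chain alike.
print(act(rng.normal(size=(3, OBS)), parents=[-1, 0, 1]))
print(act(rng.normal(size=(5, OBS)), parents=[-1, 0, 1, 2, 3]))
```

The paper itself passes messages both bottom-up and top-down through the morphology; the one-directional pass here is only to keep the sketch short.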
## Motion Learning

#### Human-Object Interaction (HOI)
- [ ] (CVPR 2025) **_InterAct_**: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation.
- [ ] (CVPR 2025) **_InterMimic_**: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions.
[Paper](https://arxiv.org/pdf/2502.20390)
[Project](https://sirui-xu.github.io/InterMimic/)
- [ ] (CVPR 2025) **_PhysHOI_**: Physics-Based Imitation of Dynamic Human-Object Interaction.
[Paper](https://arxiv.org/abs/2312.04393)
[Code](https://github.com/wyhuai/PhysHOI)
[Project](https://wyhuai.github.io/physhoi-page/)
- [ ] (NeurIPS 2024) **_CooHOI_**: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics.
[Paper](https://arxiv.org/abs/2406.14558)
[Code](https://github.com/Winston-Gu/CooHOI)
[Project](https://gao-jiawei.com/Research/CooHOI/)
- [ ] (ICCV 2023) **_InterDiff_**: InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion.
[Paper](http://arxiv.org/abs/2308.16905)
[Code](https://github.com/Sirui-Xu/InterDiff)
[Project](https://sirui-xu.github.io/InterDiff/)

#### Humanoid Motion Learning
- [ ] (ECCV 2024) **_MotionLCM_**: MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model.
[Paper](https://arxiv.org/abs/2404.19759)
[Code](https://github.com/Dai-Wenxun/MotionLCM)
[Project](https://dai-wenxun.github.io/MotionLCM-page/)
- [ ] (ICLR 2023) **_MDM_**: Human Motion Diffusion Model. *(see the sketch after this list)*
[Paper](https://arxiv.org/pdf/2209.14916)
[Project](https://guytevet.github.io/mdm-page/)
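
The motion-diffusion entries above (MotionLCM, MDM) generate motion by iteratively denoising a whole motion tensor. As a rough reference, here is the plain DDPM-style sampling loop such models build on; the untrained `denoiser` placeholder, shapes, and schedule are illustrative assumptions, not any paper's released code.

```python
# Illustrative DDPM-style sampling over a motion tensor of shape
# (frames, per-frame features). `denoiser` stands in for a trained
# network; MDM-style models predict the clean motion x0 directly.
import numpy as np

rng = np.random.default_rng(0)
T, FRAMES, FEATS = 50, 60, 12        # diffusion steps, clip length, features
betas = np.linspace(1e-4, 0.02, T)   # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t):
    """Placeholder for a trained model predicting the clean motion x0
    from the noisy motion x_t (a real model would condition on text, etc.)."""
    return 0.9 * x_t

x = rng.normal(size=(FRAMES, FEATS))  # start from pure Gaussian noise
for t in reversed(range(T)):
    x0_hat = denoiser(x, t)
    if t > 0:
        # Posterior mean/variance of q(x_{t-1} | x_t, x0), standard DDPM algebra.
        coef_x0 = np.sqrt(alpha_bars[t - 1]) * betas[t] / (1 - alpha_bars[t])
        coef_xt = np.sqrt(alphas[t]) * (1 - alpha_bars[t - 1]) / (1 - alpha_bars[t])
        var = betas[t] * (1 - alpha_bars[t - 1]) / (1 - alpha_bars[t])
        x = coef_x0 * x0_hat + coef_xt * x + np.sqrt(var) * rng.normal(size=x.shape)
    else:
        x = x0_hat
print(x.shape)  # (60, 12): one generated motion clip
```

MDM in particular predicts the clean sample at every step rather than the added noise, which is the convention the loop above follows.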
#### ✨ Zhengyi Luo

- [ ] (NeurIPS 2024) **_Omnigrasp_**: Grasping Diverse Objects with Simulated Humanoids.
[Paper](https://arxiv.org/abs/2407.11385)
[Project](https://www.zhengyiluo.com/Omnigrasp)
- [ ] (ICLR 2024) **_PULSE_**: Universal Humanoid Motion Representations for Physics-Based Control.
[Paper](https://arxiv.org/abs/2310.04582v1)
[Code](https://github.com/ZhengyiLuo/PULSE)
[Project](https://zhengyiluo.github.io/PULSE/)
- [ ] (ICCV 2023) **_PHC_**: Perpetual Humanoid Control for Real-time Simulated Avatars.
[Paper](https://arxiv.org/abs/2305.06456)
[Code](https://github.com/ZhengyiLuo/PHC)
[Project](https://www.zhengyiluo.com/PHC)

#### ✨ Xue Bin Peng
- [ ] (SIGGRAPH-Asia 2024) **_MaskedMimic_**: MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting.
[Paper](https://xbpeng.github.io/projects/MaskedMimic/MaskedMimic_2024.pdf)
[Code](https://github.com/NVlabs/ProtoMotions)
[Project](https://xbpeng.github.io/projects/MaskedMimic/index.html)
- [ ] (SIGGRAPH 2024) Interactive Character Control with Auto-Regressive Motion Diffusion Models.
[Paper](https://xbpeng.github.io/projects/AMDM/AMDM_2024.pdf)
[Project](https://xbpeng.github.io/projects/AMDM/index.html)
- [ ] (SIGGRAPH 2024) SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation.
[Paper](https://xbpeng.github.io/projects/SuperPADL/SuperPADL_2024.pdf)
[Project](https://xbpeng.github.io/projects/SuperPADL/index.html)
- [ ] (SIGGRAPH 2024) Flexible Motion In-betweening with Diffusion Models.
[Paper](https://xbpeng.github.io/projects/CondMDI/CondMDI_2024.pdf)
[Project](https://xbpeng.github.io/projects/CondMDI/index.html)
- [ ] (ECCV 2024) Generating Human Interaction Motions in Scenes with Text Control.
[Paper](https://xbpeng.github.io/projects/TeSMo/TeSMo_2024.pdf)
[Project](https://xbpeng.github.io/projects/TeSMo/index.html)
- [ ] (SIGGRAPH 2023) Learning Physically Simulated Tennis Skills from Broadcast Videos.
[Paper](https://xbpeng.github.io/projects/Vid2Player3D/Vid2Player3D_2023.pdf)
[Project](https://xbpeng.github.io/projects/Vid2Player3D/index.html)
- [ ] (SIGGRAPH 2023) Synthesizing Physical Character-Scene Interactions.
[Paper](https://xbpeng.github.io/projects/InterPhys/InterPhys_2023.pdf)
[Project](https://xbpeng.github.io/projects/InterPhys/index.html)
- [ ] (SIGGRAPH 2023) **_CALM_**: CALM: Conditional Adversarial Latent Models for Directable Virtual Characters.
[Paper](https://xbpeng.github.io/projects/CALM/CALM_2023.pdf)
[Project](https://xbpeng.github.io/projects/CALM/index.html)
- [ ] (SIGGRAPH-Asia 2022) **_PADL_**: PADL: Language-Directed Physics-Based Character Control.
[Paper](https://xbpeng.github.io/projects/PADL/PADL_2022.pdf)
[Project](https://xbpeng.github.io/projects/PADL/index.html)
- [ ] (SIGGRAPH 2022) **_ASE_**: ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters.
[Paper](https://xbpeng.github.io/projects/ASE/ASE_2022.pdf)
[Project](https://xbpeng.github.io/projects/ASE/index.html)
- [ ] (SIGGRAPH 2021) **_AMP_**: AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control.
[Paper](https://xbpeng.github.io/projects/AMP/AMP_2021.pdf)
[Project](https://xbpeng.github.io/projects/AMP/index.html)
- [ ] (arXiv 2019) **_AWR_**: Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning. *(see the update sketched after this list)*
[Paper](https://xbpeng.github.io/projects/AWR/AWR_2019.pdf)
[Project](https://xbpeng.github.io/projects/AWR/index.html)
- [ ] (NeurIPS 2019) **_MCP_**: Learning Composable Hierarchical Control with Multiplicative Compositional Policies.
[Paper](https://xbpeng.github.io/projects/MCP/MCP_2019.pdf)
[Project](https://xbpeng.github.io/projects/MCP/index.html)
- [ ] (SIGGRAPH 2018) **_DeepMimic_**: DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills.
[Paper](https://xbpeng.github.io/projects/DeepMimic/DeepMimic_2018.pdf)
[Project](https://xbpeng.github.io/projects/DeepMimic/index.html)
- [ ] (SIGGRAPH 2017) **_DeepLoco_**: DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning.
[Paper](https://xbpeng.github.io/projects/DeepLoco/DeepLoco_2017.pdf)
[Project](https://xbpeng.github.io/projects/DeepLoco/index.html)
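
For reference, the AWR update flagged above takes the following standard two-step form (a sketch from memory; consult the paper for exact details): fit a value baseline $V_\phi$ by regression on returns $\mathcal{R}_{s,a}$ sampled from the replay buffer $\mathcal{D}$, then improve the policy $\pi_\theta$ by supervised regression weighted with the exponentiated advantage, with temperature $\beta$:

```math
V_\phi \leftarrow \arg\min_\phi \; \mathbb{E}_{s,a \sim \mathcal{D}}\big[(\mathcal{R}_{s,a} - V_\phi(s))^2\big],
\qquad
\pi_\theta \leftarrow \arg\max_\theta \; \mathbb{E}_{s,a \sim \mathcal{D}}\Big[\log \pi_\theta(a \mid s)\,\exp\!\Big(\tfrac{1}{\beta}\big(\mathcal{R}_{s,a} - V_\phi(s)\big)\Big)\Big]
```

Because both steps are plain supervised regressions over replay data, the method runs off-policy with very little machinery, which is the "simple and scalable" claim in the title.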
# Notes

This list is mainly for personal use. If you have any questions, please feel free to contact me via email (josh00.lu (at) gmail.com).