Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/foruck/awesome-human-motion
List: awesome-human-motion
character-control human-motion human-motion-analysis human-motion-generation human-motion-synthesis humanoid-control motion-control motion-generation motion-synthesis
Last synced: 20 days ago
JSON representation
- Host: GitHub
- URL: https://github.com/foruck/awesome-human-motion
- Owner: Foruck
- Created: 2023-12-22T14:27:34.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-11-29T15:03:29.000Z (23 days ago)
- Last Synced: 2024-11-29T15:33:43.122Z (23 days ago)
- Topics: character-control, human-motion, human-motion-analysis, human-motion-generation, human-motion-synthesis, humanoid-control, motion-control, motion-generation, motion-synthesis
- Homepage:
- Size: 17.6 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
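The JSON representation linked above can also be fetched programmatically. Below is a minimal sketch, assuming the ecosyste.ms repos API follows its usual `hosts/{host}/repositories/{owner}/{repo}` URL scheme; the exact endpoint is an assumption, not something this page confirms.

```python
# Minimal sketch: fetch this repository's JSON representation from the
# ecosyste.ms API. The endpoint path below is an assumption based on the
# service's usual URL scheme and may need adjusting.
import json
import urllib.request

URL = ("https://repos.ecosyste.ms/api/v1/hosts/GitHub/"
       "repositories/Foruck/awesome-human-motion")  # assumed endpoint

with urllib.request.urlopen(URL) as resp:
    repo = json.load(resp)

# Print a few of the fields mirrored in the metadata block above.
print(repo.get("full_name"), repo.get("stargazers_count"), repo.get("topics"))
```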
Awesome Lists containing this project
README
# Awesome Human Motion
An aggregation of human motion understanding research.
[Motion Generation](#motion-generation) [Motion Editing](#motion-editing) [Motion Stylization](#motion-stylization)
[Human-Object Interaction](#hoi) [Human-Scene Interaction](#hsi) [Human-Human Interaction](#hhi)
[Datasets](#datasets) [Humanoid](#humanoid) [Bio-stuff](#bio)
## Motion Generation
- [AToM](https://atom-motion.github.io/). AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward, Han et al. ArXiv 2024.
- [MVLift](https://lijiaman.github.io/projects/mvlift/). Lifting Motion to the 3D World via 2D Diffusion, Li et al. ArXiv 2024.
- [DisCoRD](https://whwjdqls.github.io/discord.github.io/). DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding, Cho et al. ArXiv 2024.
- [MoTe](https://arxiv.org/abs/2411.19786). MoTe: Learning Motion-Text Diffusion Model for Multiple Generation Tasks, Wu et al. ArXiv 2024.
- [ReinDiffuse](https://reindiffuse.github.io/). ReinDiffuse: Crafting Physically Plausible Motions with Reinforced Diffusion Model, Han et al. WACV 2025.
- [InfiniDreamer](https://arxiv.org/abs/2411.18303). InfiniDreamer: Arbitrarily Long Human Motion Generation via Segment Score Distillation, Zhuo et al. ArXiv 2024.
- [FTMoMamba](https://arxiv.org/abs/2411.17532). FTMoMamba: Motion Generation with Frequency and Text State Space Models, Li et al. ArXiv 2024.
- [Meng et al](https://arxiv.org/abs/2411.16575). Rethinking Diffusion for Text-Driven Human Motion Generation, Meng et al. ArXiv 2024.
- [KinMo](https://andypinxinliu.github.io/KinMo/). KinMo: Kinematic-aware Human Motion Understanding and Generation, Zhang et al. ArXiv 2024.
- [LLaMo](https://arxiv.org/abs/2411.16805). Human Motion Instruction Tuning, Li et al. ArXiv 2024.
- [Morph](https://arxiv.org/abs/2411.14951). Morph: A Motion-free Physics Optimization Framework for Human Motion Generation, Li et al. ArXiv 2024.
- [KMM](https://steve-zeyu-zhang.github.io/KMM). KMM: Key Frame Mask Mamba for Extended Motion Generation, Zhang et al. ArXiv 2024.
- [MotionGPT-2](https://arxiv.org/abs/2410.21747). MotionGPT-2: A General-Purpose Motion-Language Model for Motion Generation and Understanding, Wang et al. ArXiv 2024.
- [Lodge++](https://li-ronghui.github.io/lodgepp). Lodge++: High-quality and Long Dance Generation with Vivid Choreography Patterns, Li et al. ArXiv 2024.
- [MotionCLR](https://arxiv.org/abs/2410.18977). MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms, Chen et al. ArXiv 2024.
- [MotionGlot](https://arxiv.org/abs/2410.16623). MotionGlot: A Multi-Embodied Motion Generation Model, Harithas et al. ArXiv 2024.
- [LEAD](https://arxiv.org/abs/2410.14508). LEAD: Latent Realignment for Human Motion Diffusion, Andreou et al. ArXiv 2024.
- [Leite et al.](https://arxiv.org/abs/2410.08931). Enhancing Motion Variation in Text-to-Motion Models via Pose and Video Conditioned Editing, Leite et al. ArXiv 2024.
- [MotionRL](https://arxiv.org/abs/2410.06513). MotionRL: Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning, Liu et al. ArXiv 2024.
- [UniMuMo](https://hanyangclarence.github.io/unimumo_demo/). UniMuMo: Unified Text, Music and Motion Generation, Yang et al. ArXiv 2024.
- [MotionCraft](https://cure-lab.github.io/MotionCraft/). MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls, Bian et al. ArXiv 2024.
- [MotionLLM](https://lhchen.top/MotionLLM/). MotionLLM: Understanding Human Behaviors from Human Motions and Videos, Chen et al. ArXiv 2024.
- [DART](https://zkf1997.github.io/DART/). DART: A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control, Zhao et al. ArXiv 2024.
- [CLoSD](https://guytevet.github.io/CLoSD-page/). CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control, Tevet et al. ArXiv 2024.
- [Wang et al](https://arxiv.org/abs/2410.03311). Quo Vadis, Motion Generation? From Large Language Models to Large Motion Models, Wang et al. ArXiv 2024.
- [Unimotion](https://coral79.github.io/uni-motion/). Unimotion: Unifying 3D Human Motion Synthesis and Understanding, Li et al. ArXiv 2024.
- [T2M-X](https://arxiv.org/abs/2409.13251). T2M-X: Learning Expressive Text-to-Motion Generation from Partially Annotated Data, Liu et al. ArXiv 2024.
- [MoRAG](https://motion-rag.github.io/). MoRAG – Multi-Fusion Retrieval Augmented Generation for Human Motion, Shashank et al. ArXiv 2024.
- [Mandelli et al](https://arxiv.org/abs/2409.11920). Generation of Complex 3D Human Motion by Temporal and Spatial Composition of Diffusion Models, Mandelli et al. ArXiv 2024.
- [BAD](https://github.com/RohollahHS/BAD). BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation, Hosseyni et al. ArXiv 2024.
- [synNsync](https://von31.github.io/synNsync/). Synergy and Synchrony in Couple Dances, Maluleke et al. ArXiv 2024.
- [Dong et al](https://aclanthology.org/2024.findings-emnlp.584/). Word-Conditioned 3D American Sign Language Motion Generation, Dong et al. EMNLP 2024.
- [Text to blind motion](https://nips.cc/virtual/2024/poster/97700). Text to Blind Motion, Kim et al. NeurIPS D&B 2024.
- [UniMTS](https://github.com/xiyuanzh/UniMTS). UniMTS: Unified Pre-training for Motion Time Series, Zhang et al. NeurIPS 2024.
- [Christopher et al.](https://openreview.net/forum?id=FsdB3I9Y24). Constrained Synthesis with Projected Diffusion Models, Christopher et al. NeurIPS 2024.
- [MoMu-Diffusion](https://momu-diffusion.github.io/). MoMu-Diffusion: On Learning Long-Term Motion-Music Synchronization and Correspondence, You et al. NeurIPS 2024.
- [MoGenTS](https://aigc3d.github.io/mogents/). MoGenTS: Motion Generation based on Spatial-Temporal Joint Modeling, Yuan et al. NeurIPS 2024.
- [M3GPT](https://arxiv.org/abs/2405.16273). M3GPT: An Advanced Multimodal, Multitask Framework for Motion Comprehension and Generation, Luo et al. NeurIPS 2024.
- [Bikov et al](https://openreview.net/forum?id=BTSnh5YdeI). Fitness Aware Human Motion Generation with Fine-Tuning, Bikov et al. NeurIPS Workshop 2024.
- [SynTalker](https://bohongchen.github.io/SynTalker-Page/). Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation, Chen et al. ACM MM 2024.
- [L3EM](https://dl.acm.org/doi/abs/10.1145/3664647.3681487). Towards Emotion-enriched Text-to-Motion Generation via LLM-guided Limb-level Emotion Manipulating, Yu et al. ACM MM 2024.
- [StableMoFusion](https://dl.acm.org/doi/abs/10.1145/3664647.3681657). StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework, Huang et al. ACM MM 2024.
- [SATO](https://dl.acm.org/doi/abs/10.1145/3664647.3681034). SATO: Stable Text-to-Motion Framework, Chen et al. ACM MM 2024.
- [PIDM](https://link.springer.com/chapter/10.1007/978-3-031-72356-8_2). PIDM: Personality-Aware Interaction Diffusion Model for Gesture Generation, Shibasaki et al. ICANN 2024.
- [Macwan et al](https://journals.sagepub.com/doi/full/10.1177/10711813241262026). High-Fidelity Worker Motion Simulation With Generative AI, Macwan et al. HFES 2024.
- [Jin et al.](https://jpthu17.github.io/GuidedMotion-project/). Local Action-Guided Motion Diffusion Model for Text-to-Motion Generation, Jin et al. ECCV 2024.
- [Motion Mamba](https://www.ecva.net/papers/eccv_2024/papers_ECCV/html/100_ECCV_2024_paper.php). Motion Mamba: Efficient and Long Sequence Motion Generation, Zhong et al. ECCV 2024.
- [EMDM](https://frank-zy-dou.github.io/projects/EMDM/index.html). EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Human Motion Generation, Zhou et al. ECCV 2024.
- [CoMo](https://yh2371.github.io/como/). CoMo: Controllable Motion Generation through Language Guided Pose Code Editing, Huang et al. ECCV 2024.
- [CoMusion](https://github.com/jsun57/CoMusion). CoMusion: Towards Consistent Stochastic Human Motion Prediction via Motion Diffusion, Sun et al. ECCV 2024.
- [Shan et al.](https://arxiv.org/abs/2405.18483). Towards Open Domain Text-Driven Synthesis of Multi-Person Motions, Shan et al. ECCV 2024.
- [ParCo](https://github.com/qrzou/ParCo). ParCo: Part-Coordinating Text-to-Motion Synthesis, Zou et al. ECCV 2024.
- [Sampieri et al.](https://arxiv.org/abs/2407.11532). Length-Aware Motion Synthesis via Latent Diffusion, Sampieri et al. ECCV 2024.
- [ChronAccRet](https://github.com/line/ChronAccRet). Chronologically Accurate Retrieval for Temporal Grounding of Motion-Language Models, Fujiwara et al. ECCV 2024.
- [MHC](https://idigitopia.github.io/projects/mhc/). Generating Physically Realistic and Directable Human Motions from Multi-Modal Inputs, Liu et al. ECCV 2024.
- [ProMotion](https://github.com/moonsliu/Pro-Motion). Plan, Posture and Go: Towards Open-vocabulary Text-to-Motion Generation, Liu et al. ECCV 2024.
- [FreeMotion](https://arxiv.org/abs/2406.10740). FreeMotion: MoCap-Free Human Motion Synthesis with Multimodal Large Language Models, Zhang et al. ECCV 2024.
- [Text Motion Translator](https://eccv.ecva.net/virtual/2024/poster/266). Text Motion Translator: A Bi-Directional Model for Enhanced 3D Human Motion Generation from Open-Vocabulary Descriptions, Qian et al. ECCV 2024.
- [FreeMotion](https://vankouf.github.io/FreeMotion/). FreeMotion: A Unified Framework for Number-free Text-to-Motion Synthesis, Fan et al. ECCV 2024.
- [Kinematic Phrases](https://foruck.github.io/KP/). Bridging the Gap between Human Motion and Action Semantics via Kinematic Phrases, Liu et al. ECCV 2024.
- [MotionChain](https://arxiv.org/abs/2404.01700). MotionChain: Conversational Motion Controllers via Multimodal Prompts, Jiang et al. ECCV 2024.
- [SMooDi](https://neu-vi.github.io/SMooDi/). SMooDi: Stylized Motion Diffusion Model, Zhong et al. ECCV 2024.
- [BAMM](https://exitudio.github.io/BAMM-page/). BAMM: Bidirectional Autoregressive Motion Model, Pinyoanuntapong et al. ECCV 2024.
- [MotionLCM](https://dai-wenxun.github.io/MotionLCM-page/). MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model, Dai et al. ECCV 2024.
- [Ren et al.](https://arxiv.org/abs/2312.10993). Realistic Human Motion Generation with Cross-Diffusion Models, Ren et al. ECCV 2024.
- [M2D2M](https://arxiv.org/abs/2407.14502). M2D2M: Multi-Motion Generation from Text with Discrete Diffusion Models, Chi et al. ECCV 2024.
- [Large Motion Model](https://mingyuan-zhang.github.io/projects/LMM.html). Large Motion Model for Unified Multi-Modal Motion Generation, Zhang et al. ECCV 2024.
- [TesMo](https://research.nvidia.com/labs/toronto-ai/tesmo/). Generating Human Interaction Motions in Scenes with Text Control, Yi et al. ECCV 2024.
- [TLControl](https://tlcontrol.weilinwl.com/). TLControl: Trajectory and Language Control for Human Motion Synthesis, Wan et al. ECCV 2024.
- [ExpGest](https://ieeexplore.ieee.org/abstract/document/10687922). ExpGest: Expressive Speaker Generation Using Diffusion Model and Hybrid Audio-Text Guidance, Cheng et al. ICME 2024.
- [Chen et al](https://ieeexplore.ieee.org/abstract/document/10645445). Anatomically-Informed Vector Quantization Variational Auto-Encoder for Text-to-Motion Generation, Chen et al. ICME Workshop 2024.
- [HumanTOMATO](https://github.com/LinghaoChan/HumanTOMATO). HumanTOMATO: Text-aligned Whole-body Motion Generation, Lu et al. ICML 2024.
- [GPHLVM](https://sites.google.com/view/gphlvm/). Bringing Motion Taxonomies to Continuous Domains via GPLVM on Hyperbolic Manifolds, Jaquier et al. ICML 2024.
- [CondMDI](https://setarehc.github.io/CondMDI/). Flexible Motion In-betweening with Diffusion Models, Cohan et al. SIGGRAPH 2024.
- [LGTM](https://vcc.tech/research/2024/LGTM). LGTM: Local-to-Global Text-Driven Human Motion Diffusion Models, Sun et al. SIGGRAPH 2024.
- [TEDi](https://threedle.github.io/TEDi/). TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis, Zhang et al. SIGGRAPH 2024.
- [A-MDM](https://github.com/Yi-Shi94/AMDM). Interactive Character Control with Auto-Regressive Motion Diffusion Models, Shi et al. SIGGRAPH 2024.
- [Starke et al.](https://dl.acm.org/doi/10.1145/3658209). Categorical Codebook Matching for Embodied Character Controllers, Starke et al. SIGGRAPH 2024.
- [SuperPADL](https://arxiv.org/abs/2407.10481). SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation, Juravsky et al. SIGGRAPH 2024.
- [ProgMoGen](https://hanchaoliu.github.io/Prog-MoGen/). Programmable Motion Generation for Open-set Motion Control Tasks, Liu et al. CVPR 2024.
- [PACER+](https://github.com/IDC-Flash/PacerPlus). PACER+: On-Demand Pedestrian Animation Controller in Driving Scenarios, Wang et al. CVPR 2024.
- [AMUSE](https://amuse.is.tue.mpg.de/). Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion, Chhatre et al. CVPR 2024.
- [Liu et al.](https://feifeifeiliu.github.io/probtalk/). Towards Variable and Coordinated Holistic Co-Speech Motion Generation, Liu et al. CVPR 2024.
- [MAS](https://guytevet.github.io/mas-page/). MAS: Multi-view Ancestral Sampling for 3D motion generation using 2D diffusion, Kapon et al. CVPR 2024.
- [WANDR](https://wandr.is.tue.mpg.de/). WANDR: Intention-guided Human Motion Generation, Diomataris et al. CVPR 2024.
- [MoMask](https://ericguo5513.github.io/momask/). MoMask: Generative Masked Modeling of 3D Human Motions, Guo et al. CVPR 2024.
- [ChatPose](https://yfeng95.github.io/ChatPose/). ChatPose: Chatting about 3D Human Pose, Feng et al. CVPR 2024.
- [AvatarGPT](https://zixiangzhou916.github.io/AvatarGPT/). AvatarGPT: All-in-One Framework for Motion Understanding, Planning, Generation and Beyond, Zhou et al. CVPR 2024.
- [MMM](https://exitudio.github.io/MMM-page/). MMM: Generative Masked Motion Model, Pinyoanuntapong et al. CVPR 2024.
- [AAMDM](https://openaccess.thecvf.com/content/CVPR2024/papers/Li_AAMDM_Accelerated_Auto-regressive_Motion_Diffusion_Model_CVPR_2024_paper.pdf). AAMDM: Accelerated Auto-regressive Motion Diffusion Model, Li et al. CVPR 2024.
- [OMG](https://tr3e.github.io/omg-page/). OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers, Liang et al. CVPR 2024.
- [FlowMDM](https://barquerogerman.github.io/FlowMDM/). FlowMDM: Seamless Human Motion Composition with Blended Positional Encodings, Barquero et al. CVPR 2024.
- [Digital Life Project](https://digital-life-project.com/). Digital Life Project: Autonomous 3D Characters with Social Intelligence, Cai et al. CVPR 2024.
- [STMC](https://xbpeng.github.io/projects/STMC/index.html). Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation, Petrovich et al. CVPR Workshop 2024.
- [InstructMotion](https://github.com/THU-LYJ-Lab/InstructMotion). Exploring Text-to-Motion Generation with Human Preference, Sheng et al. CVPR Workshop 2024.
- [SinMDM](https://sinmdm.github.io/SinMDM-page/). Single Motion Diffusion, Raab et al. ICLR 2024.
- [NeRM](https://openreview.net/forum?id=sOJriBlOFd&noteId=KaJUBoveeo). NeRM: Learning Neural Representations for High-Framerate Human Motion Synthesis, Wei et al. ICLR 2024.
- [PriorMDM](https://priormdm.github.io/priorMDM-page/). PriorMDM: Human Motion Diffusion as a Generative Prior, Shafir et al. ICLR 2024.
- [OmniControl](https://neu-vi.github.io/omnicontrol/). OmniControl: Control Any Joint at Any Time for Human Motion Generation, Xie et al. ICLR 2024.
- [Adiya et al.](https://openreview.net/forum?id=yQDFsuG9HP). Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation, Adiya et al. ICLR 2024.
- [Duolando](https://lisiyao21.github.io/projects/Duolando/). Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment, Li et al. ICLR 2024.
- [HuTuMotion](https://arxiv.org/abs/2312.12227). HuTuMotion: Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback, Han et al. AAAI 2024.
- [AMD](https://arxiv.org/abs/2312.12763). AMD: Anatomical Motion Diffusion with Interpretable Motion Decomposition and Fusion, Jing et al. AAAI 2024.
- [MotionMix](https://nhathoang2002.github.io/MotionMix-page/). MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation, Hoang et al. AAAI 2024.
- [B2A-HDM](https://github.com/xiezhy6/B2A-HDM). Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model, Xie et al. AAAI 2024.
- [GUESS](https://ieeexplore.ieee.org/abstract/document/10399852). GUESS: GradUally Enriching SyntheSis for Text-Driven Human Motion Generation, Gao et al. TPAMI 2024.
- [Xie et al.](https://arxiv.org/pdf/2312.12917). Sign Language Production with Latent Motion Transformer, Xie et al. WACV 2024.
- [GraphMotion](https://github.com/jpthu17/GraphMotion). Act As You Wish: Fine-grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs, Jin et al. NeurIPS 2023.
- [MotionGPT](https://motion-gpt.github.io/). MotionGPT: Human Motion as Foreign Language, Jiang et al. NeurIPS 2023.
- [FineMoGen](https://mingyuan-zhang.github.io/projects/FineMoGen.html). FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing, Zhang et al. NeurIPS 2023.
- [InsActor](https://jiawei-ren.github.io/projects/insactor/). InsActor: Instruction-driven Physics-based Characters, Ren et al. NeurIPS 2023.
- [AttT2M](https://github.com/ZcyMonkey/AttT2M). AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism, Zhong et al. ICCV 2023.
- [TMR](https://mathis.petrovich.fr/tmr). TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis, Petrovich et al. ICCV 2023.
- [MAA](https://azadis.github.io/make-an-animation). Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation, Azadi et al. ICCV 2023.
- [PhysDiff](https://nvlabs.github.io/PhysDiff). PhysDiff: Physics-Guided Human Motion Diffusion Model, Yuan et al. ICCV 2023.
- [ReMoDiffusion](https://mingyuan-zhang.github.io/projects/ReMoDiffuse.html). ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model, Zhang et al. ICCV 2023.
- [BelFusion](https://barquerogerman.github.io/BeLFusion/). BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction, Barquero et al. ICCV 2023.
- [GMD](https://korrawe.github.io/gmd-project/). GMD: Guided Motion Diffusion for Controllable Human Motion Synthesis, Karunratanakul et al. ICCV 2023.
- [HMD-NeMo](https://openaccess.thecvf.com/content/ICCV2023/html/Aliakbarian_HMD-NeMo_Online_3D_Avatar_Motion_Generation_From_Sparse_Observations_ICCV_2023_paper.html). HMD-NeMo: Online 3D Avatar Motion Generation From Sparse Observations, Aliakbarian et al. ICCV 2023.
- [SINC](https://sinc.is.tue.mpg.de/). SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation, Athanasiou et al. ICCV 2023.
- [Kong et al.](https://openaccess.thecvf.com/content/ICCV2023/html/Kong_Priority-Centric_Human_Motion_Generation_in_Discrete_Latent_Space_ICCV_2023_paper.html). Priority-Centric Human Motion Generation in Discrete Latent Space, Kong et al. ICCV 2023.
- [FgT2M](https://openaccess.thecvf.com/content/ICCV2023/html/Wang_Fg-T2M_Fine-Grained_Text-Driven_Human_Motion_Generation_via_Diffusion_Model_ICCV_2023_paper.html). Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model, Wang et al. ICCV 2023.
- [EMS](https://openaccess.thecvf.com/content/ICCV2023/html/Qian_Breaking_The_Limits_of_Text-conditioned_3D_Motion_Synthesis_with_Elaborative_ICCV_2023_paper.html). Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions, Qian et al. ICCV 2023.
- [GenMM](https://weiyuli.xyz/GenMM/). Example-based Motion Synthesis via Generative Motion Matching, Li et al. SIGGRAPH 2023.
- [GestureDiffuCLIP](https://pku-mocca.github.io/GestureDiffuCLIP-Page/). GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents, Ao et al. SIGGRAPH 2023.
- [BodyFormer](https://i.cs.hku.hk/~taku/kunkun2023.pdf). BodyFormer: Semantics-guided 3D Body Gesture Synthesis with Transformer, Pang et al. SIGGRAPH 2023.
- [Alexanderson et al.](https://www.speech.kth.se/research/listen-denoise-action/). Listen, denoise, action! Audio-driven motion synthesis with diffusion models, Alexanderson et al. SIGGRAPH 2023.
- [AGroL](https://dulucas.github.io/agrol/). Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model, Du et al. CVPR 2023.
- [TALKSHOW](https://talkshow.is.tue.mpg.de/). Generating Holistic 3D Human Motion from Speech, Yi et al. CVPR 2023.
- [T2M-GPT](https://mael-zys.github.io/T2M-GPT/). T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations, Zhang et al. CVPR 2023.
- [UDE](https://zixiangzhou916.github.io/UDE/). UDE: A Unified Driving Engine for Human Motion Generation, Zhou et al. CVPR 2023.
- [OOHMG](https://github.com/junfanlin/oohmg). Being Comes from Not-being: Open-vocabulary Text-to-Motion Generation with Wordless Training, Lin et al. CVPR 2023.
- [EDGE](https://edge-dance.github.io/). EDGE: Editable Dance Generation From Music, Tseng et al. CVPR 2023.
- [MLD](https://chenxin.tech/mld). Executing your Commands via Motion Diffusion in Latent Space, Chen et al. CVPR 2023.
- [MoDi](https://sigal-raab.github.io/MoDi). MoDi: Unconditional Motion Synthesis from Diverse Data, Raab et al. CVPR 2023.
- [MoFusion](https://vcai.mpi-inf.mpg.de/projects/MoFusion/). MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis, Dabral et al. CVPR 2023.
- [Mo et al.](https://arxiv.org/abs/2303.14926). Continuous Intermediate Token Learning with Implicit Motion Manifold for Keyframe Based Motion Interpolation, Mo et al. CVPR 2023.
- [HMDM](https://guytevet.github.io/mdm-page/). MDM: Human Motion Diffusion Model, Tevet et al. ICLR 2023.
- [MotionDiffuse](https://mingyuan-zhang.github.io/projects/MotionDiffuse.html). MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model, Zhang et al. TPAMI 2023.
- [Bailando++](https://www.mmlab-ntu.com/project/bailando/). Bailando++: 3D Dance GPT with Choreographic Memory, Li et al. TPAMI 2023.
- [UDE-2](https://zixiangzhou916.github.io/UDE-2/). A Unified Framework for Multimodal, Multi-Part Human Motion Synthesis, Zhou et al. ArXiv 2023.
- [Motion Script](https://pjyazdian.github.io/MotionScript/). MotionScript: Natural Language Descriptions for Expressive 3D Human Motions, Yazdian et al. ArXiv 2023.
- [NeMF](https://github.com/c-he/NeMF). NeMF: Neural Motion Fields for Kinematic Animation, He et al. NeurIPS 2022.
- [PADL](https://github.com/nv-tlabs/PADL). PADL: Language-Directed Physics-Based Character Control, Juravsky et al. SIGGRAPH Asia 2022.
- [Rhythmic Gesticulator](https://pku-mocca.github.io/Rhythmic-Gesticulator-Page/). Rhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings, Ao et al. SIGGRAPH Asia 2022.
- [TEACH](https://teach.is.tue.mpg.de/). TEACH: Temporal Action Composition for 3D Human, Athanasiou et al. 3DV 2022.
- [Implicit Motion](https://github.com/PACerv/ImplicitMotion). Implicit Neural Representations for Variable Length Human Motion Generation, Cervantes et al. ECCV 2022.
- [Zhong et al.](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136810707.pdf). Learning Uncoupled-Modulation CVAE for 3D Action-Conditioned Human Motion Synthesis, Zhong et al. ECCV 2022.
- [MotionCLIP](https://guytevet.github.io/motionclip-page/). MotionCLIP: Exposing Human Motion Generation to CLIP Space, Tevet et al. ECCV 2022.
- [PoseGPT](https://europe.naverlabs.com/research/computer-vision/posegpt). PoseGPT: Quantizing human motion for large scale generative modeling, Lucas et al. ECCV 2022.
- [TEMOS](https://mathis.petrovich.fr/temos/). TEMOS: Generating diverse human motions from textual descriptions, Petrovich et al. ECCV 2022.
- [TM2T](https://ericguo5513.github.io/TM2T/). TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts, Guo et al. ECCV 2022.
- [AvatarCLIP](https://hongfz16.github.io/projects/AvatarCLIP.html). AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars, Hong et al. SIGGRAPH 2022.
- [DeepPhase](https://dl.acm.org/doi/10.1145/3528223.3530178). Deepphase: Periodic autoencoders for learning motion phase manifolds, Starke et al. SIGGRAPH 2022.
- [Guo et al.](https://ericguo5513.github.io/text-to-motion). Generating Diverse and Natural 3D Human Motions from Text, Guo et al. CVPR 2022.
- [Bailando](https://www.mmlab-ntu.com/project/bailando/). Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory, Li et al. CVPR 2022.
- [ACTOR](https://mathis.petrovich.fr/actor/index.html). Action-Conditioned 3D Human Motion Synthesis with Transformer VAE, Petrovich et al. ICCV 2021.
- [AIST++](https://google.github.io/aichoreographer/). AI Choreographer: Music Conditioned 3D Dance Generation with AIST++, Li et al. ICCV 2021.
- [Starke et al.](https://dl.acm.org/doi/10.1145/3450626.3459881). Neural animation layering for synthesizing martial arts movements, Starke et al. SIGGRAPH 2021.
- [DLow](https://www.ye-yuan.com/dlow). DLow: Diversifying Latent Flows for Diverse Human Motion Prediction, Yuan et al. ECCV 2020.
- [Starke et al.](https://www.ipab.inf.ed.ac.uk/cgvu/basketball.pdf). Local motion phases for learning multi-contact character movements, Starke et al. SIGGRAPH 2020.
## Motion Editing
- [MotionFix](https://motionfix.is.tue.mpg.de/). MotionFix: Text-Driven 3D Human Motion Editing, Athanasiou et al. SIGGRAPH Asia 2024.
- [CigTime](https://btekin.github.io/). CigTime: Corrective Instruction Generation Through Inverse Motion Editing, Fang et al. NeurIPS 2024.
- [Iterative Motion Editing](https://purvigoel.github.io/iterative-motion-editing/). Iterative Motion Editing with Natural Language, Goel et al. SIGGRAPH 2024.
- [DNO](https://korrawe.github.io/dno-project/). DNO: Optimizing Diffusion Noise Can Serve As Universal Motion Priors, Karunratanakul et al. CVPR 2024.
## Motion Stylization
- [HUMOS](https://otaheri.github.io/publication/2024_humos/). HUMOS: Human Motion Model Conditioned on Body Shape, Tripathi et al. ECCV 2024.
- [SMEAR](https://dl.acm.org/doi/10.1145/3641519.3657457). SMEAR: Stylized Motion Exaggeration with ARt-direction, Basset et al. SIGGRAPH 2024.
- [MCM-LDM](https://xingliangjin.github.io/MCM-LDM-Web/). Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model, Song et al. CVPR 2024.
- [MoST](https://boeun-kim.github.io/page-MoST/). MoST: Motion Style Transformer between Diverse Action Contents, Kim et al. CVPR 2024.
- [GenMoStyle](https://yxmu.foo/GenMoStyle/). Generative Human Motion Stylization in Latent Space, Guo et al. ICLR 2024.
## Human-Object Interaction
- [OOD-HOI](https://nickk0212.github.io/ood-hoi/). OOD-HOI: Text-Driven 3D Whole-Body Human-Object Interactions Generation Beyond Training Domains, Zhang et al. ArXiv 2024.
- [COLLAGE](https://arxiv.org/abs/2409.20502). COLLAGE: Collaborative Human-Agent Interaction Generation using Hierarchical Latent Diffusion and Language Models, Daiya et al. ArXiv 2024.
- [SMGDiff](https://arxiv.org/abs/2411.16216). SMGDiff: Soccer Motion Generation using diffusion probabilistic models, Yang et al. ArXiv 2024.
- [SkillMimic](https://ingrid789.github.io/SkillMimic/). SkillMimic: Learning Reusable Basketball Skills from Demonstrations, Wang et al. ArXiv 2024.
- [CORE4D](https://core4d.github.io/). CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement, Zhang et al. ArXiv 2024.
- [Wu et al](https://hoifhli.github.io/). Human-Object Interaction from Human-Level Instructions, Wu et al. ArXiv 2024.
- [GRIP](https://grip.is.tue.mpg.de). GRIP: Generating Interaction Poses Using Spatial Cues and Latent Consistency, Taheri et al. 3DV 2024.
- [HumanVLA](https://arxiv.org/abs/2406.19972). HumanVLA: Towards Vision-Language Directed Object Rearrangement by Physical Humanoid, Xu et al. NeurIPS 2024.
- [OmniGrasp](https://www.zhengyiluo.com/Omnigrasp-Site/). Grasping Diverse Objects with Simulated Humanoids, Luo et al. NeurIPS 2024.
- [EgoChoir](https://yyvhang.github.io/EgoChoir/). EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views, Yang et al. NeurIPS 2024.
- [CooHOI](https://gao-jiawei.com/Research/CooHOI/). CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics, Gao et al. NeurIPS 2024.
- [InterDreamer](https://arxiv.org/abs/2403.19652). InterDreamer: Zero-Shot Text to 3D Dynamic Human-Object Interaction, Xu et al. NeurIPS 2024.
- [InterFusion](https://sisidai.github.io/InterFusion/). InterFusion: Text-Driven Generation of 3D Human-Object Interaction, Dai et al. ECCV 2024.
- [CHOIS](https://lijiaman.github.io/projects/chois/). Controllable Human-Object Interaction Synthesis, Li et al. ECCV 2024.
- [F-HOI](https://f-hoi.github.io/). F-HOI: Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions, Yang et al. ECCV 2024.
- [HIMO](https://lvxintao.github.io/himo/). HIMO: A New Benchmark for Full-Body Human Interacting with Multiple Objects, Lv et al. ECCV 2024.
- [PhysicsPingPong](https://jiashunwang.github.io/PhysicsPingPong/). Strategy and Skill Learning for Physics-based Table Tennis Animation, Wang et al. SIGGRAPH 2024.
- [NIFTY](https://nileshkulkarni.github.io/nifty/). NIFTY: Neural Object Interaction Fields for Guided Human Motion Synthesis, Kulkarni et al. CVPR 2024.
- [HOI Animator](https://zxylinkstart.github.io/HOIAnimator-Web/). HOIAnimator: Generating Text-prompt Human-object Animations using Novel Perceptive Diffusion Models, Song et al. CVPR 2024.
- [CG-HOI](https://cg-hoi.christian-diller.de/#main). CG-HOI: Contact-Guided 3D Human-Object Interaction Generation, Diller et al. CVPR 2024.
- [InterCap](https://intercap.is.tue.mpg.de/). InterCap: Joint Markerless 3D Tracking of Humans and Objects in Interaction, Huang et al. IJCV 2024.
- [FORCE](https://arxiv.org/abs/2403.11237). FORCE: Dataset and Method for Intuitive Physics Guided Human-object Interaction, Zhang et al. ArXiv 2024.
- [OMOMO](https://github.com/lijiaman/omomo_release). Object Motion Guided Human Motion Synthesis, Li et al. SIGGRAPH Asia 2023.
- [CHAIRS](https://jnnan.github.io/project/chairs/). Full-Body Articulated Human-Object Interaction, Jiang et al. ICCV 2023.
- [HGHOI](https://zju3dv.github.io/hghoi). Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models, Pi et al. ICCV 2023.
- [InterDiff](https://sirui-xu.github.io/InterDiff/). InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion, Xu et al. ICCV 2023.
- [Object Pop Up](https://virtualhumans.mpi-inf.mpg.de/object_popup/). Object pop-up: Can we infer 3D objects and their poses from human interactions alone?, Petrov et al. CVPR 2023.
- [ARCTIC](https://arctic.is.tue.mpg.de/). A Dataset for Dexterous Bimanual Hand-Object Manipulation, Fan et al. CVPR 2023.
- [TOCH](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136630001.pdf). TOCH: Spatio-Temporal Object-to-Hand Correspondence for Motion Refinement, Zhou et al. ECCV 2022.
- [COUCH](https://virtualhumans.mpi-inf.mpg.de/couch/). COUCH: Towards Controllable Human-Chair Interactions, Zhang et al. ECCV 2022.
- [SAGA](https://jiahaoplus.github.io/SAGA/saga.html). SAGA: Stochastic Whole-Body Grasping with Contact, Wu et al. ECCV 2022.
- [GOAL](https://goal.is.tue.mpg.de/). GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping, Taheri et al. CVPR 2022.
- [GRAB](https://grab.is.tue.mpg.de/). GRAB: A Dataset of Whole-Body Human Grasping of Objects, Taheri et al. ECCV 2020.
## Human-Scene Interaction
- [SIMS](https://arxiv.org/abs/2411.19921). SIMS: Simulating Human-Scene Interactions with Real World Script Planning, Wang et al. ArXiv 2024.
- [SAST](https://github.com/felixbmuller/SAST). Massively Multi-Person 3D Human Motion Forecasting with Scene Context, Mueller et al. ArXiv 2024.
- [DiMoP3D](https://sites.google.com/view/dimop3d). Harmonizing Stochasticity and Determinism: Scene-responsive Diverse Human Motion Prediction, Lou et al. NeurIPS 2024.
- [Liu et al.](https://arxiv.org/html/2312.02700v2). Revisit Human-Scene Interaction via Space Occupancy, Liu et al. ECCV 2024.
- [TesMo](https://research.nvidia.com/labs/toronto-ai/tesmo/). Generating Human Interaction Motions in Scenes with Text Control, Yi et al. ECCV 2024.
- [Afford-Motion](https://afford-motion.github.io/). Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance, Wang et al. CVPR 2024.
- [GenZI](https://craigleili.github.io/projects/genzi/). GenZI: Zero-Shot 3D Human-Scene Interaction Generation, Li et al. CVPR 2024.
- [Cen et al.](https://zju3dv.github.io/text_scene_motion/). Generating Human Motion in 3D Scenes from Text Descriptions, Cen et al. CVPR 2024.
- [TRUMANS](https://jnnan.github.io/trumans/). Scaling Up Dynamic Human-Scene Interaction Modeling, Jiang et al. CVPR 2024.
- [UniHSI](https://xizaoqu.github.io/unihsi/). UniHSI: Unified Human-Scene Interaction via Prompted Chain-of-Contacts, Xiao et al. ICLR 2024.
- [DIMOS](https://github.com/zkf1997/DIMOS). DIMOS: Synthesizing Diverse Human Motions in 3D Indoor Scenes, Zhao et al. ICCV 2023.
- [LAMA](https://jiyewise.github.io/projects/LAMA/). Locomotion-Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments, Lee et al. ICCV 2023.
- [Narrator](http://cic.tju.edu.cn/faculty/likun/projects/Narrator). Narrator: Towards Natural Control of Human-Scene Interaction Generation via Relationship Reasoning, Xuan et al. ICCV 2023.
- [CIMI4D](http://www.lidarhumanmotion.net/cimi4d). CIMI4D: A Large Multimodal Climbing Motion Dataset under Human-scene Interactions, Yan et al. CVPR 2023.
- [Scene-Ego](https://people.mpi-inf.mpg.de/~jianwang/projects/sceneego/). Scene-aware Egocentric 3D Human Pose Estimation, Wang et al. CVPR 2023.
- [SLOPER4D](http://www.lidarhumanmotion.net/sloper4d). SLOPER4D: A Scene-Aware Dataset for Global 4D Human Pose Estimation in Urban Environments, Dai et al. CVPR 2023.
- [CIRCLE](https://stanford-tml.github.io/circle_dataset/). CIRCLE: Capture in Rich Contextual Environments, Araujo et al. CVPR 2023.
- [SceneDiffuser](https://scenediffuser.github.io/). Diffusion-based Generation, Optimization, and Planning in 3D Scenes, Huang et al. CVPR 2023.
- [PMP](https://github.com/jinseokbae/pmp). PMP: Learning to Physically Interact with Environments using Part-wise Motion Priors, Bae et al. SIGGRAPH 2023.
- [QuestEnvSim](https://dl.acm.org/doi/10.1145/3588432.3591504). QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors, Lee et al. SIGGRAPH 2023.
- [Hassan et al.](https://research.nvidia.com/publication/2023-08_synthesizing-physical-character-scene-interactions). Synthesizing Physical Character-Scene Interactions, Hassan et al. SIGGRAPH 2023.
- [Mao et al.](https://github.com/wei-mao-2019/ContAwareMotionPred). Contact-aware Human Motion Forecasting, Mao et al. NeurIPS 2022.
- [HUMANISE](https://github.com/Silverster98/HUMANISE). HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes, Wang et al. NeurIPS 2022.
- [EmbodiedPose](https://github.com/ZhengyiLuo/EmbodiedPose). Embodied Scene-aware Human Pose Estimation, Luo et al. NeurIPS 2022.
- [GIMO](https://github.com/y-zheng18/GIMO). GIMO: Gaze-Informed Human Motion Prediction in Context, Zheng et al. ECCV 2022.
- [Wang et al.](https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Towards_Diverse_and_Natural_Scene-Aware_3D_Human_Motion_Synthesis_CVPR_2022_paper.pdf). Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis, Wang et al. CVPR 2022.
- [GAMMA](https://yz-cnsdqz.github.io/eigenmotion/GAMMA/). The Wanderings of Odysseus in 3D Scenes, Zhang et al. CVPR 2022.
- [SAMP](https://samp.is.tue.mpg.de/). Stochastic Scene-Aware Motion Prediction, Hassan et al. ICCV 2021.
- [PLACE](https://sanweiliti.github.io/PLACE/PLACE.html). PLACE: Proximity Learning of Articulation and Contact in 3D Environments, Zhang et al. 3DV 2020.
- [Starke et al.](https://www.ipab.inf.ed.ac.uk/cgvu/basketball.pdf). Local motion phases for learning multi-contact character movements, Starke et al. SIGGRAPH 2020.
- [PSI](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhang_Generating_3D_People_in_Scenes_Without_People_CVPR_2020_paper.pdf). Generating 3D People in Scenes without People, Zhang et al. CVPR 2020.
- [NSM](https://www.ipab.inf.ed.ac.uk/cgvu/nsm.pdf). Neural State Machine for Character-Scene Interactions, Starke et al. SIGGRAPH Asia 2019.
- [PROX](https://prox.is.tue.mpg.de/). Resolving 3D Human Pose Ambiguities with 3D Scene Constraints, Hassan et al. ICCV 2019.
## Human-Human Interaction
- [InterMask](https://gohar-malik.github.io/intermask). InterMask: 3D Human Interaction Generation via Collaborative Masked Modelling, Javed et al. ArXiv 2024.
- [COLLAGE](https://arxiv.org/abs/2409.20502). COLLAGE: Collaborative Human-Agent Interaction Generation using Hierarchical Latent Diffusion and Language Models, Daiya et al. ArXiv 2024.
- [InterControl](https://github.com/zhenzhiwang/intercontrol). InterControl: Generate Human Motion Interactions by Controlling Every Joint, Wang et al. NeurIPS 2024.
- [Shan et al.](https://arxiv.org/abs/2405.18483). Towards Open Domain Text-Driven Synthesis of Multi-Person Motions, Shan et al. ECCV 2024.
- [ReMoS](https://vcai.mpi-inf.mpg.de/projects/remos/). ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions, Ghosh et al. ECCV 2024.
- [Inter-X](https://liangxuy.github.io/inter-x/). Inter-X: Towards Versatile Human-Human Interaction Analysis, Xu et al. CVPR 2024.
- [ReGenNet](https://github.com/liangxuy/ReGenNet). ReGenNet: Towards Human Action-Reaction Synthesis, Xu et al. CVPR 2024.
- [Fang et al.](https://openaccess.thecvf.com/content/CVPR2024/papers/Fang_Capturing_Closely_Interacted_Two-Person_Motions_with_Reaction_Priors_CVPR_2024_paper.pdf). Capturing Closely Interacted Two-Person Motions with Reaction Priors, Fang et al. CVPR 2024.
- [in2IN](https://openaccess.thecvf.com/content/CVPR2024W/HuMoGen/html/Ruiz-Ponce_in2IN_Leveraging_Individual_Information_to_Generate_Human_INteractions_CVPRW_2024_paper.html). in2IN: Leveraging Individual Information to Generate Human INteractions, Ruiz-Ponce et al. CVPR Workshop 2024.
- [InterGen](https://tr3e.github.io/intergen-page/). InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions, Liang et al. IJCV 2024.
- [ActFormer](https://liangxuy.github.io/actformer/). ActFormer: A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation, Xu et al. ICCV 2023.
- [Tanaka et al.](https://github.com/line/Human-Interaction-Generation). Role-aware Interaction Generation from Textual Description, Tanaka et al. ICCV 2023.
- [Hi4D](https://yifeiyin04.github.io/Hi4D/). Hi4D: 4D Instance Segmentation of Close Human Interaction, Yin et al. CVPR 2023.
## Datasets & Benchmarks
- [AToM](https://atom-motion.github.io/). AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward, Han et al. ArXiv 2024.
- [Evans et al](https://www.nature.com/articles/s41597-024-04077-3?fromPaywallRec=false). Synchronized Video, Motion Capture and Force Plate Dataset for Validating Markerless Human Movement Analysis, Evans et al. Scientific Data 2024.
- [MotionCritic](https://motioncritic.github.io/). Aligning Human Motion Generation with Human Perceptions, Wang et al. ArXiv 2024.
- [EMHI](https://arxiv.org/abs/2408.17168). EMHI: A Multimodal Egocentric Human Motion Dataset with HMD and Body-Worn IMUs, Fan et al. ArXiv 2024.
- [EgoSim](https://siplab.org/projects/EgoSim). EgoSim: An Egocentric Multi-view Simulator for Body-worn Cameras during Human Motion, Hollidt et al. NeurIPS D&B 2024.
- [synNsync](https://von31.github.io/synNsync/). Synergy and Synchrony in Couple Dances, Manukele et al. ArXiv 2024.
- [Muscles in Time](https://simplexsigil.github.io/mint). Muscles in Time: Learning to Understand Human Motion by Simulating Muscle Activations, Schneider et al. NeurIPS D&B 2024.
- [Text to blind motion](https://blindways.github.io/). Text to Blind Motion, Kim et al. NeurIPS D&B 2024.
- [MotionBank](https://github.com/liangxuy/MotionBank). MotionBank: A Large-scale Video Motion Benchmark with Disentangled Rule-based Annotations, Xu et al. ArXiv 2024.
- [CORE4D](https://core4d.github.io/). CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement, Zhang et al. ArXiv 2024.
- [CLaM](https://dl.acm.org/doi/abs/10.1145/3664647.3685523). CLaM: An Open-Source Library for Performance Evaluation of Text-driven Human Motion Generation, Chen et al. ACM MM 2024.
- [AddBiomechanics](https://addbiomechanics.org/). AddBiomechanics Dataset: Capturing the Physics of Human Motion at Scale, Werling et al. ECCV 2024.
- [LiveHPS++](https://4dvlab.github.io/project_page/LiveHPS2.html). LiveHPS++: Robust and Coherent Motion Capture in Dynamic Free Environment, Ren et al. ECCV 2024.
- [SignAvatars](https://signavatars.github.io/). SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark, Yu et al. ECCV 2024.
- [Nymeria](https://www.projectaria.com/datasets/nymeria). Nymeria: A massive collection of multimodal egocentric daily motion in the wild, Ma et al. ECCV 2024.
- [Inter-X](https://liangxuy.github.io/inter-x/). Inter-X: Towards Versatile Human-Human Interaction Analysis, Xu et al. CVPR 2024.
- [HardMo](https://openaccess.thecvf.com/content/CVPR2024/papers/Liao_HardMo_A_Large-Scale_Hardcase_Dataset_for_Motion_Capture_CVPR_2024_paper.pdf). HardMo: A Large-Scale Hardcase Dataset for Motion Capture, Liao et al. CVPR 2024.
- [RELI11D](http://www.lidarhumanmotion.net/reli11d/). RELI11D: A Comprehensive Multimodal Human Motion Dataset and Method, Yan et al. CVPR 2024.
- [GroundLink](https://cs-people.bu.edu/xjhan/groundlink.html). GroundLink: A Dataset Unifying Human Body Movement and Ground Reaction Dynamics, Han et al. SIGGRAPH Asia 2023.
- [HOH](https://hohdataset.github.io/). HOH: Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count, Wiederhold et al. NeurIPS D&B 2023.
- [Motion-X](https://motion-x-dataset.github.io/). Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset, Lin et al. NeurIPS D&B 2023.
- [Humans in Kitchens](https://github.com/jutanke/hik). Humans in Kitchens: A Dataset for Multi-Person Human Motion Forecasting with Scene Context, Tanke et al. NeurIPS D&B 2023.
- [CHAIRS](https://jnnan.github.io/project/chairs/). Full-Body Articulated Human-Object Interaction, Jiang et al. ICCV 2023.
- [CIMI4D](http://www.lidarhumanmotion.net/cimi4d). CIMI4D: A Large Multimodal Climbing Motion Dataset under Human-scene Interactions, Yan et al. CVPR 2023.
- [FLAG3D](https://andytang15.github.io/FLAG3D/). FLAG3D: A 3D Fitness Activity Dataset with Language Instruction, Tang et al. CVPR 2023.
- [Hi4D](https://yifeiyin04.github.io/Hi4D/). Hi4D: 4D Instance Segmentation of Close Human Interaction, Yin et al. CVPR 2023.
- [CIRCLE](https://stanford-tml.github.io/circle_dataset/). CIRCLE: Capture in Rich Contextual Environments, Araujo et al. CVPR 2023.
- [MoCapAct](https://github.com/microsoft/MoCapAct). MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control, Wagener et al. NeurIPS 2022.
- [ForcePose](https://github.com/MichiganCOG/ForcePose?tab=readme-ov-file). Learning to Estimate External Forces of Human Motion in Video, Louis et al. ACM MM 2022.
- [BEAT](https://pantomatrix.github.io/BEAT/). BEAT: A Large-Scale Semantic and Emotional Multi-Modal Dataset for Conversational Gestures Synthesis, Liu et al. ECCV 2022.
- [BRACE](https://github.com/dmoltisanti/brace). BRACE: The Breakdancing Competition Dataset for Dance Motion Synthesis, Moltisanti et al. ECCV 2022.
- [EgoBody](https://sanweiliti.github.io/egobody/egobody.html). Egobody: Human body shape and motion of interacting people from head-mounted devices, Zhang et al. ECCV 2022.
- [GIMO](https://github.com/y-zheng18/GIMO). GIMO: Gaze-Informed Human Motion Prediction in Context, Zheng et al. ECCV 2022.
- [HuMMan](https://caizhongang.github.io/projects/HuMMan/). HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling, Cai et al. ECCV 2022.
- [HumanML3D](https://ericguo5513.github.io/text-to-motion). Generating Diverse and Natural 3D Human Motions from Text, Guo et al. CVPR 2022.
- [AIST++](https://google.github.io/aichoreographer/). AI Choreographer: Music Conditioned 3D Dance Generation with AIST++, Li et al. ICCV 2021.
- [BABEL](https://babel.is.tue.mpg.de/). BABEL: Bodies, Action and Behavior with English Labels, Punnakkal et al. CVPR 2021.
- [PROX](https://prox.is.tue.mpg.de/). Resolving 3D Human Pose Ambiguities with 3D Scene Constraints, Hassan et al. ICCV 2019.
- [AMASS](https://amass.is.tue.mpg.de/). AMASS: Archive of Motion Capture As Surface Shapes, Mahmood et al. ICCV 2019.
## Humanoid, Simulated or Real
- [SIMS](https://arxiv.org/abs/2411.19921). SIMS: Simulating Human-Scene Interactions with Real World Script Planning, Wang et al. ArXiv 2024.
- [PDP](https://arxiv.org/abs/2406.00960). PDP: Physics-Based Character Animation via Diffusion Policy, Truong et al. ArXiv 2024.
- [HOVER](https://arxiv.org/abs/2410.21229). HOVER: Versatile Neural Whole-Body Controller for Humanoid Robots, He et al. ArXiv 2024.
- [CLoSD](https://guytevet.github.io/CLoSD-page/). CLoSD: Closing the Loop between Simulation and Diffusion for multi-task character control, Tevet et al. ArXiv 2024.
- [Humanoidlympics](https://smplolympics.github.io/SMPLOlympics). Humanoidlympics: Sports Environments for Physically Simulated Humanoids, Luo et al. ArXiv 2024.
- [SkillMimic](https://ingrid789.github.io/SkillMimic/). SkillMimic: Learning Reusable Basketball Skills from Demonstrations, Wang et al. ArXiv 2024.
- [MaskedMimic](https://xbpeng.github.io/projects/MaskedMimic/index.html). MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting, Tessler et al, SIGGRAPH Asia 2024.
- [HumanVLA](https://arxiv.org/abs/2406.19972). HumanVLA: Towards Vision-Language Directed Object Rearrangement by Physical Humanoid, Xu et al. NeurIPS 2024.
- [OmniGrasp](https://www.zhengyiluo.com/Omnigrasp-Site/). Grasping Diverse Objects with Simulated Humanoids, Luo et al. NeurIPS 2024.
- [InterControl](https://github.com/zhenzhiwang/intercontrol). InterControl: Generate Human Motion Interactions by Controlling Every Joint, Wang et al. NeurIPS 2024.
- [CooHOI](https://gao-jiawei.com/Research/CooHOI/). CooHOI: Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics, Gao et al. NeurIPS 2024.
- [Radosavovic et al.](https://humanoid-next-token-prediction.github.io/). Humanoid Locomotion as Next Token Prediction, Radosavovic et al. NeurIPS 2024.
- [HARMON](https://ut-austin-rpl.github.io/Harmon/). Harmon: Whole-Body Motion Generation of Humanoid Robots from Language Descriptions, Jiang et al. CoRL 2024.
- [HumanPlus](https://humanoid-ai.github.io/). HumanPlus: Humanoid Shadowing and Imitation from Humans, Fu et al. CoRL 2024.
- [OmniH2O](https://omni.human2humanoid.com/). OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning, He et al. CoRL 2024.
- [H2O](https://human2humanoid.com/). Learning Human-to-Humanoid Real-Time Whole-Body Teleoperation, He et al. IROS 2024.
- [MHC](https://idigitopia.github.io/projects/mhc/). Generating Physically Realistic and Directable Human Motions from Multi-Modal Inputs, Liu et al. ECCV 2024.
- [MoConVQ](https://moconvq.github.io/). MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations, Yao et al. SIGGRAPH 2024.
- [CAMDM](https://aiganimation.github.io/CAMDM/). Taming Diffusion Probabilistic Models for Character Control, Chen et al. SIGGRAPH 2024.
- [PhysicsPingPong](https://jiashunwang.github.io/PhysicsPingPong/). Strategy and Skill Learning for Physics-based Table Tennis Animation, Wang et al. SIGGRAPH 2024.
- [SuperPADL](https://arxiv.org/abs/2407.10481). SuperPADL: Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation, Juravsky et al. SIGGRAPH 2024.
- [SimXR](https://www.zhengyiluo.com/SimXR). Real-Time Simulated Avatar from Head-Mounted Sensors, Luo et al. CVPR 2024.
- [AnySkill](https://openaccess.thecvf.com/content/CVPR2024/html/Cui_AnySkill_Learning_Open-Vocabulary_Physical_Skill_for_Interactive_Agents_CVPR_2024_paper.html). AnySkill: Learning Open-Vocabulary Physical Skill for Interactive Agents, Cui et al. CVPR 2024.
- [PULSE](https://github.com/ZhengyiLuo/PULSE). Universal Humanoid Motion Representations for Physics-Based Control, Luo et al. ICLR 2024.
- [H-GAP](https://github.com/facebookresearch/hgap). H-GAP: Humanoid Control with a Generalist Planner, Jiang et al. ICLR 2024.
- [UniHSI](https://xizaoqu.github.io/unihsi/). UniHSI: Unified Human-Scene Interaction via Prompted Chain-of-Contacts, Xiao et al. ICLR 2024.
- [PhysHOI](https://wyhuai.github.io/physhoi-page/). PhysHOI: Physics-Based Imitation of Dynamic Human-Object Interaction, Wang et al. ArXiv 2024.
- [CASE](https://frank-zy-dou.github.io/projects/CASE/index.html). C·ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters, Dou et al. SIGGRAPH Asia 2023.
- [AdaptNet](https://github.com/xupei0610/AdaptNet). AdaptNet: Policy Adaptation for Physics-Based Character Control, Xu et al. SIGGRAPH Asia 2023.
- [NCP](https://tencent-roboticsx.github.io/NCP/). Neural Categorical Priors for Physics-Based Character Control, Zhu et al. SIGGRAPH Asia 2023.
- [DROP](https://stanford-tml.github.io/drop/). DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics, Jiang et al. SIGGRAPH Asia 2023.
- [InsActor](https://jiawei-ren.github.io/projects/insactor/). InsActor: Instruction-driven Physics-based Characters, Ren et al. NeurIPS 2023.
- [PHC](https://zhengyiluo.github.io/PHC/). Perpetual Humanoid Control for Real-time Simulated Avatars, Luo et al. ICCV 2023.
- [DiffMimic](https://diffmimic.github.io/). DiffMimic: Efficient Motion Mimicking with Differentiable Physics, Ren et al. ICLR 2023.
- [Vid2Player3D](https://research.nvidia.com/labs/toronto-ai/vid2player3d/). Learning Physically Simulated Tennis Skills from Broadcast Videos, Zhang et al. SIGGRAPH 2023.
- [QuestEnvSim](https://dl.acm.org/doi/10.1145/3588432.3591504). QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors, Lee et al. SIGGRAPH 2023.
- [Hassan et al.](https://research.nvidia.com/publication/2023-08_synthesizing-physical-character-scene-interactions). Synthesizing Physical Character-Scene Interactions, Hassan et al. SIGGRAPH 2023.
- [CALM](https://xbpeng.github.io/projects/CALM/index.html). CALM: Conditional Adversarial Latent Models for Directable Virtual Characters, Tessler et al. SIGGRAPH 2023.
- [Composite Motion](https://github.com/xupei0610/CompositeMotion). Composite Motion Learning with Task Control, Xu et al. SIGGRAPH 2023.
- [Trace and Pace](https://xbpeng.github.io/projects/Trace_Pace/index.html). Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion, Rempe et al. CVPR 2023.
- [EmbodiedPose](https://github.com/ZhengyiLuo/EmbodiedPose). Embodied Scene-aware Human Pose Estimation, Luo et al. NeurIPS 2022.
- [MoCapAct](https://github.com/microsoft/MoCapAct). MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control, Wagener et al. NeurIPS 2022.
- [Gopinath et al.](https://research.facebook.com/publications/motion-in-betweening-for-physically-simulated-characters/). Motion In-betweening for Physically Simulated Characters, Gopinath et al. SIGGRAPH Asia 2022.
- [AIP](https://dl.acm.org/doi/10.1145/3550082.3564207). AIP: Adversarial Interaction Priors for Multi-Agent Physics-based Character Control, Younes et al. SIGGRAPH Asia 2022.
- [ControlVAE](https://github.com/heyuanYao-pku/Control-VAE). ControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters, Yao et al. SIGGRAPH Asia 2022.
- [QuestSim](https://dl.acm.org/doi/fullHtml/10.1145/3550469.3555411). QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars, Winkler et al. SIGGRAPH Asia 2022.
- [PADL](https://github.com/nv-tlabs/PADL). PADL: Language-Directed Physics-Based Character Control, Juravsky et al. SIGGRAPH Asia 2022.
- [Wang et al.](https://dl.acm.org/doi/10.1145/3550454.3555490). Differentiable Simulation of Inertial Musculotendons, Wang et al. SIGGRAPH Asia 2022.
- [ASE](https://xbpeng.github.io/projects/ASE/index.html). ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters, Peng et al. SIGGRAPH 2022.
- [Learn to Move](https://xbpeng.github.io/projects/Learn_to_Move/index.html). Deep Reinforcement Learning for Modeling Human Locomotion Control in Neuromechanical Simulation, Song et al. Journal of NeuroEngineering and Rehabilitation 2021.
- [KinPoly](https://zhengyiluo.github.io/projects/kin_poly/). Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation, Luo et al. NeurIPS 2021.
- [AMP](https://xbpeng.github.io/projects/AMP/index.html). AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control, Peng et al. SIGGRAPH 2021.
- [SimPoE](https://www.ye-yuan.com/simpoe). SimPoE: Simulated Character Control for 3D Human Pose Estimation, Yuan et al. CVPR 2021.
- [RFC](https://www.ye-yuan.com/rfc). Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis, Yuan et al. NeurIPS 2020.
- [Yuan et al.](https://arxiv.org/abs/1907.04967). Diverse Trajectory Forecasting with Determinantal Point Processes, Yuan et al. ICLR 2020.
- [Ego-Pose](https://ye-yuan.com/ego-pose/). Ego-Pose Estimation and Forecasting as Real-Time PD Control, Yuan et al. ICCV 2019.
- [DeepMimic](https://xbpeng.github.io/projects/DeepMimic/index.html). DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills, Peng et al. SIGGRAPH 2018.
## Bio-stuff: Human Anatomy, Biomechanics, Physiology
- [BioDesign](https://cs-people.bu.edu/xjhan/bioDesign.html). Motion-Driven Neural Optimizer for Prophylactic Braces Made by Distributed Microstructures, Han et al. SIGGRAPH Asia 2024.
- [Evans et al](https://www.nature.com/articles/s41597-024-04077-3?fromPaywallRec=false). Synchronized Video, Motion Capture and Force Plate Dataset for Validating Markerless Human Movement Analysis, Evans et al. Scientific Data 2024.
- [Muscles in Time](https://simplexsigil.github.io/mint). Muscles in Time: Learning to Understand Human Motion by Simulating Muscle Activations, Schneider et al. NeurIPS D&B 2024.
- [Wei et al](https://lnsgroup.cc/research/hdsafebo). Safe Bayesian Optimization for the Control of High-Dimensional Embodied Systems, Wei et al. CoRL 2024.
- [ImDy](https://foruck.github.io/ImDy). ImDy: Human Inverse Dynamics from Imitated Observations, Liu et al. ArXiv 2024.
- [Macwan et al](https://journals.sagepub.com/doi/full/10.1177/10711813241262026). High-Fidelity Worker Motion Simulation With Generative AI, Macwan et al. HFES 2024.
- [AddBiomechanics](https://addbiomechanics.org/). AddBiomechanics Dataset: Capturing the Physics of Human Motion at Scale, Werling et al. ECCV 2024.
- [MANIKIN](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/00194.pdf). MANIKIN: Biomechanically Accurate Neural Inverse Kinematics for Human Motion Estimation, Jiang et al. ECCV 2024.
- [HIT](https://hit.is.tue.mpg.de/). HIT: Estimating Internal Human Implicit Tissues from the Body Surface, Keller et al. CVPR 2024.
- [Dai et al](https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2024.1388742/full). Full-body pose reconstruction and correction in virtual reality for rehabilitation training, Dai et al. Frontiers in Neuroscience 2024.
- [DynSyn](https://www.beanpow.top/assets/pdf/dynsyn_poster.pdf). DynSyn: Dynamical Synergistic Representation for Efficient Learning and Control in Overactuated Embodied Systems, He et al. ICML 2024.
- [He et al.](https://arxiv.org/pdf/2312.05473.pdf). Self Model for Embodied Intelligence: Modeling Full-Body Human Musculoskeletal System and Locomotion Control with Hierarchical Low-Dimensional Representation, He et al. ICRA 2024.
- [OpenCapBench](https://arxiv.org/abs/2406.09788). A Benchmark to Bridge Pose Estimation and Biomechanics, Gozlan et al. ArXiv 2024.
- [SKEL](https://skel.is.tue.mpg.de/). From skin to skeleton: Towards biomechanically accurate 3d digital humans, Keller et al. SIGGRAPH Asia 2023.
- [MuscleVAE](https://pku-mocca.github.io/MuscleVAE-page/). MuscleVAE: Model-Based Controllers of Muscle-Actuated Characters, Feng et al. SIGGRAPH Asia 2023.
- [Bidirectional GaitNet](https://github.com/namjohn10/BidirectionalGaitNet). Bidirectional GaitNet, Park et al. SIGGRAPH 2023.
- [Lee et al.](https://arxiv.org/abs/2305.04995). Anatomically Detailed Simulation of Human Torso, Lee et al. SIGGRAPH 2023.
- [MiA](https://musclesinaction.cs.columbia.edu/). Muscles in Action, Chiquier et al. ICCV 2023.
- [OSSO](https://osso.is.tue.mpg.de/). OSSO: Obtaining Skeletal Shape from Outside, Keller et al. CVPR 2022.
- [Xing et al](https://www.nature.com/articles/s41597-022-01188-7). Functional movement screen dataset collected with two Azure Kinect depth sensors, Xing et al. Scientific Data 2022.
- [LRLE](https://github.com/jyf588/lrle). Synthesis of biologically realistic human motion using joint torque actuation, Jiang et al. SIGGRAPH 2019.
- [HuGaDb](https://link.springer.com/chapter/10.1007/978-3-319-73013-4_12). HuGaDB: Human Gait Database for Activity Recognition from Wearable Inertial Sensor Networks, Chereshnev et al. AIST 2017.