Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-human-motion
https://github.com/foruck/awesome-human-motion
-
Human-Object Interaction
- InterDreamer - Zero-Shot Text to 3D Dynamic Human-Object Interaction, Xu et al. NeurIPS 2024.
- InterFusion - Text-Driven Generation of 3D Human-Object Interaction, Dai et al. ECCV 2024.
- CHOIS - Controllable Human-Object Interaction Synthesis, Li et al. ECCV 2024.
- F-HOI - Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions, Yang et al. ECCV 2024.
- HIMO - Full-Body Human Interacting with Multiple Objects, Lv et al. ECCV 2024.
- PhysicsPingPong - Strategy and Skill Learning for Physics-based Table Tennis Animation, Wang et al. SIGGRAPH 2024.
- NIFTY
- HOI Animator - Generating Text-prompt Human-object Animations using Novel Perceptive Diffusion Models, Son et al. CVPR 2024.
- CG-HOI - Contact-Guided 3D Human-Object Interaction Generation, Diller et al. CVPR 2024.
- ARCTIC - A Dataset for Dexterous Bimanual Hand-Object Manipulation, Fan et al. CVPR 2023.
- TOCH - Spatio-Temporal Object-to-Hand Correspondence for Motion Refinement, Zhou et al. ECCV 2022.
- COUCH - Towards Controllable Human-Chair Interactions, Zhang et al. ECCV 2022.
- SAGA - Stochastic Whole-Body Grasping with Contact, Wu et al. ECCV 2022.
- GOAL - Generating 4D Whole-Body Motion for Hand-Object Grasping, Taheri et al. CVPR 2022.
- GRAB - A Dataset of Whole-Body Human Grasping of Objects, Taheri et al. ECCV 2020.
- CHAIRS - Full-Body Articulated Human-Object Interaction, Jiang et al. ICCV 2023.
- HGHOI - Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models, Pi et al. ICCV 2023.
- InterDiff - Generating 3D Human-Object Interactions with Physics-Informed Diffusion, Xu et al. ICCV 2023.
- Object Pop-up - Can we infer 3D objects and their poses from human interactions alone? Petrov et al. CVPR 2023.
- OOD-HOI - Text-Driven 3D Whole-Body Human-Object Interactions Generation Beyond Training Domains, Zhang et al. ArXiv 2024.
- COLLAGE - Collaborative Human-Agent Interaction Generation using Hierarchical Latent Diffusion and Language Models, Daiya et al. ArXiv 2024.
- SMGDiff
- SkillMimic
- CORE4D - A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement, Zhang et al. ArXiv 2024.
- Wu et al - Human-Object Interaction from Human-Level Instructions, Wu et al. ArXiv 2024.
- GRIP
- HumanVLA - Towards Vision-Language Directed Object Rearrangement by Physical Humanoid, Xu et al. NeurIPS 2024.
- OmniGrasp
- EgoChoir - Capturing 3D Human-Object Interaction Regions from Egocentric Views, Yang et al. NeurIPS 2024.
- CooHOI - Learning Cooperative Human-Object Interaction with Manipulated Object Dynamics, Gao et al. NeurIPS 2024.
- InterCap
- OMOMO
- FORCE - Dataset and Method for Intuitive Physics Guided Human-object Interaction, Zhang et al. 3DV 2025.
- SyncDiff - Synchronized Motion Diffusion for Multi-Body Human-Object Interaction Synthesis, He et al. ArXiv 2024.
- DiffGrasp - Whole-Body Grasping Synthesis Guided by Object Motion Using a Diffusion Model, Zhang et al. AAAI 2025.
- PiMForce - Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation, Seo et al. NeurIPS 2024.
- CHOICE - Coordinated Human-Object Interaction in Cluttered Environments for Pick-and-Place Actions, Lu et al. ArXiv 2024.
- TriDi
- InterTrack
- Phys-Fullbody-Grasp - Physically Plausible Full-Body Hand-Object Interaction Synthesis, Braun et al. 3DV 2024.
- FAVOR - Full-Body AR-driven Virtual Object Rearrangement Guided by Instruction Text, Li et al. AAAI 2024.
- BEHAVE
-
Motion Generation, Text/Speech/Music-Driven
- MotionCraft - Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls, Bian et al. AAAI 2025.
- AAMDM - Accelerated Auto-regressive Motion Diffusion Model, Li et al. CVPR 2024.
- LLaMo
- OMG - Towards Open-vocabulary Motion Generation via Mixture of Controllers, Liang et al. CVPR 2024.
- FlowMDM
- Digital Life Project
- STMC - Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation, Petrovich et al. CVPR Workshop 2024.
- InstructMotion - Exploring Text-to-Motion Generation with Human Preference, Sheng et al. CVPR Workshop 2024.
- Single Motion Diffusion
- NeRM - Learning Neural Representations for High-Framerate Human Motion Synthesis, Wei et al. ICLR 2024.
- PriorMDM
- OmniControl
- Adiya et al.
- ReinDiffuse
- InfiniDreamer
- FTMoMamba
- KinMo - aware Human Motion Understanding and Generation, Zhang et al. ArXiv 2024.
- Mo et al.
- HMDM
- MotionDiffuse - Text-Driven Human Motion Generation with Diffusion Model, Zhang et al. TPAMI 2023.
- Bailando - 3D Dance Generation by Actor-Critic GPT with Choreographic Memory, Li et al. CVPR 2022.
- UDE-2 - Part Human Motion Synthesis, Zhou et al. ArXiv 2023.
- Motion Script
- NeMF
- PADL - Language-Directed Physics-Based Character Control, Juravsky et al. SIGGRAPH Asia 2022.
- Rhythmic Gesticulator - Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings, Ao et al. SIGGRAPH Asia 2022.
- TEACH
- Implicit Motion
- Zhong et al. - Modulation CVAE for 3D Action-Conditioned Human Motion Synthesis, Zhong et al. ECCV 2022.
- MotionCLIP
- PoseGPT
- TEMOS
- MVLift
- DisCoRD
- MoTe - Learning Motion-Text Diffusion Model for Multiple Generation Tasks, Wue et al. ArXiv 2024.
- MotionRL - Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning, Liu et al. ArXiv 2024.
- UniMuMo
- Wang et al
- Unimotion
- Macwan et al - High-Fidelity Worker Motion Simulation With Generative AI, Macwan et al. HFES 2024.
- Jin et al. - Local Action-Guided Motion Diffusion Model for Text-to-Motion Generation, Jin et al. ECCV 2024.
- Motion Mamba
- EMDM - Efficient Motion Diffusion Model for Fast and High-Quality Human Motion Generation, Zhou et al. ECCV 2024.
- CoMo
- CoMusion
- Shan et al. - Towards Open Domain Text-Driven Synthesis of Multi-Person Motions, Shan et al. ECCV 2024.
- ParCo - Part-Coordinating Text-to-Motion Synthesis, Zou et al. ECCV 2024.
- Sampieri et al. - Length-Aware Motion Synthesis via Latent Diffusion, Sampieri et al. ECCV 2024.
- ChroAccRet - Language Models, Fujiwara et al. ECCV 2024.
- MHC - Modal Inputs, Liu et al. ECCV 2024.
- ProMotion - vocabulary Text-to-Motion Generation, Liu et al. ECCV 2024.
- FreeMotion - MoCap-Free Human Motion Synthesis with Multimodal Large Language Models, Zhang et al. ECCV 2024.
- Text Motion Translator - A Bi-Directional Model for Enhanced 3D Human Motion Generation from Open-Vocabulary Descriptions, Qian et al. ECCV 2024.
- FreeMotion - A Unified Framework for Number-free Text-to-Motion Synthesis, Fan et al. ECCV 2024.
- Kinematic Phrases
- ACTOR - Conditioned 3D Human Motion Synthesis with Transformer VAE, Petrovich et al. ICCV 2021.
- AIST++
- Starke et al. - contact character movements, Starke et al. SIGGRAPH 2020.
- OOHMG - Being Comes from Not-being: Open-vocabulary Text-to-Motion Generation with Wordless Training, Lin et al. CVPR 2023.
- EDGE
- AvatarCLIP - Zero-Shot Text-Driven Generation and Animation of 3D Avatars, Hong et al. SIGGRAPH 2022.
- DeepPhase
- AtoM - Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward, Han et al. ArXiv 2024.
- LGTM - Local-to-Global Text-Driven Human Motion Diffusion Models, Sun et al. SIGGRAPH 2024.
- AMUSE - Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion, Chhatre et al. CVPR 2024.
- MAS - Multi-view Ancestral Sampling for 3D motion generation using 2D diffusion, Kapon et al. CVPR 2024.
- WANDR - Intention-guided Human Motion Generation, Diomataris et al. CVPR 2024.
- MoMask
- ChapPose
- MMM
- AMD
- PACER+ - On-Demand Pedestrian Animation Controller in Driving Scenarios, Wang et al. CVPR 2024.
- MotionMix - Weakly-Supervised Diffusion for Controllable Motion Generation, Hoang et al. AAAI 2024.
- B2A-HDM - Towards Detailed Text-to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model, Xie et al. AAAI 2024.
- GUESS - GradUally Enriching SyntheSis for Text-Driven Human Motion Generation, Gao et al. TPAMI 2024.
- Xie et al.
- MotionGPT
- FineMoGen - Fine-Grained Spatio-Temporal Motion Generation and Editing, Zhang et al. NeurIPS 2023.
- InsActor - Instruction-driven Physics-based Characters, Ren et al. NeurIPS 2023.
- AttT2M - Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism, Zhong et al. ICCV 2023.
- ReMoDiffusion - Retrieval-Augmented Motion Diffusion Model, Zhang et al. ICCV 2023.
- BelFusion - Latent Diffusion for Behavior-Driven Human Motion Prediction, Barquero et al. ICCV 2023.
- TLcontrol
- ExpGest - Text Guidance, Cheng et al. ICME 2024.
- Chen et al - Informed Vector Quantization Variational Auto-Encoder for Text-to-Motion Generation, Chen et al. ICME Workshop 2024.
- HumanTOMATO - Text-aligned Whole-body Motion Generation, Lu et al. ICML 2024.
- CondMDI - Flexible Motion In-betweening with Diffusion Models, Cohan et al. SIGGRAPH 2024.
- Morph - A Motion-free Physics Optimization Framework for Human Motion Generation, Li et al. ArXiv 2024.
- Lodge++ - High-quality and Long Dance Generation with Vivid Choreography Patterns, Li et al. ArXiv 2024.
- MotionCLR - Motion Generation and Training-free Editing via Understanding Attention Mechanisms, Chen et al. ArXiv 2024.
- MotionGlot - A Multi-Embodied Motion Generation Model, Harithas et al. ArXiv 2024.
- LEAD
- Leite et al. - to-Motion Models via Pose and Video Conditioned Editing, Leite et al. ArXiv 2024.
- MotionChain
- SMooDi
- BAMM
- MotionLCM - Real-time Controllable Motion Generation via Latent Consistency Model, Dai et al. ECCV 2024.
- Ren et al. - Realistic Human Motion Generation with Cross-Diffusion Models, Ren et al. ECCV 2024.
- M2D2M - Multi-Motion Generation from Text with Discrete Diffusion Models, Chi et al. ECCV 2024.
- Large Motion Model - for Unified Multi-Modal Motion Generation, Zhang et al. ECCV 2024.
- MoRAG - - Multi-Fusion Retrieval Augmented Generation for Human Motion, Shashank et al. ArXiv 2024.
- synNsync
- Dong et al - Conditioned 3D American Sign Language Motion Generation, Dong et al. EMNLP 2024.
- SynTalker - Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation, Chen et al. ACM MM 2024.
- TEDi - Temporally-Entangled Diffusion for Long-Term Motion Synthesis, Zhang et al. SIGGRAPH 2024.
- A-MDM - Interactive Character Control with Auto-Regressive Motion Diffusion Models, Shi et al. SIGGRAPH 2024.
- ProgMoGen - Programmable Motion Generation for Open-set Motion Control Tasks, Liu et al. CVPR 2024.
- Liu et al. - Speech Motion Generation, Liu et al. CVPR 2024.
- SuperPADL - Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation, Juravsky et al. SIGGRAPH 2024.
- Duolando - Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment, Li et al. ICLR 2024.
- HuTuDiffusion - Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback, Han et al. AAAI 2024.
- Text to blind motion
- UniMTS - Unified Pre-training for Motion Time Series, Zhang et al. NeurIPS 2024.
- Christopher et al.
- MoMu-Diffusion - On Learning Long-Term Motion-Music Synchronization and Correspondence, You et al. NeurIPS 2024.
- MoGenTS - Motion Generation based on Spatial-Temporal Joint Modeling, Yuan et al. NeurIPS 2024.
- M3GPT
- Bikov et al - Tuning, Bikov et al. NeurIPS Workshop 2024.
- GMD
- SINC
- Kong et al. - Priority-Centric Human Motion Generation in Discrete Latent Space, Kong et al. ICCV 2023.
- FgT2M - Fine-Grained Text-Driven Human Motion Generation via Diffusion Model, Wang et al. ICCV 2023.
- EMS - Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions, Qian et al. ICCV 2023.
- GenMM - Example-based Motion Synthesis via Generative Motion Matching, Li et al. SIGGRAPH 2023.
- GestureDiffuCLIP
- BodyFormer - Semantics-guided 3D Body Gesture Synthesis with Transformer, Pang et al. SIGGRAPH 2023.
- AGroL
- TALKSHOW
- T2M-GPT - Generating Human Motion from Textual Descriptions with Discrete Representations, Zhang et al. CVPR 2023.
- UDE
- MoDi
- MoFusion - A Framework for Denoising-Diffusion-based Motion Synthesis, Dabral et al. CVPR 2023.
- HMD-NeMo - Online 3D Avatar Motion Generation From Sparse Observations, Aliakbarian et al. ICCV 2023.
- Alexanderson et al. - Listen, Denoise, Action! Audio-driven motion synthesis with diffusion models, Alexanderson et al. SIGGRAPH 2023.
- AvatarGPT - All-in-One Framework for Motion Understanding, Planning, Generation and Beyond, Zhou et al. CVPR 2024.
- GraphMotion - Fine-grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs, Jin et al. NeurIPS 2023.
- TM2T
- SemTalk - speech Motion Generation with Frame-level Semantic Emphasis, Zhang et al. ArXiv 2024.
- InterDance
- MotionLLM
- DART - A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control, Zhao et al. ArXiv 2024.
- CLoSD - Closing the Loop between Simulation and Diffusion for Multi-task Character Control, Tevet et al. ArXiv 2024.
- T2M-X - Learning Expressive Text-to-Motion Generation from Partially Annotated Data, Liu et al. ArXiv 2024.
- Mandelli et al
- BAD - Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation, Hosseyni et al. ArXiv 2024.
- FG-MDM - Towards Zero-Shot Human Motion Generation via ChatGPT-Refined Descriptions, ICPR 2024.
- PIDM - Aware Interaction Diffusion Model for Gesture Generation, Shibasaki et al. ICANN 2024.
- SoPo - to-Motion Generation Using Semi-Online Preference Optimization, Tan et al. ArXiv 2024.
- RMD - free Retrieval-Augmented Motion Diffuse, Liao et al. ArXiv 2024.
- BiPO - to-Motion Synthesis, Hong et al. ArXiv 2024.
- CAMDM
- MARDM - Driven Human Motion Generation, Meng et al. ArXiv 2024.
- L3EM - Towards Emotion-enriched Text-to-Motion Generation via LLM-guided Limb-level Emotion Manipulating, Yu et al. ACM MM 2024.
- Mogo - RQ Hierarchical Causal Transformer for High-Quality 3D Human Motion Generation, Fu et al. ArXiv 2024.
- CoMA - Compositional Human Motion Generation with Multi-modal Agents, Sun et al. ArXiv 2024.
- StableMoFusion - Towards Robust and Efficient Diffusion-based Motion Generation Framework, Huang et al. ACM MM 2024.
- ScaMo
- MotionGPT-2 - A General-Purpose Motion-Language Model for Motion Generation and Understanding, Wang et al. ArXiv 2024.
- EnergyMoGen - Compositional Human Motion Generation with Energy-Based Diffusion Model in Latent Space, Zhang et al. ArXiv 2024.
- Move-in-2D - 2D-Conditioned Human Motion Generation, Huang et al. ArXiv 2024.
- Motion-2-to-3 - Leveraging 2D Motion Data to Boost 3D Motion Generation, Pi et al. ArXiv 2024.
- Light-T2M - A Lightweight and Fast Model for Text-to-motion Generation, Zeng et al. AAAI 2025.
- Language of Motion - Unified Verbal and Non-verbal Language of 3D Human Motion, Chen et al. ArXiv 2024.
- Meng et al - Driven Human Motion Generation, Meng et al. ArXiv 2024.
- EMAGE - Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling, Liu et al. CVPR 2024.
- Everything2Motion
- MotionGPT - Finetuned LLMs are General-Purpose Motion Generators, Zhang et al. AAAI 2024.
- Dong et al - Enhanced Fine-grained Motion Diffusion for Text-driven Human Motion Synthesis, Dong et al. AAAI 2024.
- UNIMASKM
- MOJO
-
Motion Generation
- KMM
- DLow
- TMR - to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis, Petrovich et al. ICCV 2023.
- MAA - Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation, Azadi et al. ICCV 2023.
- PhysDiff - Physics-Guided Human Motion Diffusion Model, Yuan et al. ICCV 2023.
- GPHLVM
- Bailando++
-
Human-Scene Interaction
- SIMS - Simulating Human-Scene Interactions with Real World Script Planning, Wang et al. ArXiv 2024.
- SAST - Massively Multi-Person 3D Human Motion Forecasting with Scene Context, Mueller et al. ECCV 2024 Workshop.
- DiMoP3D - responsive Diverse Human Motion Prediction, Lou et al. NeurIPS 2024.
- Liu et al. - Revisit Human-Scene Interaction via Space Occupancy, Liu et al. ECCV 2024.
- Afford-Motion - Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Scene Affordance, Wang et al. CVPR 2024.
- GenZI - Zero-Shot 3D Human-Scene Interaction Generation, Li et al. CVPR 2024.
- Cen et al.
- TRUMANS - Scaling Up Dynamic Human-Scene Interaction Modeling, Jiang et al. CVPR 2024.
- UniHSI - Unified Human-Scene Interaction via Prompted Chain-of-Contacts, Xiao et al. ICLR 2024.
- DIMOS
- LAMA - Locomotion-Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments, Lee et al. ICCV 2023.
- TesMo
- Narrator - Towards Natural Control of Human-Scene Interaction Generation via Relationship Reasoning, Xuan et al. ICCV 2023.
- CIMI4D - A Large Multimodal Climbing Motion Dataset under Human-scene Interactions, Yan et al. CVPR 2023.
- Scene-Ego - Scene-aware Egocentric 3D Human Pose Estimation, Wang et al. CVPR 2023.
- SLOPER4D - A Scene-Aware Dataset for Global 4D Human Pose Estimation in Urban Environments, Dai et al. CVPR 2023.
- CIRCLE
- SceneDiffuser - Diffusion-based Generation, Optimization, and Planning in 3D Scenes, Huang et al. CVPR 2023.
- PMP - Learning to Physically Interact with Environments using Part-wise Motion Priors, Bae et al. SIGGRAPH 2023.
- Hassan et al. - Synthesizing Physical Character-Scene Interactions, Hassan et al. SIGGRAPH 2023.
- Mao et al. - Contact-aware Human Motion Forecasting, Mao et al. NeurIPS 2022.
- HUMANISE - Language-conditioned Human Motion Generation in 3D Scenes, Wang et al. NeurIPS 2022.
- EmbodiedPose - Embodied Scene-aware Human Pose Estimation, Luo et al. NeurIPS 2022.
- GIMO - Gaze-Informed Human Motion Prediction in Context, Zheng et al. ECCV 2022.
- Wang et al. - Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis, Wang et al. CVPR 2022.
- GAMMA
- SAMP - Stochastic Scene-Aware Motion Prediction, Hassan et al. ICCV 2021.
- PLACE
- PSI
- NSM - Neural State Machine for Character-Scene Interactions, Starke et al. SIGGRAPH Asia 2019.
- PROX
- ZeroHSI - Zero-Shot 4D Human-Scene Interaction by Video Generation, Li et al. ArXiv 2024.
- Mimicking-Bench - A Benchmark for Generalizable Humanoid-Scene Interaction Learning via Human Mimicking, Liu et al. ArXiv 2024.
- SCENIC - Scene-aware Semantic Navigation with Instruction-guided Control, Zhang et al. ArXiv 2024.
- Diffusion Implicit Policy - Unpaired Scene-aware Motion Synthesis, Gong et al. ArXiv 2024.
- LaserHuman - Language-guided Scene-aware Human Motion Generation in Free Environment, Cong et al. ArXiv 2024.
- LINGO - Autonomous Character-Scene Interaction Synthesis from Text Instruction, Jiang et al. SIGGRAPH Asia 2024.
- InterScene
- Sitcom-Crafter - A Plot-Driven Human Motion Generation System in 3D Scenes, Chen et al. ArXiv 2024.
- Paschalidis et al - 3D Whole-body Grasp Synthesis with Directional Controllability, Paschalidis et al. ArXiv 2024.
- EnvPoser - Environment-aware Realistic Human Motion Estimation from Sparse Observations with Uncertainty Modeling, Xia et al. ArXiv 2024.
- Kang et al - Based Characters, Kang et al. Eurographics 2024.
- Purposer
- Mir et al
-
Motion Stylization
- GenMoStyle
- MCM-LDM - Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model, Song et al. CVPR 2024.
- MoST
- HUMOS
- D-LORD for Motion Stylization, Gupta et al. TSMC 2024.
- MulSMo
-
Motion Editing
- MotionFix - Text-Driven 3D Human Motion Editing, Athanasiou et al. SIGGRAPH Asia 2024.
- CigTime
- Iterative Motion Editing
- DNO
-
Datasets & Benchmarks
- Nymeria
- Inter-X - Towards Versatile Human-Human Interaction Analysis, Xu et al. CVPR 2024.
- HardMo - A Large-Scale Hardcase Dataset for Motion Capture, Liao et al. CVPR 2024.
- RELI11D
- GroundLink
- HOH - Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count, Wiederhold et al. NeurIPS D&B 2023.
- AtoM - Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward, Han et al. ArXiv 2024.
- Evans et al
- MotionCritic
- EMHI - A Multimodal Egocentric Human Motion Dataset with HMD and Body-Worn IMUs, Fan et al. ArXiv 2024.
- EgoSim - An Egocentric Multi-view Simulator for Body-worn Cameras during Human Motion, Hollidt et al. NeurIPS D&B 2024.
- Muscles in Time
- Text to blind motion
- MotionBank - A Large-scale Video Motion Benchmark with Disentangled Rule-based Annotations, Xu et al. ArXiv 2024.
- AddBiomechanics
- LiveHPS++
- SignAvatars - A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark, Yu et al. ECCV 2024.
- Motion-X - A Large-scale 3D Expressive Whole-body Human Motion Dataset, Lin et al. NeurIPS D&B 2023.
- Humans in Kitchens - Multi-Person Human Motion Forecasting with Scene Context, Tanke et al. NeurIPS D&B 2023.
-
Human-Human Interaction
- InterMask
- InterControl
- ReMoS - 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions, Ghosh et al. ECCV 2024.
- ReGenNet - Towards Human Action-Reaction Synthesis, Xu et al. CVPR 2024.
- Fang et al. - Capturing Closely Interacted Two-Person Motions with Reaction Priors, Fang et al. CVPR 2024.
- in2IN - Leveraging Individual Information to Generate Human INteractions, Ruiz-Ponce et al. CVPR Workshop 2024.
- InterGen - Diffusion-based Multi-human Motion Generation under Complex Interactions, Liang et al. IJCV 2024.
- ActFormer - A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation, Xu et al. ICCV 2023.
- Tanaka et al. - Role-aware Interaction Generation from Textual Description, Tanaka et al. ICCV 2023.
- Hi4D
-
Reviews & Surveys
- Zhao et al
- Zhu et al - Human Motion Generation: A Survey, Zhu et al. TPAMI 2023.