
awesome-human-motion

An aggregation of human motion understanding research.
https://github.com/foruck/awesome-human-motion


  • Uncategorized

      • InterDreamer - Shot Text to 3D Dynamic Human-Object Interaction, Xu et al.
      • CHOIS - Object Interaction Synthesis, Li et al.
      • MotionCraft - Body Motion with Plug-and-Play Multimodal Controls, Bian et al.
      • AAMDM - regressive Motion Diffusion Model, Li et al.
      • LLaMo
      • OMG - vocabulary Motion Generation via Mixture of Controllers, Liang et al.
      • FlowMDM
      • Digital Life Project
      • STMC - Track Timeline Control for Text-Driven 3D Human Motion Generation, Petrovich et al.
      • InstructMotion - to-Motion Generation with Human Preference, Sheng et al.
      • Single Motion Diffusion
      • PriorMDM
      • MobileH2R
      • ReinDiffuse
      • InfiniDreamer
      • FTMoMamba
      • KinMo - aware Human Motion Understanding and Generation, Zhang et al.
      • Mo et al.
      • HMDM
      • MotionDiffuse - Text-Driven Human Motion Generation with Diffusion Model, Zhang et al.
      • MVLift
      • DisCoRD
      • MoTe - Text Diffusion Model for Multiple Generation Tasks, Wue et al.
      • Unimotion
      • ARCTIC - Object Manipulation, Fan et al.
      • TOCH - Temporal Object-to-Hand Correspondence for Motion Refinement, Zhou et al.
      • COUCH - Chair Interactions, Zhang et al.
      • SAGA - Body Grasping with Contact, Wu et al.
      • GOAL - Body Motion for Hand-Object Grasping, Taheri et al.
      • GRAB - Body Human Grasping of Objects, Taheri et al.
      • CHAIRS - Body Articulated Human-Object Interaction, Jiang et al.
      • InterDiff - Object Interactions with Physics-Informed Diffusion, Xu et al.
      • Object Pop-up - Can we infer 3D objects and their poses from human interactions alone?, Petrov et al.
      • GenMoStyle
      • SMGDiff
      • MCM-LDM - condition Motion Latent Diffusion Model, Song et al.
      • MoST
      • OOHMG - being: Open-vocabulary Text-to-Motion Generation with Wordless Training, Lin et al.
      • OOD-HOI - Text-Driven 3D Whole-Body Human-Object Interactions Generation Beyond Training Domains, Zhang et al.
      • COLLAGE - Agent Interaction Generation using Hierarchical Latent Diffusion and Language Models, Daiya et al.
      • SkillMimic
      • CORE4D - Object-Human Interaction Dataset for Collaborative Object REarrangement, Zhang et al.
      • Wu et al - Object Interaction from Human-Level Instructions, Wu et al.
      • HumanVLA - Language Directed Object Rearrangement by Physical Humanoid, Xu et al.
      • OmniGrasp
      • EgoChoir - Object Interaction Regions from Egocentric Views, Yang et al.
      • CooHOI - Object Interaction with Manipulated Object Dynamics, Gao et al.
      • InterFusion - Driven Generation of 3D Human-Object Interaction, Dai et al.
      • F-HOI - Toward Fine-grained Semantic-Aligned 3D Human-Object Interactions, Yang et al.
      • AtoM - to-Motion Model at Event-Level with GPT-4Vision Reward, Han et al.
      • PACER+ - Demand Pedestrian Animation Controller in Driving Scenarios, Wang et al.
      • TesMo
      • TLcontrol
      • ExpGest - Text Guidance, Cheng et al.
      • Chen et al - Informed Vector Quantization Variational Auto-Encoder for Text-to-Motion Generation, Chen et al.
      • HumanTOMATO - aligned Whole-body Motion Generation, Lu et al.
      • CondMDI - betweening with Diffusion Models, Cohan et al.
      • Morph - free Physics Optimization Framework for Human Motion Generation, Li et al.
      • Lodge++ - quality and Long Dance Generation with Vivid Choreography Patterns, Li et al.
      • MotionCLR - free Editing via Understanding Attention Mechanisms, Chen et al.
      • MotionGlot - Embodied Motion Generation Model, Harithas et al.
      • LEAD
      • Leite et al. - to-Motion Models via Pose and Video Conditioned Editing, Leite et al.
      • FreeMotion - free Text-to-Motion Synthesis, Fan et al.
      • Kinematic Phrases
      • MotionChain
      • SMooDi
      • BAMM
      • MotionLCM - time Controllable Motion Generation via Latent Consistency Model, Dai et al.
      • Ren et al. - Diffusion Models, Ren et al.
      • M2D2M - Motion Generation from Text with Discrete Diffusion Models, Chi et al.
      • Large Motion Model - Modal Motion Generation, Zhang et al.
      • MoRAG - Multi-Fusion Retrieval Augmented Generation for Human Motion, Shashank et al.
      • synNsync
      • Dong et al - Conditioned 3D American Sign Language Motion Generation, Dong et al.
      • SynTalker - Body Control in Prompt-Based Co-Speech Motion Generation, Chen et al.
      • TEDi - Entangled Diffusion for Long-Term Motion Synthesis, Zhang et al.
      • A-MDM - Regressive Motion Diffusion Models, Shi et al.
      • ProgMoGen - set Motion Control Tasks, Liu et al.
      • AMUSE - driven 3D Body Animation via Disentangled Latent Diffusion, Chhatre et al.
      • Liu et al. - Speech Motion Generation, Liu et al.
      • MAS - view Ancestral Sampling for 3D motion generation using 2D diffusion, Kapon et al.
      • WANDR - guided Human Motion Generation, Diomataris et al.
      • MoMask
      • SuperPADL - Directed Physics-Based Control with Progressive Supervised Distillation, Juravsky et al.
      • Duolando - Policy Reinforcement Learning for Dance Accompaniment, Li et al.
      • HuTuDiffusion - Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback, Han et al.
      • AMD
      • MotionMix - Supervised Diffusion for Controllable Motion Generation, Hoang et al.
      • Text to blind motion
      • UniMTS - training for Motion Time Series, Zhang et al.
      • Christopher et al.
      • MoMu-Diffusion - On Learning Long-Term Motion-Music Synchronization and Correspondence, You et al.
      • MoGenTS - Temporal Joint Modeling, Yuan et al.
      • M3GPT
      • Bikov et al - Tuning, Bikov et al.
      • B2A-HDM - to-Motion Synthesis via Basic-to-Advanced Hierarchical Diffusion Model, Xie et al.
      • GUESS - Driven Human Motion Generation, Gao et al.
      • Macwan et al - Fidelity Worker Motion Simulation With Generative AI, Macwan et al.
      • FreeMotion - Free Human Motion Synthesis with Multimodal Large Language Models, Zhang et al.
      • Text Motion Translator - Directional Model for Enhanced 3D Human Motion Generation from Open-Vocabulary Descriptions, Qian et al.
      • GMD
      • SINC
      • Kong et al. - Centric Human Motion Generation in Discrete Latent Space, Kong et al.
      • Jin et al. - Guided Motion Diffusion Model for Text-to-Motion Generation, Jin et al.
      • Motion Mamba
      • EMDM - Quality Human Motion Generation, Zhou et al.
      • CoMo
      • CoMusion
      • Shan et al. - Driven Synthesis of Multi-Person Motions, Shan et al.
      • ParCo - Coordinating Text-to-Motion Synthesis, Zou et al.
      • Sampieri et al. - Aware Motion Synthesis via Latent Diffusion, Sampieri et al.
      • ChroAccRet - Language Models, Fujiwara et al.
      • MHC - Modal Inputs, Liu et al.
      • ProMotion - vocabulary Text-to-Motion Generation, Liu et al.
      • GestureDiffuCLIP
      • EDGE
      • Zhong et al. - Modulation CVAE for 3D Action-Conditioned Human Motion Synthesis, Zhong et al.
      • MotionCLIP
      • LGTM - to-Global Text-Driven Human Motion Diffusion Models, Sun et al.
      • HMD-NeMo - Online 3D Avatar Motion Generation From Sparse Observations, Aliakbarian et al.
      • FgT2M - Fine-Grained Text-Driven Human Motion Generation via Diffusion Model, Wang et al.
      • EMS - conditioned 3D Motion Synthesis with Elaborative Descriptions, Qian et al.
      • GenMM - based Motion Synthesis via Generative Motion Matching, Li et al.
      • BodyFormer - guided 3D Body Gesture Synthesis with Transformer, Pang et al.
      • Alexanderson et al. - driven motion synthesis with diffusion models, Alexanderson et al.
      • AGroL
      • TALKSHOW
      • T2M-GPT - Generating Human Motion from Textual Descriptions with Discrete Representations, Zhang et al.
      • ChapPose
      • AvatarGPT - in-One Framework for Motion Understanding, Planning, Generation and Beyond, Zhou et al.
      • MMM
      • NeRM - Framerate Human Motion Synthesis, Wei et al.
      • OmniControl
      • Adiya et al.
      • Xie et al.
      • GraphMotion - grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs, Jin et al.
      • MotionGPT
      • FineMoGen - Grained Spatio-Temporal Motion Generation and Editing, Zhang et al.
      • InsActor - driven Physics-based Characters, Ren et al.
      • AttT2M - Driven Human Motion Generation with Multi-Perspective Attention Mechanism, Zhong et al.
      • MoDi
      • MoFusion - Diffusion-based Motion Synthesis, Dabral et al.
      • Motion Script
      • Rhythmic Gesticulator - Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings, Ao et al.
      • TEACH
      • Implicit Motion
      • TM2T
      • DeepPhase
      • MotionFix - Driven 3D Human Motion Editing, Athanasiou et al.
      • CigTime
      • HUMOS
      • PhysicsPingPong - based Table Tennis Animation, Wang et al.
      • HOI Animator - prompt Human-object Animations using Novel Perceptive Diffusion Models, Son et al.
      • InterCap
      • OMOMO
      • SIMS - Scene Interactions with Real World Script Planning, Wang et al.
      • DiMoP3D - responsive Diverse Human Motion Prediction, Lou et al.
      • GenZI - Shot 3D Human-Scene Interaction Generation, Li et al.
      • DIMOS
      • InterDance
      • MotionLLM
      • FORCE - object Interaction, Zhang et al.
      • ZeroHSI - Shot 4D Human-Scene Interaction by Video Generation, Li et al.
      • Mimicking-Bench - A Benchmark for Generalizable Humanoid-Scene Interaction Learning via Human Mimicking, Liu et al.
      • SCENIC - aware Semantic Navigation with Instruction-guided Control, Zhang et al.
      • Gu et al
      • Motion-Agent - A Conversational Framework for Human Motion Generation with LLMs, Wu et al.
      • SoPo - to-Motion Generation Using Semi-Online Preference Optimization, Tan et al.
      • RMD - free Retrieval-Augmented Motion Diffuse, Liao et al.
      • BiPO - to-Motion Synthesis, Hong et al.
      • CAMDM
      • D-LORD - LORD for Motion Stylization, Gupta et al.
      • Diffusion Implicit Policy - aware Motion synthesis, Gong et al.
      • LaserHuman - guided Scene-aware Human Motion Generation in Free Environment, Cong et al.
      • LINGO - Scene Interaction Synthesis from Text Instruction, Jiang et al.
      • InterScene
      • SyncDiff - Body Human-Object Interaction Synthesis, He et al.
      • MARDM - Driven Human Motion Generation, Meng et al.
      • L3EM - enriched Text-to-Motion Generation via LLM-guided Limb-level Emotion Manipulating. Yu et al.
      • DiffGrasp - Body Grasping Synthesis Guided by Object Motion Using a Diffusion Model, Zhang et al.
      • Starke et al. - contact character movements, Starke et al.
      • DART - Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control, Zhao et al.
      • MotionRL - to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning, Liu et al.
      • UniMuMo
      • CLoSD - task character control, Tevet et al.
      • Wang et al
      • T2M-X - X: Learning Expressive Text-to-Motion Generation from Partially Annotated Data, Liu et al.
      • Mandelli et al
      • BAD - regressive Diffusion for Text-to-Motion Generation, Hosseyni et al.
      • FG-MDM - Towards Zero-Shot Human Motion Generation via ChatGPT-Refined Descriptions, Shi et al.
      • PIDM - Aware Interaction Diffusion Model for Gesture Generation, Shibasaki et al.
      • TMR - to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis, Petrovich et al.
      • ReMoDiffusion - Augmented Motion Diffusion Model, Zhang et al.
      • BelFusion - Driven Human Motion Prediction, Barquero et al.
      • Bailando - Critic GPT with Choreographic Memory, Li et al.
      • UDE-2 - Part Human Motion Synthesis, Zhou et al.
      • NeMF
      • PADL - Directed Physics-Based Character, Juravsky et al.
      • PoseGPT
      • TEMOS
      • AvatarCLIP - Shot Text-Driven Generation and Animation of 3D Avatars, Hong et al.
      • ACTOR - Conditioned 3D Human Motion Synthesis with Transformer VAE, Petrovich et al.
      • AIST++
      • Starke et al.
      • Liu et al. - Scene Interaction via Space Occupancy, Liu et al.
      • SAST - Person 3D Human Motion Forecasting with Scene Context, Mueller et al.
      • Afford-Motion - guided Human Motion Generation with Scene Affordance, Wang et al.
      • Cen et al.
      • TRUMANS - Scene Interaction Modeling, Jiang et al.
      • UniHSI - Scene Interaction via Prompted Chain-of-Contacts, Xiao et al.
      • Iterative Motion Editing
      • DNO
      • SMEAR - direction, Basset et al.
      • HIMO - Body Human Interacting with Multiple Objects, Lv et al.
      • NIFTY
      • LAMA - Action-Manipulation: Synthesizing Human-Scene Interactions in Complex 3D Environments, Lee et al.
      • Mogo - Quality 3D Human Motion Generation, Fu et al.
      • CoMA - modal Agents, Sun et al.
      • StableMoFusion - based Motion Generation Framework, Huang et al.
      • PiMForce - Informed Muscular Force Learning for Robust Hand Pressure Estimation, Seo et al.
      • CHOICE - Object Interaction in Cluttered Environments for Pick-and-Place Actions, Lu et al.
      • TriDi
      • Zhao et al
      • Zhu et al
      • ScaMo
      • MotionGPT-2 - A General-Purpose Motion-Language Model for Motion Generation and Understanding, Wang et al.
      • EnergyMoGen - Based Diffusion Model in Latent Space, Zhang et al.
      • Move-in-2D - 2D-Conditioned Human Motion Generation, Huang et al.
      • Motion-2-to-3 - Leveraging 2D Motion Data to Boost 3D Motion Generation, Pi et al.
      • Light-T2M - A Lightweight and Fast Model for Text-to-motion Generation, Zeng et al.
      • The Language of Motion - Unifying Verbal and Non-verbal Language of 3D Human Motion, Chen et al.
      • EMAGE - Speech Gesture Generation via Expressive Masked Audio Gesture Modeling, Liu et al.
      • Everything2Motion
      • MotionGPT - Purpose Motion Generators, Zhang et al.
      • Dong et al - grained Motion Diffusion for Text-driven Human Motion Synthesis, Dong et al.
      • UNIMASKM
      • MOJO
      • MulSMo
      • InterTrack
      • Phys-Fullbody-Grasp - Body Hand-Object Interaction Synthesis, Braun et al.
      • FAVOR - Body AR-driven Virtual Object Rearrangement Guided by Instruction Text, Li et al.
      • BEHAVE
      • Sitcom-Crafter - A Plot-Driven Human Motion Generation System in 3D Scenes, Chen et al.
      • Paschalidis et al - body Grasp Synthesis with Directional Controllability, Paschalidis et al.
      • EnvPoser - aware Realistic Human Motion Estimation from Sparse Observations with Uncertainty Modeling. Xia et al.
      • Kang et al - Based Characters, Kang et al.
      • Purposer
      • Mir et al
      • LS-GAN - Human Motion Synthesis with Latent-space GANs, Amballa et al.
      • FlexMotion - Aware, and Controllable Human Motion Generation, Tashakori et al.
      • PackDiT
      • Wang et al
      • Lyu et al - Language Understanding via Sparse Interpretable Characterization, Lyu et al.
      • MotionLab - Condition-Motion Paradigm, Guo et al.
      • CASIM
      • MotionPCM - Time Motion Synthesis with Phased Consistency Model, Jiang et al.
      • GestureLSM - Speech Gesture Generation with Spatial-Temporal Modeling, Liu et al.
      • Free-T2M - Frequency Enhanced Text-to-Motion Diffusion Model With Consistency Loss, Chen et al.
      • ALERT-Motion - Enhanced Adversarial Attack for Text-to-Motion, Miao et al.
      • HGM³
      • SATO - to-Motion Framework, Chen et al.
      • LaMP - Motion Pretraining for Motion Generation, Retrieval, and Captioning, Li et al.
      • MotionDreamer - to-Many Motion Synthesis with Localized Generative Masked Transformer, Wang et al.
  • Human-Object Interaction

    • CG-HOI - Contact-Guided 3D Human-Object Interaction Generation, Diller et al. CVPR 2024.
    • HGHOI - Hierarchical Generation of Human-Object Interactions with Diffusion Probabilistic Models, Pi et al. ICCV 2023.
    • GRIP
    • SMGDiff
    • InterDreamer - Zero-Shot Text to 3D Dynamic Human-Object Interaction, Xu et al. NeurIPS 2024.
    • CHOIS - Controllable Human-Object Interaction Synthesis, Li et al. ECCV 2024.
    • OMOMO
    • CHAIRS - Full-Body Articulated Human-Object Interaction, Jiang et al. ICCV 2023.
    • InterDiff - Generating 3D Human-Object Interactions with Physics-Informed Diffusion, Xu et al. ICCV 2023.
    • Object Pop-up - Can we infer 3D objects and their poses from human interactions alone?, Petrov et al. CVPR 2023.
    • ARCTIC - A Dataset for Dexterous Bimanual Hand-Object Manipulation, Fan et al. CVPR 2023.
    • TOCH - Spatio-Temporal Object-to-Hand Correspondence for Motion Refinement, Zhou et al. ECCV 2022.
    • COUCH - Towards Controllable Human-Chair Interactions, Zhang et al. ECCV 2022.
    • SAGA - Stochastic Whole-Body Grasping with Contact, Wu et al. ECCV 2022.
    • GOAL - Generating 4D Whole-Body Motion for Hand-Object Grasping, Taheri et al. CVPR 2022.
    • GRAB - A Dataset of Whole-Body Human Grasping of Objects, Taheri et al. ECCV 2020.
    • FORCE - object Interaction, Zhang et al. 3DV 2025.
  • Motion Generation

    • KMM
    • LLaMo
    • MotionCraft - Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls, Bian et al. ArXiv 2024.
    • PIDM - Aware Interaction Diffusion Model for Gesture Generation, Shibasaki et al. ICANN 2024.
    • DLow
    • MVLift
    • DisCoRD
    • MoTe - Learning Motion-Text Diffusion Model for Multiple Generation Tasks, Wue et al. ArXiv 2024.
    • ReinDiffuse
    • TMR - Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis, Petrovich et al. ICCV 2023.
    • MAA - Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation, Azadi et al. ICCV 2023.
    • PhysDiff - Physics-Guided Human Motion Diffusion Model, Yuan et al. ICCV 2023.
    • InfiniDreamer
    • FTMoMamba
    • GPHLVM
    • Unimotion
    • synNsync
    • PACER+ - On-Demand Pedestrian Animation Controller in Driving Scenarios, Wang et al. CVPR 2024.
    • TEDi - Temporally-Entangled Diffusion for Long-Term Motion Synthesis, Zhang et al. SIGGRAPH 2024.
    • A-MDM - Interactive Character Control with Auto-Regressive Motion Diffusion Models, Shi et al. SIGGRAPH 2024.
    • Dong et al - Conditioned 3D American Sign Language Motion Generation, Dong et al. EMNLP 2024.
    • SynTalker - Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation, Chen et al. ACM MM 2024.
    • OOHMG - Being Comes from Not-being: Open-vocabulary Text-to-Motion Generation with Wordless Training, Lin et al. CVPR 2023.
    • MotionChain
    • SMooDi
    • BAMM
    • MotionLCM - Real-time Controllable Motion Generation via Latent Consistency Model, Dai et al. ECCV 2024.
    • Ren et al. - Realistic Human Motion Generation with Cross-Diffusion Models, Ren et al. ECCV 2024.
    • M2D2M - Multi-Motion Generation from Text with Discrete Diffusion Models, Chi et al. ECCV 2024.
    • Large Motion Model - Unified Multi-Modal Motion Generation, Zhang et al. ECCV 2024.
    • TesMo
    • TLcontrol
    • ExpGest - Text Guidance, Cheng et al. ICME 2024.
    • Chen et al - Informed Vector Quantization Variational Auto-Encoder for Text-to-Motion Generation, Chen et al. ICME Workshop 2024.
    • HumanTOMATO - Text-aligned Whole-body Motion Generation, Lu et al. ICML 2024.
    • CondMDI - Flexible Motion In-betweening with Diffusion Models, Cohan et al. SIGGRAPH 2024.
    • SuperPADL - Scaling Language-Directed Physics-Based Control with Progressive Supervised Distillation, Juravsky et al. SIGGRAPH 2024.
    • SINC
    • Kong et al. - Priority-Centric Human Motion Generation in Discrete Latent Space, Kong et al. ICCV 2023.
    • GestureDiffuCLIP
    • Duolando - Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment, Li et al. ICLR 2024.
    • HuTuDiffusion - Human-Tuned Navigation of Latent Motion Diffusion Models with Minimal Feedback, Han et al. AAAI 2024.
    • AAMDM - Accelerated Auto-regressive Motion Diffusion Model, Li et al. CVPR 2024.
    • OMG - Towards Open-vocabulary Motion Generation via Mixture of Controllers, Liang et al. CVPR 2024.
    • FlowMDM
    • Digital Life Project
    • STMC - Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation, Petrovich et al. CVPR Workshop 2024.
    • InstructMotion - Exploring Text-to-Motion Generation with Human Preference, Sheng et al. CVPR Workshop 2024.
    • Single Motion Diffusion
    • PriorMDM
    • GMD
    • Mo et al.
    • HMDM
  • Motion Stylization

  • Datasets & Benchmarks

    • Nymeria
    • Inter-X - Towards Versatile Human-Human Interaction Analysis, Xu et al. CVPR 2024.
    • HardMo - A Large-Scale Hardcase Dataset for Motion Capture, Liao et al. CVPR 2024.
    • RELI11D
    • GroundLink
    • HOH - Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count, Wiederhold et al. NeurIPS D&B 2023.
    • AtoM - to-Motion Model at Event-Level with GPT-4Vision Reward, Han et al. ArXiv 2024.
    • Evans et al
    • MotionCritic
    • EMHI - A Multimodal Egocentric Human Motion Dataset with HMD and Body-Worn IMUs, Fan et al. ArXiv 2024.
    • EgoSim - An Egocentric Multi-view Simulator for Body-worn Cameras during Human Motion, Hollidt et al. NeurIPS D&B 2024.
    • Muscles in Time
    • Text to blind motion
    • MotionBank - A Large-scale Video Motion Benchmark with Disentangled Rule-based Annotations, Xu et al. ArXiv 2024.
    • AddBiomechanics
    • LiveHPS++
    • SignAvatars - A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark, Yu et al. ECCV 2024.
    • Motion-X - A Large-scale 3D Expressive Whole-body Human Motion Dataset, Lin et al. NeurIPS D&B 2023.
    • Humans in Kitchens - Multi-Person Human Motion Forecasting with Scene Context, Tanke et al. NeurIPS D&B 2023.
  • Human-Scene Interaction

    • Narrator - Towards Natural Control of Human-Scene Interaction Generation via Relationship Reasoning, Xuan et al. ICCV 2023.
    • CIMI4D - A Large Multimodal Climbing Motion Dataset under Human-scene Interactions, Yan et al. CVPR 2023.
    • Scene-Ego - Scene-aware Egocentric 3D Human Pose Estimation, Wang et al. CVPR 2023.
    • SLOPER4D - A Scene-Aware Dataset for Global 4D Human Pose Estimation in Urban Environments, Dai et al. CVPR 2023.
    • CIRCLE
    • SceneDiffuser - Diffusion-based Generation, Optimization, and Planning in 3D Scenes, Huang et al. CVPR 2023.
    • PMP - Learning to Physically Interact with Environments using Part-wise Motion Priors, Bae et al. SIGGRAPH 2023.
    • Hassan et al. - Synthesizing Physical Character-Scene Interactions, Hassan et al. SIGGRAPH 2023.
    • Mao et al. - Contact-aware Human Motion Forecasting, Mao et al. NeurIPS 2022.
    • HUMANISE - Language-conditioned Human Motion Generation in 3D Scenes, Wang et al. NeurIPS 2022.
    • EmbodiedPose - Embodied Scene-aware Human Pose Estimation, Luo et al. NeurIPS 2022.
    • GIMO - Gaze-Informed Human Motion Prediction in Context, Zheng et al. ECCV 2022.
    • Wang et al. - Towards Diverse and Natural Scene-aware 3D Human Motion Synthesis, Wang et al. CVPR 2022.
    • GAMMA
    • SAMP - Stochastic Scene-Aware Motion Prediction, Hassan et al. ICCV 2021.
    • PLACE
    • PSI
    • NSM - Neural State Machine for Character-Scene Interactions, Starke et al. SIGGRAPH Asia 2019.
    • PROX
  • Human-Human Interaction

    • InterMask
    • InterControl
    • ReMoS - 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions, Ghosh et al. ECCV 2024.
    • ReGenNet - Towards Human Action-Reaction Synthesis, Xu et al. CVPR 2024.
    • Fang et al. - Capturing Closely Interacted Two-Person Motions with Reaction Priors, Fang et al. CVPR 2024.
    • in2IN - Leveraging Individual Information to Generate Human Interactions, Ruiz-Ponce et al. CVPR Workshop 2024.
    • InterGen - Diffusion-based Multi-human Motion Generation under Complex Interactions, Liang et al. IJCV 2024.
    • ActFormer - A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation, Xu et al. ICCV 2023.
    • Tanaka et al. - Role-aware Interaction Generation from Textual Description, Tanaka et al. ICCV 2023.
    • Hi4D
  • Motion Generation, Text/Speech/Music-Driven

    • SemTalk - Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis, Zhang et al. ArXiv 2024.
    • FG-MDM - Towards Zero-Shot Human Motion Generation via ChatGPT-Refined Descriptions, Shi et al. ICPR 2024.
    • PIDM - Aware Interaction Diffusion Model for Gesture Generation, Shibasaki et al. ICANN 2024.
    • Lodge++ - High-quality and Long Dance Generation with Vivid Choreography Patterns, Li et al. ArXiv 2024.
    • MotionCLR - Motion Generation and Training-free Editing via Understanding Attention Mechanisms, Chen et al. ArXiv 2024.
    • MotionGlot - A Multi-Embodied Motion Generation Model, Harithas et al. ArXiv 2024.
    • LEAD
    • Leite et al. - to-Motion Models via Pose and Video Conditioned Editing, Leite et al. ArXiv 2024.
    • MotionRL - Align Text-to-Motion Generation to Human Preferences with Multi-Reward Reinforcement Learning, Liu et al. ArXiv 2024.
    • UniMuMo
    • Meng et al - Driven Human Motion Generation, Meng et al. ArXiv 2024.
    • KinMo - aware Human Motion Understanding and Generation, Zhang et al. ArXiv 2024.
    • Morph - free Physics Optimization Framework for Human Motion Generation, Li et al. ArXiv 2024.
    • MotionLLM
    • DART - A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control, Zhao et al. ArXiv 2024.
    • CLoSD - Closing the Loop between Simulation and Diffusion for Multi-task Character Control, Tevet et al. ArXiv 2024.
    • T2M-X - Learning Expressive Text-to-Motion Generation from Partially Annotated Data, Liu et al. ArXiv 2024.
    • MoRAG - Multi-Fusion Retrieval Augmented Generation for Human Motion, Shashank et al. ArXiv 2024.
    • Mandelli et al
    • BAD - Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation, Hosseyni et al. ArXiv 2024.
    • Text to blind motion
    • UniMTS - Unified Pre-training for Motion Time Series, Zhang et al. NeurIPS 2024.
    • Christopher et al.
    • MoMu-Diffusion - On Learning Long-Term Motion-Music Synchronization and Correspondence, You et al. NeurIPS 2024.
    • MoGenTS - Motion Generation Based on Spatial-Temporal Joint Modeling, Yuan et al. NeurIPS 2024.
    • M3GPT
    • Bikov et al - Tuning, Bikov et al. NeurIPS Workshop 2024.
    • ProgMoGen - Programmable Motion Generation for Open-set Motion Control Tasks, Liu et al. CVPR 2024.
    • AvatarGPT - An All-in-One Framework for Motion Understanding, Planning, Generation and Beyond, Zhou et al. CVPR 2024.
    • HMD-NeMo - Online 3D Avatar Motion Generation From Sparse Observations, Aliakbarian et al. ICCV 2023.
    • Alexanderson et al. - Listen, Denoise, Action! Audio-driven Motion Synthesis with Diffusion Models, Alexanderson et al. SIGGRAPH 2023.
    • UDE
    • GraphMotion - Act As You Wish: Fine-grained Control of Motion Diffusion Model with Hierarchical Semantic Graphs, Jin et al. NeurIPS 2023.