awesome-3dbody-papers
😎 Awesome list of papers about 3D body
https://github.com/3DFaceBody/awesome-3dbody-papers
Body Model
- SCAPE: Shape Completion and Animation of People
- SMPL: A Skinned Multi-Person Linear Model
- SoftSMPL: Data-driven Modeling of Nonlinear Soft-tissue Dynamics for Parametric Humans
- Modeling and Estimation of Nonlinear Skin Mechanics for Animated Avatars
- SUPR: A Sparse Unified Part-Based Human Representation
- BLSM: A Bone-Level Skinned Model of the Human Mesh
- Joint Optimization for Multi-Person Shape Models from Markerless 3D-Scans - Systems-Research-Group/JOMS)
- PanoMan: Sparse Localized Components–based Model for Full Human Motions
- BASH: Biomechanical Animated Skinned Human for Visualization of Kinematics and Muscle Activity - lab-fau/BASH-Model)
- NPMs: Neural Parametric Models for 3D Deformable Shapes
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies
- LEAP: Learning Articulated Occupancy of People
- SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
- GHUM & GHUML: Generative 3D Human Shape and Articulated Pose Models - research/google-research/tree/master/ghum)
Body Pose
- Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation
- PandaNet: Anchor-Based Single-Shot Multi-Person 3D Pose Estimation
- MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency
- VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera - inf.mpg.de/projects/VNect) [[Code]](http://gvv.mpi-inf.mpg.de/projects/VNect)
- XNect: Real-time Multi-person 3D Human Pose Estimation with a Single RGB Camera - inf.mpg.de/projects/XNect/) [[Code]](https://github.com/mehtadushy/SelecSLS-Pytorch)
- PhysCap: Physically Plausible Monocular 3D Motion Capture in Real Time - inf.mpg.de/projects/PhysCap) [[Code]](https://github.com/soshishimada/PhysCap_demo_release/)
- Neural Monocular 3D Human Motion Capture with Physical Awareness - inf.mpg.de/projects/PhysAware) [[Code]](https://github.com/soshishimada/Neural_Physcap_Demo)
- PoseAug: A Differentiable Pose Augmentation Framework for 3D Human Pose Estimation
- Cascaded Deep Monocular 3D Human Pose Estimation with Evolutionary Training Data
- PoseLifter: Absolute 3D Human Pose Lifting Network from a Single Noisy 2D Human Pose
- SRNet: Improving Generalization in 3D Human Pose Estimation with a Split-and-Recombine Approach - and-Recombine-Net)
- Probabilistic Monocular 3D Human Pose Estimation with Normalizing Flows - Monocular-3D-Human-Pose-Estimation-with-Normalizing-Flows)
- Learning Skeletal Graph Neural Networks for Hard 3D Pose Estimation - GNN)
- Learnable Triangulation of Human Pose - triangulation-pytorch)
- FLEX: Parameter-free Multi-view 3D Human Motion Reconstruction
- Weakly-supervised Cross-view 3D Human Pose Estimation
- High Fidelity 3D Reconstructions with Limited Physical Views - fidelity-3d-neural-prior) [[Code]](https://github.com/mosamdabhi/neural-shape-prior)
- SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation
- PI-Net: Pose Interacting Network for Multi-Person Monocular 3D Pose Estimation
- Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks - Multi-Person-Pose)
- FCPose: Fully Convolutional Multi-Person Pose Estimation with Dynamic Instance-Aware Convolutions
- Multi-person 3D Pose Estimation in Crowded Scenes Based on Multi-View Geometry - Crowd-Pose-Estimation-Based-on-MVG)
- Multi-View Multi-Person 3D Pose Estimation with Plane Sweep Stereo
- Direct Multi-view Multi-person 3D Human Pose Estimation - sg/mvp)
- Temporal Smoothing for 3D Human Pose Estimation and Localization for Occluded People
- Attention Mechanism Exploits Temporal Contexts: Real-time 3D Human Pose Reconstruction
- 3D Human Pose Estimation with Spatial and Temporal Transformers
- MHFormer: Multi-Hypothesis Transformer for 3D Human Pose Estimation
- Skeletor: Skeletal Transformers for Robust Body-Pose Estimation
- A Graph Attention Spatio-temporal Convolutional Networks for 3D Human Pose Estimation in Video - Net-3DPoseEstimation)
- TriPose: A Weakly-Supervised 3D Human Pose Estimation via Triangulation from Video
- Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views
- Learning Dynamical Human-Joint Affinity for 3D Pose Estimation in Videos
- Camera Distortion-aware 3D Human Pose Estimation in Video with Optimization-based Meta-Learning
- MeTRAbs: Metric-Scale Truncation-Robust Heatmaps for Absolute 3D Human Pose Estimation. T-BIOM, 2020. [[Page]](https://sites.google.com/a/udayton.edu/jshen1/cvpr2020) [[Code]](https://github.com/lrxjason/Attention3DHumanPose)
- PCLs: Geometry-aware Neural Reconstruction of 3D Pose with Perspective Crop Layers
- Real-time Lower-body Pose Prediction from Sparse Upper-body Tracking Signals
- Context Modeling in 3D Human Pose Estimation: A Unified Perspective
- CanonPose: Self-Supervised Monocular 3D Human Pose Estimation in the Wild
- Invariant Teacher and Equivariant Student for Unsupervised 3D Human Pose Estimation
- Unsupervised 3D Human Pose Representation with Viewpoint and Pose Disentanglement - human-pose)
- Neural MoCon: Neural Motion Control for Physically Plausible Human Motion Capture - NM-2022-03.html)
- DOPE: Distillation Of Part Experts for whole-body 3D pose estimation in the wild
- Residual Pose: A Decoupled Approach for Depth-based 3D Human Pose Estimation
- MocapNET: Ensemble of SNN Encoders for 3D Human Pose Estimation in RGB Images - ModelBasedTracker/MocapNET)
- PoP-Net: Pose over Parts Network for Multi-Person 3D Pose Estimation from a Depth Image
- 3D Human Reconstruction in the Wild with Collaborative Aerial Cameras
Clothed Body Mesh
- SMPLicit: Topology-aware Generative Model for Clothed People
- LiveCap: Real-time Human Performance Capture from Monocular Video - inf.mpg.de/projects/LiveCapV2/)
- DeepCap: Monocular Human Performance Capture Using Weak Supervision - inf.mpg.de/~mhaberma/projects/2020-cvpr-deepcap)
- MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video
- Human Performance Capture from Monocular Video in the Wild - performance-capture/index.php) [[Code]](https://github.com/MoyGcc/hpcwild)
- MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera
- ChallenCap: Monocular 3D Capture of Challenging Human Performances using Multi-Modal References
- TightCap: 3D Human Shape Capture with Clothing Tightness Field
- Deep Physics-aware Inference of Cloth Deformation for Monocular Human Performance Capture
- Video Based Reconstruction of 3D People Models - bs.de/people-snapshot)
- SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video
- High-Fidelity Human Avatars from a Single RGB Camera - Avatar/) [[Code]](https://github.com/hzhao1997/HF-Avatar)
- PatchShading: High-Quality Human Reconstruction by PatchWarping and Shading Refinement
- TotalSelfScan: Learning Full-body Avatars from Self-Portrait Videos of Faces, Hands, and Bodies
- AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture
- Capturing and Animation of Body and Clothing from Monocular Video
- DoubleFusion: Real-time Capture of Human Performance with Inner Body Shape from a Depth Sensor
- Multi-Garment Net: Learning to Dress 3D People from Images - inf.mpg.de/mgn)
- SimulCap: Single-View Human Performance Capture with Cloth Simulation
- OcclusionFusion: Occlusion-aware Motion Estimation for Real-time Dynamic 3D Reconstruction - lin.github.io/OcclusionFusion/) [[Code]](https://github.com/wenbin-lin/OcclusionFusion/)
- NormalGAN: Learning Detailed 3D Human from a Single RGB-D Image
- TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video - reconstructing-detailed-human-texture-and-geometry-from-rgb-d-video)
- PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D Video Sequence - dong.github.io/pina/) [[Code]](https://github.com/zj-dong/pina)
- Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction
- Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors
- POSEFusion: Pose-guided Selective Fusion for Single-view Human Volumetric Capture
- DSFN: Dynamic Surface Function Networks for Clothed Human Bodies
- DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras
- HDHumans: A Hybrid Approach for High-fidelity Digital Humans
- Learning to Reconstruct People in Clothing from a Single RGB Camera - inf.mpg.de/octopus) [[Code]](https://github.com/thmoa/octopus)
- SiCloPe: Silhouette-Based Clothed People
- Tex2Shape: Detailed Full Human Body Geometry from a Single Image - inf.mpg.de/tex2shape) [[Code]](https://github.com/thmoa/tex2shape)
- Image-Guided Human Reconstruction via Multi-Scale Graph Transformation Networks
- Realistic Virtual Humans from Smartphone Videos - gv.cs.tu-dortmund.de/downloads/publications/2020/vrst20.mp4)
- Robust 3D Self-portraits in Seconds
- SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing - inf.mpg.de/sizer) [[Code]](https://github.com/garvita-tiwari/sizer)
- PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
- PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization
- ReFu: Refine and Fuse the Unobserved View for Detail-Preserving Single-Image 3D Human Reconstruction
- Total Scale: Face-to-Body Detail Reconstruction from Sparse RGBD Sensors
- ARCH: Animatable Reconstruction of Clothed Humans
- ARCH++: Animation-Ready Clothed Human Reconstruction Revisited
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling
- Detailed Human Avatars from Monocular Video
- Monocular Real-Time Volumetric Performance Capture - Splinter/MonoPort)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion - inf.mpg.de/ifnets) [[Code]](https://github.com/jchibane/if-net)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction - inf.mpg.de/ipnet) [[Code]](https://github.com/bharat-b7/IPNet)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction
- RIN: Textured Human Model Recovery and Imitation with a Single Image
- 3D Human Avatar Digitization from a Single Image
- Detailed Avatar Recovery from Single Image
- High-Fidelity Clothed Avatar Reconstruction from a Single Image
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
- ICON: Implicit Clothed humans Obtained from Normals
- ECON: Explicit Clothed humans Optimized via Normal integration
- Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing - inf.mpg.de/neuralgif)
- Reconstructing NBA Players - Players)
- Capturing Detailed Deformations of Moving Human Bodies
- Towards Real-World Category-level Articulation Pose Estimation - google.github.io)
- gDNA: Towards Generative Detailed Neural Avatars - ethz.github.io/gdna/)
- Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction - PIFu)
- StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision
- Geometry-aware Two-scale PIFu Representation for Human Reconstruction
Naked Body Mesh
- Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image - x)
- Learning to Estimate 3D Human Pose and Shape from a Single Color Image
- Neural Body Fitting: Unifying Deep Learning and Model Based Human Pose and Shape Estimation
- Appearance Consensus Driven Self-Supervised Human Mesh Recovery - human-mesh) [[Code]](https://github.com/rakeshramesha/SS_Human_Mesh)
- Delving Deep Into Hybrid Annotations for 3D Human Recovery in the Wild - 2019)
- Learning 3D Human Shape and Pose from Dense Body Parts
- Heuristic Weakly Supervised 3D Human Pose Estimation in Novel Contexts without Any 3D Pose Ground Truth
- Revitalizing Optimization for 3D Human Pose and Shape Estimation: A Sparse Constrained Formulation
- PARE: Part Attention Regressor for 3D Human Body Estimation
- Occluded Human Mesh Recovery
- Implicit 3D Human Mesh Recovery using Consistency with Pose and Shape from Unseen-view
- Generative Approach for Probabilistic Human Mesh Recovery using Diffusion Models - HMR)
- 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data
- Parametric Shape Estimation of Human Body under Wide Clothing
- Everybody Is Unique: Towards Unbiased Human Mesh Recovery
- 3D Human Pose, Shape and Texture from Low-Resolution Images and Videos
- On Self-Contact and Human Pose
- Probabilistic 3D Human Shape and Pose Estimation from Multiple Unconstrained Images in the Wild
- Hierarchical Kinematic Probability Distributions for 3D Human Shape and Pose Estimation from Images in the Wild
- Human Body Model Fitting by Learned Gradient Descent - body-fitting)
- Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop
- Learning to Regress Bodies from Images using Differentiable Semantic Rendering
- 3D Human Mesh Regression with Dense Correspondence
- I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image - MeshNet_RELEASE)
- MeshLifter: Weakly Supervised Approach for 3D Human Mesh Reconstruction from a Single 2D Pose Based on Loop Structure
- Learning 3D Human Shape and Pose from Dense Body Parts - 3DHumanReconstruction)
- Exemplar Fine-Tuning for 3D Human Pose Fitting Towards In-the-Wild 3D Human Pose Estimation
- HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation - sjtu/HybrIK)
- Chasing the Tail in Monocular 3D Human Reconstruction with Prototype Memory
- Object-Occluded Human Shape and Pose Estimation from a Single Color Image - OOH-2020-03.html) [[Code]](https://gitee.com/seuvcl/CVPR2020-OOH)
- End-to-end Recovery of Human Shape and Pose
- Hierarchical Kinematic Human Mesh Recovery
- Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose
- PoseNet3D: Learning Temporally Consistent 3D Human Pose via Knowledge Distillation
- Human Mesh Recovery from Monocular Images via a Skeleton-disentangled Representation - CV/DSD-SATN)
- Beyond Weak Perspective for Monocular 3D Human Pose Estimation
- PyMAF: 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop
- KAMA: 3D Keypoint Aware Body Mesh Articulation
- SimPoE: Simulated Character Control for 3D Human Pose Estimation - yuan.com/simpoe)
- SportsCap: Monocular 3D Human Motion Capture and Fine-grained Understanding in Challenging Sports Videos
- CenterHMR: a Bottom-up Single-shot Method for Multi-person 3D Mesh Recovery from a Single Image
- Full-body motion capture for multiple closely interacting persons
- Coherent Reconstruction of Multiple Humans from a Single Image
- Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image
- Monocular, One-stage, Regression of Multiple 3D People
- Putting People in their Place: Monocular Regression of 3D People in Depth
- Reconstructing 3D Human Pose by Watching Humans in the Mirror - Human) [[Code]](https://github.com/zju3dv/Mirrored-Human)
- TRACE: 5D Temporal Regression of Avatars with Dynamic Cameras in 3D Environments
- GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras
- Scene-Aware 3D Multi-Human Motion Capture - inf.mpg.de/projects/scene-aware-3d-multi-human/) [[Code]](https://github.com/dluvizon/scene-aware-3d-multi-human)
- Body Meshes as Points
- VIBE: Video Inference for Human Body Pose and Shape Estimation
- 3D Human Motion Estimation via Motion Compression and Refinement
- Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video
- A Lightweight Graph Transformer Network for Human Mesh Reconstruction from 2D Human Pose
- THUNDR: Transformer-based 3D HUmaN Reconstruction with Markers
- Human Mesh Recovery from Multiple Shots
- PC-HMR: Pose Calibration for 3D Human Mesh Recovery from 2D Images/Videos
- Self-Attentive 3D Human Pose and Shape Estimation from Videos
- Capturing Humans in Motion: Temporal-Attentive 3D Human Pose and Shape Estimation from Monocular Video - net.github.io/MPS-Net/) [[Code]](https://github.com/MPS-Net/MPS-Net_release/)
- Physics-based Human Motion Estimation and Synthesis from Videos
- HuMoR: 3D Human Motion Model for Robust Pose Estimation
- Bilevel Online Adaptation for Out-of-Domain Human Mesh Reconstruction
- Learning Local Recurrent Models for Human Mesh Recovery
- Probabilistic Modeling for Human Mesh Recovery
- Encoder-decoder with Multi-level Attention for 3D Human Shape and Pose Estimation
- Monocular Expressive Body Regression through Body-Driven Attention
- NeuralAnnot: Neural Annotator for in-the-wild Expressive 3D Human Pose and Mesh Training Sets
- Pose2Pose: 3D Positional Pose-Guided 3D Rotational Pose Prediction for Expressive 3D Human Pose and Mesh Estimation
- Monocular Real-time Full Body Capture with Inter-part Correlations
- Collaborative Regression of Expressive Bodies using Moderation
- One-Stage 3D Whole-Body Mesh Recovery - ubody.github.io/) [[Code]](https://github.com/IDEA-Research/OSX)
- Binarized 3D Whole-body Human Mesh Recovery
- Lightweight Multi-person Total Motion Capture Using Sparse Multi-view Cameras
- Real-time RGBD-based Extended Body Pose Estimation - violet.github.io/rgbd-kinect-pose)
- SOMA: Solving Optical Marker-Based MoCap Automatically
- TransPose: Real-time 3D Human Translation and Pose Estimation with Six Inertial Sensors - yi.github.io/TransPose) [[Code]](https://github.com/Xinyu-Yi/TransPose/)
- Physical Inertial Poser (PIP): Physics-aware Real-time Human Motion Tracking from Sparse Inertial Sensors - yi.github.io/PIP/) [[Code]](https://github.com/Xinyu-Yi/PIP)
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds
- Shape-aware Multi-Person Pose Estimation from Multi-View Images - human-pose/) [[Code]](https://github.com/zj-dong/Multi-Person-Pose-Estimation)
- Learning 3D Human Dynamics from Video
- End-to-End Human Pose and Mesh Reconstruction with Transformers
- Video Inference for Human Mesh Recovery with Vision Transformer
- FastMETRO: Cross-Attention of Disentangled Modalities for 3D Human Mesh Recovery with Transformers - ami/FastMETRO)
- Out-of-Domain Human Mesh Reconstruction via Bilevel Online Adaptation
- Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies
- Monocular Total Capture: Posing Face, Body and Hands in the Wild - Perceptual-Computing-Lab/MonocularTotalCapture)
- FrankMocap: A Fast Monocular 3D Hand and Body Motion Capture by Regression and Integration
- Expressive Body Capture: 3D Hands, Face, and Body from a Single Image - x.is.tue.mpg.de) [[Code]](https://github.com/vchoutas/smplify-x)
Human Motion
- Task-Generic Hierarchical Human Motion Prior using VAEs
- 3D Semantic Trajectory Reconstruction from 3D Pixel Continuum - users.cs.umn.edu/~jsyoon/Semantic_trajectory)
- Convolutional Autoencoders for Human Motion Infilling
- Robust Motion In-betweening - motion-in-betweening-2)
- Single-Shot Motion Completion with Transformer
- Learning Compositional Representation for 4D Captures with Neural ODE - CR) [[Code]](https://github.com/BoyanJIANG/4D-Compositional-Representation)
- Graph Constrained Data Representation Learning for Human Motion Segmentation
- Predicting 3D Human Dynamics from Video
- Long-term Human Motion Prediction with Scene Context - IM-Dataset)
- Adversarial Refinement Network for Human Motion Prediction
- Towards Accurate 3D Human Motion Prediction from Incomplete Observations
- Aggregated Multi-GANs for Controlled 3D Human Motion Prediction - GAN)
- Flow-based Autoregressive Structured Prediction of Human Motion
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild
- Multi-level Motion Attention for Human Motion Prediction - mao-2019/HisRepItself)
- We are More than Our Joints: Predicting how 3D Bodies Move - cnsdqz.github.io/MOJO/MOJO.html)
- Improving Human Motion Prediction Through Continual Learning
- MSR-GCN: Multi-Scale Residual Graph Convolution Networks for Human Motion Prediction
- Stochastic Scene-Aware Motion Prediction
- GIMO: Gaze-Informed Human Motion Prediction in Context
- Multiscale Spatio-Temporal Graph Neural Networks for 3D Skeleton-Based Motion Prediction
- Skeleton-Graph: Long-Term 3D Motion Prediction From 2D Observations Using Deep Spatio-Temporal Graph CNNs - Graph)
- Pose Transformers (POTR): Human Motion Prediction with Non-Autoregressive Transformers
- BeLFusion: Latent Diffusion for Behavior-Driven Human Motion Prediction
- Multi-Person 3D Motion Prediction with Multi-Range Transformers
- Tracking People with 3D Representations
- Tracking People by Predicting 3D Appearance, Location and Pose
- GlocalNet: Class-aware Long-term Human Motion Synthesis
- A Causal Convolutional Neural Network for Motion Modeling and Synthesis
- TrajeVAE - Controllable Human Motion Generation from Trajectories - supplementary)
- Action-Conditioned 3D Human Motion Synthesis with Transformer VAE
- Scene-aware Generative Network for Human Motion Synthesis
- Learning a Family of Motor Skills from a Single Motion Clip
- MUGL: Large Scale Multi Person Conditional Action Generation with Locomotion
- DualMotion: Global-to-Local Casual Motion Design for Character Animations
- Character Controllers using Motion VAEs - motion-vaes)
- DanceNet3D: Music Based Dance Generation with Parametric Motion Transformer - tech.github.io/project/dancenet3d) [[Code]](https://github.com/huiye-tech/DanceNet3D)
- DanceAnyWay: Synthesizing Mixed-Genre 3D Dance Movements Through Beat Disentanglement
- Rhythm is a Dancer: Music-Driven Motion Synthesis with Global Structure
- Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory
- Synthesizing Long-Term 3D Human Motion and Interaction in 3D - term-Motion-in-3D-Scenes) [[Code]](https://github.com/jiashunwang/Long-term-Motion-in-3D-Scenes)
- Learn to Dance with AIST++: Music Conditioned 3D Dance Generation
- Learning Speech-driven 3D Conversational Gestures from Video
Human Depth Estimation
- A Neural Network for Detailed Human Depth Estimation from a Single Image - gruvi-3dv/deep_human)
- Learning High Fidelity Depths of Dressed Humans by Watching Social Media Dance Videos
- Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging
- Learning the Depths of Moving People by Watching Frozen People - depth.github.io) [[Code]](https://github.com/google/mannequinchallenge)
- Self-Supervised Human Depth Estimation from Monocular Videos - gruvi-3dv/Self-Supervised-Human-Depth)
Human-Object Interaction
- Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild
- GRAB: A Dataset of Whole-Body Human Grasping of Objects
- Gravity-Aware Monocular 3D Human-Object Reconstruction - inf.mpg.de/GraviCap/) [[Code]](https://github.com/rishabhdabral/gravicap)
- CHORE: Contact, Human and Object REconstruction from a single RGB image - inf.mpg.de/chore/) [[Code]](https://github.com/xiexh20/CHORE)
- InterCap: Joint Markerless 3D Tracking of Humans and Objects in Interaction
- BEHAVE: Dataset and Method for Tracking Human Object Interactions - inf.mpg.de/behave/) [[Code]](https://github.com/xiexh20/behave-dataset)
- FLEX: Full-Body Grasping Without Full-Body Grasps
- Populating 3D Scenes by Learning Human-Scene Interaction
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors - inf.mpg.de/hps)
- Holistic 3D Human and Scene Mesh Estimation from Single View Images
- Soft Walks: Real-Time, Two-Ways Interaction between a Character and Loose Grounds
- RobustFusion: Robust Volumetric Performance Reconstruction under Human-object Interactions from Monocular RGBD Stream
- Resolving 3D Human Pose Ambiguities with 3D Scene Constraints
Animation
- Predicting Animation Skeletons for 3D Articulated Models via Volumetric Nets - xu/AnimSkelVolNet)
- RigNet: Neural Rigging for Articulated Characters - xu.github.io/rig-net) [[Code]](https://github.com/zhan-xu/RigNet)
- HeterSkinNet: A Heterogeneous Network for Skin Weights Prediction
- Skeleton-Aware Networks for Deep Motion Retargeting - motion-editing)
- Contact-Aware Retargeting of Skinned Motion
- Motion Retargetting based on Dilated Convolutions and Skeleton-specific Loss Functions - tdcn) [[Code]](https://github.com/medialab-ku/retargetting-tdcn)
- Flow Guided Transformable Bottleneck Networks for Motion Retargeting
- Functionality-Driven Musculature Retargeting
- A Deep Emulator for Secondary Motion of 3D Characters
- UniCon: Universal Neural Controller For Physics-based Character Motion - tlabs.github.io/unicon)
- DeePSD: Automatic Deep Skinning And Pose Space Deformation For 3D Garment Animation
- Learning Skeletal Articulations With Neural Blend Shapes - blend-shapes) [[Code]](https://github.com/PeizhuoLi/neural-blend-shapes)
- Temporal Parameter-free Deep Skinning of Animated Meshes - publication?ID=48)
Neural Rendering
- UV Volumes for Real-time Rendering of Editable Free-view Human Performance - Volumes/) [[Code]](https://github.com/fanegg/UV-Volumes)
- Neural3D: Light-weight Neural Portrait Scanning via Context-aware Correspondence Learning
- Multi-view Neural Human Rendering
- NeuralHumanFVV: Real-Time Neural Volumetric Human Performance Rendering using RGB Cameras
- LookinGood^π: Real-time Person-independent Neural Re-rendering for High-quality Human Performance Capture
- Few-shot Neural Human Performance Rendering from Sparse RGBD Videos
- SMPLpix: Neural Avatars from 3D Human Models
- Vid2Actor: Free-viewpoint Animatable Person Synthesis from Video in the Wild
- InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds
- RANA: Relightable Articulated Neural Avatars
- Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
- Efficient Neural Radiance Fields with Learned Depth-Guided Sampling
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control
- StylePeople: A Generative Model of Fullbody Human Avatars - violet.github.io/style-people) [[Code]](https://github.com/saic-vul/style-people)
- A-NeRF: Surface-free Human 3D Pose Refinement via Neural Rendering - Surface-free-Pose-Refinement)
- ANR: Articulated Neural Rendering for Virtual Avatars [[Project]](https://anr-avatars.github.io)
- HumanNeRF: Generalizable Neural Human Radiance Field from Sparse Inputs
- Neural Articulated Radiance Field [[Code]](https://github.com/nogu-atsu/NARF)
- Editable Free-viewpoint Video Using a Layered Neural Representation
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [[Project]](https://www.albertpumarola.com/research/D-NeRF/index.html)
- Animatable Neural Radiance Fields for Human Body Modeling
- Neural Free-Viewpoint Performance Rendering under Complex Human-object Interactions
- MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras
- Rotationally-Temporally Consistent Novel-View Synthesis of Human Performance Video
- Human View Synthesis using a Single Sparse RGB-D Input
- Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering
- HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video
- Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces
- NeuMan: Neural Human Radiance Field from a Single Video [[Code]](https://github.com/apple/ml-neuman)
- Structured Local Radiance Fields for Human Avatar Modeling
- Animatable Neural Implicit Surfaces for Creating Avatars from Videos
- DoubleField: Bridging the Neural Surface and Radiance Fields for High-fidelity Human Reconstruction and Rendering
- Human Performance Modeling and Rendering via Neural Animated Mesh [[Code]](https://github.com/zhaofuq/Instant-NSR)
-
Cloth/Try-On
- Reflection Symmetry in Textured Sewing Patterns
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single-view Images
- REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos [[Code]](https://github.com/GAP-LAB-CUHK-SZ/REC-MV)
- Garment4D: Garment Reconstruction from Point Cloud Sequences
- TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style [[Project]](https://virtualhumans.mpi-inf.mpg.de/tailornet) [[Code]](https://github.com/chaitanya100100/TailorNet)
- Learning-Based Animation of Clothing for Virtual Try-On [[Code]](https://github.com/isantesteban/vto-learning-based-animation)
- Detail-aware Deep Clothing Animations Infused with Multi-source Attributes
- Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On
- Physically Based Neural Simulator for Garment Animation
- P-Cloth: Interactive Complex Cloth Simulation on Multi-GPU Systems using Dynamic Matrix Assembly and Pipelined Implicit Integrators [[Project]](https://min-tang.github.io/home/PCloth/index.html) [[Code]](https://min-tang.github.io/home/PCloth/files/MultiGPUCGSolver-0.1.zip)
- Neural Cloth Simulation
- N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks [[Project]](https://min-tang.github.io/home/NCloth/)
- Deep Deformation Detail Synthesis for Thin Shell Models
- DeepCloth: Neural Garment Representation for Shape and Style Editing
- 3D Custom Fit Garment Design with Body Movement
- Dynamic Neural Garments
- Motion Guided Deep Dynamic 3D Garments
- DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact
- Example-based Real-time Clothing Synthesis for Virtual Agents
- BCNet: Learning Body and Cloth Shape from a Single Image
- 3D Clothed Human Reconstruction in the Wild
- Robust 3D Garment Digitization from Monocular 2D Images for 3D Virtual Try-On Systems
- DIG: Draping Implicit Garment over the Human Body
- Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction from Single Images
- PERGAMO: Personalized 3D Garments from Monocular Video
- Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On
- ULNeF: Untangled Layered Neural Fields for Mix-and-Match Virtual Try-On
- SNUG: Self-Supervised Neural Dynamic Garments
- Neural 3D Clothes Retargeting from a Single Image
- DeepWrinkles: Accurate and Realistic Clothing Modeling
- Wallpaper Pattern Alignment along Garment Seams
-
Dataset
- 3DPW: Recovering Accurate 3D Human Pose in The Wild Using IMUs and a Moving Camera [[Project]](https://virtualhumans.mpi-inf.mpg.de/3DPW)
- AMASS: Archive of Motion Capture as Surface Shapes
- 3DBodyTex: Textured 3D Body Dataset
- Motion Capture from Internet Videos
- HUMBI: A Large Multiview Dataset of Human Body Expressions [[Project]](https://humbi-data.net) [[Code]](https://github.com/zhixuany/HUMBI)
- SMPLy Benchmarking 3D Human Pose Estimation in the Wild [[Project]](https://europe.naverlabs.com/research/computer-vision/mannequin-benchmark)
- Reconstructing 3D Human Pose by Watching Humans in the Mirror [[Project]](https://zju3dv.github.io/Mirrored-Human) [[Code]](https://github.com/zju3dv/Mirrored-Human)
- HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling
- AGORA: Avatars in Geography Optimized for Regression Analysis
- BABEL: Bodies, Action and Behavior with English Labels
- BEHAVE: Dataset and Method for Tracking Human Object Interactions [[Project]](https://virtualhumans.mpi-inf.mpg.de/behave/) [[Code]](https://github.com/xiexh20/behave-dataset)
- Object-Occluded Human Shape and Pose Estimation from a Single Color Image [[Project]](https://www.yangangwang.com/papers/ZHANG-OOH-2020-03.html) [[Code]](https://gitee.com/seuvcl/CVPR2020-OOH)
- Full-Body Awareness from Partial Observations
- 3DPeople: Modeling the Geometry of Dressed Humans [[Project]](https://cv.iri.upc-csic.es) [[Code]](https://github.com/albertpumarola/3DPeople-Dataset)