Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/pansanity666/Awesome-Avatars
List of recent advances for human avatars, including generation, reconstruction, and editing, etc.
List: Awesome-Avatars
3dreconstruction aigc avatar diffusion humannerf image-to-3d motion-generation nerf neural-rendering sdf smpl stable-diffusion t23d text-to-3d tt3d
Last synced: 3 months ago
JSON representation
- Host: GitHub
- URL: https://github.com/pansanity666/Awesome-Avatars
- Owner: pansanity666
- Created: 2023-09-19T06:16:36.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-04-13T13:43:49.000Z (8 months ago)
- Last Synced: 2024-04-24T11:05:08.579Z (8 months ago)
- Topics: 3dreconstruction, aigc, avatar, diffusion, humannerf, image-to-3d, motion-generation, nerf, neural-rendering, sdf, smpl, stable-diffusion, t23d, text-to-3d, tt3d
- Homepage:
- Size: 71.3 KB
- Stars: 204
- Watchers: 14
- Forks: 12
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- Awesome-Text2X-Resources - Awesome-Avatars
- ultimate-awesome - Awesome-Avatars - List of recent advances for human avatars, including generation, reconstruction, and editing, etc. (Other Lists / PowerShell Lists)
README
# Awesome Avatars [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)
A list of recent advances in human avatars, covering generation, reconstruction, editing, and more.
If you find a missing paper, feel free to open an issue or PR.
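New entries generally follow the format used throughout this list: a linked title with its venue, followed by Star/arXiv/Website badges. A sketch of that template is below; the OWNER/REPO, arXiv ID, and project-page URL are placeholders to be replaced with the paper's actual links.

```markdown
- [Paper Title](https://arxiv.org/abs/XXXX.XXXXX) (Venue Year)
[![Star](https://img.shields.io/github/stars/OWNER/REPO?style=social)](https://github.com/OWNER/REPO)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/XXXX.XXXXX)
[![Website](https://img.shields.io/badge/Website-9cf)](https://example.github.io/project-page/)
```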
## Table of Contents
- [Awesome Avatars ](#awesome-avatars-)
- [Avatar Generation](#avatar-generation)
- [Per-subject Avatar Reconstruction](#per-subject-avatar-reconstruction)
- [Generalizable Avatar Novel View Synthesis](#generalizable-avatar-novel-view-synthesis)
- [Generalizable Avatar Mesh Reconstruction](#generalizable-avatar-mesh-reconstruction)
- [Text-to-Avatar](#text-to-avatar)
- [Avatar Interaction](#avatar-interaction)
- [Motion Generation](#motion-generation)
- [SMPL Estimation](#smpl-estimation)
- [Dataset](#dataset)
- [Acknowledgement](#acknowledgement)

### Avatar Generation
- [Unsupervised Learning of Efficient Geometry-Aware Neural Articulated Representations](https://arxiv.org/pdf/2204.08839.pdf) (ECCV 2022)
[![Star](https://img.shields.io/github/stars/nogu-atsu/ENARF-GAN?style=social)](https://github.com/nogu-atsu/ENARF-GAN)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2204.08839.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://github.com/nogu-atsu/ENARF-GAN)

- [Generative Neural Articulated Radiance Fields](https://arxiv.org/pdf/2206.14314.pdf) (NeurIPS 2022)
[![Star](https://img.shields.io/github/stars/alexanderbergman7/GNARF?style=social)](https://github.com/alexanderbergman7/GNARF)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2206.14314.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://github.com/alexanderbergman7/GNARF)

- [AvatarGen: A 3D Generative Model for Animatable Human Avatars](http://arxiv.org/abs/2208.00561) (arXiv 01/08/2022)
[![Star](https://img.shields.io/github/stars/jfzhang95/AvatarGen?style=social)](https://github.com/jfzhang95/AvatarGen)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2208.00561)

- [EVA3D: Compositional 3D Human Generation from 2D Image Collections](https://arxiv.org/pdf/2210.04888.pdf) (ICLR 2023 Spotlight)
[![Star](https://img.shields.io/github/stars/hongfz16/EVA3D?style=social)](https://github.com/hongfz16/EVA3D)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2210.04888.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://hongfz16.github.io/projects/EVA3D)

- [AG3D: Learning to Generate 3D Avatars from 2D Image Collections](https://arxiv.org/pdf/2305.02312.pdf) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/zj-dong/AG3D?style=social)](https://github.com/zj-dong/AG3D)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2305.02312.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://zj-dong.github.io/AG3D)

- [Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model using Pixel-aligned Reconstruction Priors](https://arxiv.org/abs/2302.01162) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/X-zhangyang/Get3DHuman?style=social)](https://github.com/X-zhangyang/Get3DHuman/)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2302.01162)
[![Website](https://img.shields.io/badge/Website-9cf)](https://x-zhangyang.github.io/2023_Get3DHuman/)

- [3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective](https://arxiv.org/abs/2204.13096) (arXiv 2022)
[![Star](https://img.shields.io/github/stars/layumi/3D-Magic-Mirror?style=social)](https://github.com/layumi/3D-Magic-Mirror)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2204.13096)

### Per-subject Avatar Reconstruction
- [Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans](https://arxiv.org/pdf/2012.15838.pdf) (CVPR 2021)
[![Star](https://img.shields.io/github/stars/zju3dv/neuralbody?style=social)](https://github.com/zju3dv/neuralbody)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2012.15838.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://zju3dv.github.io/neuralbody/)

- [Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies](https://arxiv.org/pdf/2212.07422.pdf) (ICCV 2021)
[![Star](https://img.shields.io/github/stars/zju3dv/animatable_nerf?style=social)](https://github.com/zju3dv/animatable_nerf)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2212.07422.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://zju3dv.github.io/animatable_nerf/)

- [Neural Human Radiance Field from a Single Video](https://arxiv.org/pdf/2203.12575.pdf) (ECCV 2022)
[![Star](https://img.shields.io/github/stars/apple/ml-neuman?style=social)](https://github.com/apple/ml-neuman)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2203.12575.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://github.com/apple/ml-neuman)

- [HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video](https://arxiv.org/abs/2201.04127) (CVPR 2022)
[![Star](https://img.shields.io/github/stars/chungyiweng/humannerf?style=social)](https://github.com/chungyiweng/humannerf)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2201.04127)
[![Website](https://img.shields.io/badge/Website-9cf)](https://grail.cs.washington.edu/projects/humannerf/)

- [MonoHuman: Animatable Human Neural Field from Monocular Video](https://arxiv.org/abs/2304.02001) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/Yzmblog/MonoHuman?style=social)](https://github.com/Yzmblog/MonoHuman)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2304.02001)
[![Website](https://img.shields.io/badge/Website-9cf)](https://yzmblog.github.io/projects/MonoHuman/)

- [InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds](https://arxiv.org/pdf/2012.15838.pdf) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/tijiang13/InstantAvatar?style=social)](https://github.com/tijiang13/InstantAvatar)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2212.07422.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://tijiang13.github.io/InstantAvatar/)

- [Vid2Avatar: 3D Avatar Reconstruction From Videos in the Wild via Self-Supervised Scene Decomposition](https://arxiv.org/abs/2302.11566) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/MoyGcc/vid2avatar?style=social)](https://github.com/MoyGcc/vid2avatar)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2302.11566)
[![Website](https://img.shields.io/badge/Website-9cf)](https://moygcc.github.io/vid2avatar/)

- [Relightable and Animatable Neural Avatar from Sparse-View Video](http://arxiv.org/abs/2308.07903) (arXiv 17/08/2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2308.07903)
[![Website](https://img.shields.io/badge/Website-9cf)](https://zju3dv.github.io/relightable_avatar/)

- [TeCH: Text-guided Reconstruction of Lifelike Clothed Humans](http://arxiv.org/abs/2308.08545) (3DV 2024)
[![Star](https://img.shields.io/github/stars/huangyangyi/TeCH?style=social)](https://github.com/huangyangyi/TeCH)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2308.08545)
[![Website](https://img.shields.io/badge/Website-9cf)](https://huangyangyi.github.io/TeCH/)

- [Learning Neural Volumetric Representations of Dynamic Humans in Minutes](http://arxiv.org/abs/2302.12237) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/zju3dv/instant-nvr?style=social)](https://github.com/zju3dv/instant-nvr)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2302.12237)
[![Website](https://img.shields.io/badge/Website-9cf)](https://zju3dv.github.io/instant_nvr)

- [HUGS: Human Gaussian Splats](http://arxiv.org/abs/2311.17910) (arXiv 2023)
[![Star](https://img.shields.io/github/stars/apple/ml-hugs?style=social)](https://github.com/apple/ml-hugs)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2311.17910)
[![Website](https://img.shields.io/badge/Website-9cf)](https://machinelearning.apple.com/research/hugs)

- [GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians](http://arxiv.org/abs/2312.02134) (arXiv 2023)
[![Star](https://img.shields.io/github/stars/huliangxiao/GaussianAvatar?style=social)](https://github.com/huliangxiao/GaussianAvatar)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2312.02134)
[![Website](https://img.shields.io/badge/Website-9cf)](https://huliangxiao.github.io/GaussianAvatar)

- [GART: Gaussian Articulated Template Models](https://arxiv.org/abs/2311.16099) (arXiv 2023)
[![Star](https://img.shields.io/github/stars/JiahuiLei/GART?style=social)](https://github.com/JiahuiLei/GART)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2311.16099)
[![Website](https://img.shields.io/badge/Website-9cf)](https://www.cis.upenn.edu/~leijh/projects/gart/)

- [Human101: Training 100+FPS Human Gaussians in 100s from 1 View](https://arxiv.org/abs/2312.15258) (arXiv 2023)
[![Star](https://img.shields.io/github/stars/longxiang-ai/Human101?style=social)](https://github.com/longxiang-ai/Human101)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2312.15258)
[![Website](https://img.shields.io/badge/Website-9cf)](https://longxiang-ai.github.io/Human101/)

### Generalizable Avatar Novel View Synthesis
- [Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering](https://arxiv.org/pdf/2109.07448.pdf) (NeurIPS 2021)
[![Star](https://img.shields.io/github/stars/YoungJoongUNC/Neural_Human_Performer?style=social)](https://github.com/YoungJoongUNC/Neural_Human_Performer)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2109.07448.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://youngjoongunc.github.io/nhp/)

- [MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images](https://arxiv.org/abs/2203.16875) (TPAMI 2022)
[![Star](https://img.shields.io/github/stars/gaoxiangjun/MPS-NeRF?style=social)](https://github.com/gaoxiangjun/MPS-NeRF)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2203.16875)
[![Website](https://img.shields.io/badge/Website-9cf)](https://gaoxiangjun.github.io/mps_nerf/)

- [GP-NeRF: Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural Human Rendering](https://arxiv.org/pdf/2112.04312.pdf) (ECCV 2022)
[![Star](https://img.shields.io/github/stars/sail-sg/GP-Nerf?style=social)](https://github.com/sail-sg/GP-Nerf)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2112.04312.pdf)

- [HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs](https://arxiv.org/pdf/2112.02789.pdf) (CVPR 2022)
[![Star](https://img.shields.io/github/stars/zhaofuq/HumanNeRF?style=social)](https://github.com/zhaofuq/HumanNeRF)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2112.02789.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://zhaofuq.github.io/humannerf/)

- [MonoNHR: Monocular Neural Human Renderer](https://arxiv.org/abs/2210.00627) (3DV 2022)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2210.00627)

- [Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors](https://arxiv.org/abs/2208.11905) (TVCG 2023)
[![Star](https://img.shields.io/github/stars/Talegqz/neural_novel_actor?style=social)](https://github.com/Talegqz/neural_novel_actor)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2208.11905)
[![Website](https://img.shields.io/badge/Website-9cf)](https://talegqz.github.io/neural_novel_actor/)

- [GHuNeRF: Generalizable Human NeRF from a Monocular Video](https://arxiv.org/pdf/2308.16576v2.pdf) (arXiv 03/09/2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2308.16576v2.pdf)

- [Neural Image-based Avatars: Generalizable Radiance Fields for Human Avatar Modeling](https://arxiv.org/pdf/2304.04897.pdf) (ICLR 2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2304.04897.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://youngjoongunc.github.io/nia/)

- [ActorsNeRF: Animatable Few-shot Human Rendering with Generalizable NeRFs](https://arxiv.org/abs/2304.14401) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/JitengMu/ActorsNeRF?style=social)](https://github.com/JitengMu/ActorsNeRF)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2304.14401)
[![Website](https://img.shields.io/badge/Website-9cf)](https://jitengmu.github.io/ActorsNeRF/)

- [SHERF: Generalizable Human NeRF from a Single Image](https://arxiv.org/abs/2303.12791) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/skhu101/SHERF?style=social)](https://github.com/skhu101/SHERF)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2303.12791)
[![Website](https://img.shields.io/badge/Website-9cf)](https://skhu101.github.io/SHERF/)

- [TransHuman: A Transformer-based Human Representation for Generalizable Neural Human Rendering](https://arxiv.org/abs/2307.12291) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/pansanity666/TransHuman?style=social)](https://github.com/pansanity666/TransHuman/)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2307.12291)
[![Website](https://img.shields.io/badge/Website-9cf)](https://pansanity666.github.io/TransHuman/)

- [GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis](http://arxiv.org/abs/2312.02155) (arXiv)
[![Star](https://img.shields.io/github/stars/ShunyuanZheng/GPS-Gaussian?style=social)](https://github.com/ShunyuanZheng/GPS-Gaussian)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2312.02155)
[![Website](https://img.shields.io/badge/Website-9cf)](https://shunyuanzheng.github.io/GPS-Gaussian)

### Generalizable Avatar Mesh Reconstruction
- [ICON: Implicit Clothed humans Obtained from Normals](https://arxiv.org/pdf/2112.09127.pdf) (CVPR 2022)
[![Star](https://img.shields.io/github/stars/YuliangXiu/ICON?style=social)](https://github.com/YuliangXiu/ICON)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2112.09127.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://icon.is.tue.mpg.de/)

- [ECON: Explicit Clothed humans Optimized via Normal integration](https://arxiv.org/pdf/2212.07422.pdf) (CVPR 2023 Highlight)
[![Star](https://img.shields.io/github/stars/YuliangXiu/ECON?style=social)](https://github.com/YuliangXiu/ECON)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2212.07422.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://xiuyuliang.cn/econ/)

- [Structured 3D Features for Reconstructing Relightable and Animatable Avatars](https://arxiv.org/abs/2212.06820) (CVPR 2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2212.06820)
[![Website](https://img.shields.io/badge/Website-9cf)](https://enriccorona.github.io/s3f/)

- [SeSDF: Self-evolved Signed Distance Field for Implicit 3D Clothed Human Reconstruction](https://yukangcao.github.io/SeSDF/index_files/SeSDF_05448.pdf) (CVPR 2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2304.00359)
[![Website](https://img.shields.io/badge/Website-9cf)](https://yukangcao.github.io/SeSDF/)

- [DIFu: Depth-Guided Implicit Function for Clothed Human Reconstruction](https://openaccess.thecvf.com/content/CVPR2023/papers/Song_DIFu_Depth-Guided_Implicit_Function_for_Clothed_Human_Reconstruction_CVPR_2023_paper.pdf) (CVPR 2023)
[![Website](https://img.shields.io/badge/Website-9cf)](https://eadcat.github.io/DIFu/)

- [Complete 3D Human Reconstruction from a Single Incomplete Image](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Complete_3D_Human_Reconstruction_From_a_Single_Incomplete_Image_CVPR_2023_paper.pdf) (CVPR 2023)
[![Website](https://img.shields.io/badge/Website-9cf)](https://junyingw.github.io/paper/3d_inpainting/)

- [High-fidelity 3D Human Digitization from Single 2K Resolution Images](https://arxiv.org/pdf/2303.15108.pdf) (CVPR 2023 Highlight)
[![Star](https://img.shields.io/github/stars/SangHunHan92/2K2K?style=social)](https://github.com/SangHunHan92/2K2K)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2303.15108.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://sanghunhan92.github.io/conference/2K2K/)

- [D-IF: Uncertainty-aware Human Digitization via Implicit Distribution Field](https://arxiv.org/pdf/2308.08857.pdf) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/psyai-net/D-IF_release?style=social)](https://github.com/psyai-net/D-IF_release)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2308.08857.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://yxt7979.github.io/idf/)

- [Global-correlated 3D-decoupling Transformer for Clothed Avatar Reconstruction](https://arxiv.org/pdf/2112.09127.pdf) (NeurIPS 2023)
[![Star](https://img.shields.io/github/stars/River-Zhang/GTA?style=social)](https://github.com/River-Zhang/GTA)

### Text-to-Avatar
- [ZeroAvatar: Zero-shot 3D Avatar Generation from a Single Image.](https://arxiv.org/abs/2305.16411) (arXiv 25/5/2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2305.16411)

- [DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models](http://arxiv.org/abs/2304.00916) (arXiv 06/04/2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2304.00916)
[![Website](https://img.shields.io/badge/Website-9cf)](https://yukangcao.github.io/DreamAvatar/)

- [DreamHuman: Animatable 3D Avatars from Text](https://arxiv.org/abs/2306.09329) (arXiv 15/06/2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2306.09329)
[![Website](https://img.shields.io/badge/Website-9cf)](https://dream-human.github.io/)

- [AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose](http://arxiv.org/abs/2308.03610) (arXiv 07/08/2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2308.03610)
[![Website](https://img.shields.io/badge/Website-9cf)](https://avatarverse3d.github.io/)

- [Dancing Avatar: Pose and Text-Guided Human Motion Videos Synthesis with Image Diffusion Model](https://arxiv.org/abs/2308.07749) (arXiv 15/08/2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2308.07749)

- [DreamWaltz: Make a Scene with Complex 3D Animatable Avatars](https://arxiv.org/pdf/2305.12529) (arXiv 12/07/2023)
[![Star](https://img.shields.io/github/stars/IDEA-Research/DreamWaltz?style=social)](https://github.com/IDEA-Research/DreamWaltz)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2305.12529)
[![Website](https://img.shields.io/badge/Website-9cf)](https://idea-research.github.io/DreamWaltz/)

- [AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control](https://arxiv.org/pdf/2303.17606) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/songrise/AvatarCraft?style=social)](https://github.com/songrise/AvatarCraft)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2303.17606)
[![Website](https://img.shields.io/badge/Website-9cf)](https://avatar-craft.github.io)

- [AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars](https://arxiv.org/pdf/2205.08535) (SIGGRAPH 2022)
[![Star](https://img.shields.io/github/stars/hongfz16/AvatarCLIP?style=social)](https://github.com/hongfz16/AvatarCLIP)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2205.08535)
[![Website](https://img.shields.io/badge/Website-9cf)](https://hongfz16.github.io/projects/AvatarCLIP.html)

- [AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation](https://arxiv.org/abs/2306.09864) (arXiv 16/06/2023)
[![Star](https://img.shields.io/github/stars/zeng-yifei/AvatarBooth?style=social)](https://github.com/zeng-yifei/AvatarBooth)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2306.09864)
[![Website](https://img.shields.io/badge/Website-9cf)](https://zeng-yifei.github.io/avatarbooth_page/)

- [AvatarFusion: Zero-shot Generation of Clothing-Decoupled 3D Avatars Using 2D Diffusion](https://arxiv.org/pdf/2307.06526.pdf) (ACM MM 2023)
[![Star](https://img.shields.io/github/stars/HansenHuang0823/AvatarFusion?style=social)](https://github.com/HansenHuang0823/AvatarFusion)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2307.06526.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://hansenhuang0823.github.io/AvatarFusion/)

- [TADA! Text to Animatable Digital Avatars](https://arxiv.org/abs/2308.10899) (arXiv 2023)
[![Star](https://img.shields.io/github/stars/TingtingLiao/TADA?style=social)](https://github.com/TingtingLiao/TADA)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2308.10899)
[![Website](https://img.shields.io/badge/Website-9cf)](https://tada.is.tue.mpg.de/)

- [Guide3D: Create 3D Avatars from Text and Image Guidance](https://arxiv.org/pdf/2308.09705.pdf) (arXiv 2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2308.09705.pdf)

- [HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation](https://arxiv.org/abs/2310.01406) (arXiv 2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2310.01406)
[![Website](https://img.shields.io/badge/Website-9cf)](https://humannorm.github.io/)

- [Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation](http://arxiv.org/abs/2311.17117) (arXiv)
[![Star](https://img.shields.io/github/stars/HumanAIGC/AnimateAnyone?style=social)](https://github.com/HumanAIGC/AnimateAnyone)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2311.17117)
[![Website](https://img.shields.io/badge/Website-9cf)](https://humanaigc.github.io/animate-anyone/)

- [HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting](http://arxiv.org/abs/2311.17061) (arXiv)
[![Star](https://img.shields.io/github/stars/alvinliu0/HumanGaussian?style=social)](https://github.com/alvinliu0/HumanGaussian)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](http://arxiv.org/abs/2311.17061)
[![Website](https://img.shields.io/badge/Website-9cf)](https://alvinliu0.github.io/projects/HumanGaussian)

- [InstructHumans: Editing Animatable 3D Human Textures with Instructions](https://arxiv.org/abs/2404.04037) (arXiv 2024)
[![Star](https://img.shields.io/github/stars/viridityzhu/InstructHumans?style=social)](https://github.com/viridityzhu/InstructHumans)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2404.04037)
[![Website](https://img.shields.io/badge/Website-9cf)](https://jyzhu.top/instruct-humans/)

### Avatar Interaction
- [Hi4D: 4D Instance Segmentation of Close Human Interaction](https://arxiv.org/abs/2303.15380v1) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/yifeiyin04/Hi4D?style=social)](https://github.com/yifeiyin04/Hi4D)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2303.15380v1)
[![Website](https://img.shields.io/badge/Website-9cf)](https://yifeiyin04.github.io/Hi4D/)

- [NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions](https://arxiv.org/abs/2212.07626) (CVPR 2023)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2212.07626)
[![Website](https://img.shields.io/badge/Website-9cf)](https://juzezhang.github.io/NeuralDome/)

- [HOSNeRF: Dynamic Human-Object-Scene Neural Radiance Fields from a Single Video](https://arxiv.org/abs/2304.12281) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/TencentARC/HOSNeRF?style=social)](https://github.com/TencentARC/HOSNeRF)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2304.12281)
[![Website](https://img.shields.io/badge/Website-9cf)](https://showlab.github.io/HOSNeRF/)

### Motion Generation
- [MotionGPT: Human Motion as a Foreign Language](https://arxiv.org/pdf/2306.14795.pdf) (arXiv)
[![Star](https://img.shields.io/github/stars/OpenMotionLab/MotionGPT?style=social)](https://github.com/OpenMotionLab/MotionGPT)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2306.14795.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://motion-gpt.github.io)

- [Executing your Commands via Motion Diffusion in Latent Space](https://arxiv.org/pdf/2212.04048.pdf) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/ChenFengYe/motion-latent-diffusion?style=social)](https://github.com/ChenFengYe/motion-latent-diffusion)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2212.04048.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://mathis.petrovich.fr/tmr/)

- [TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis](https://arxiv.org/pdf/2305.00976.pdf) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/Mathux/TMR?style=social)](https://github.com/Mathux/TMR)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2305.00976.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://mathis.petrovich.fr/tmr/)

- [Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation](https://arxiv.org/pdf/2305.00976.pdf) (ICCV 2023)

### SMPL Estimation
- [Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop](https://arxiv.org/abs/1909.12828) (ICCV 2019)
[![Star](https://img.shields.io/github/stars/nkolot/SPIN?style=social)](https://github.com/nkolot/SPIN)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/1909.12828)
[![Website](https://img.shields.io/badge/Website-9cf)](https://www.nikoskolot.com/projects/spin/)

- [End-to-End Human Pose and Mesh Reconstruction with Transformers](https://openaccess.thecvf.com/content/CVPR2021/papers/Lin_End-to-End_Human_Pose_and_Mesh_Reconstruction_with_Transformers_CVPR_2021_paper.pdf) (CVPR 2021)
[![Star](https://img.shields.io/github/stars/microsoft/MeshTransformer?style=social)](https://github.com/microsoft/MeshTransformer)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2012.09760.pdf)

- [HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation](https://openaccess.thecvf.com/content/CVPR2021/papers/Li_HybrIK_A_Hybrid_Analytical-Neural_Inverse_Kinematics_Solution_for_3D_Human_CVPR_2021_paper.pdf) (CVPR 2021)
[![Star](https://img.shields.io/github/stars/Jeff-sjtu/HybrIK?style=social)](https://github.com/Jeff-sjtu/HybrIK)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2011.14672.pdf)

- [GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras](https://openaccess.thecvf.com/content/CVPR2022/papers/Yuan_GLAMR_Global_Occlusion-Aware_Human_Mesh_Recovery_With_Dynamic_Cameras_CVPR_2022_paper.pdf) (CVPR 2022 Oral)
[![Star](https://img.shields.io/github/stars/NVlabs/GLAMR?style=social)](https://github.com/NVlabs/GLAMR)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2112.01524.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://nvlabs.github.io/GLAMR/)

- [D&D: Learning Human Dynamics from Dynamic Camera](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136650470.pdf) (ECCV 2022 Oral)
[![Star](https://img.shields.io/github/stars/Jeff-sjtu/DnD?style=social)](https://github.com/Jeff-sjtu/DnD)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2209.08790.pdf)

- [CLIFF: Carrying Location Information in Full Frames into Human Pose and Shape Estimation](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136650580.pdf) (ECCV 2022 Oral)
[![Star](https://img.shields.io/github/stars/haofanwang/CLIFF?style=social)](https://github.com/haofanwang/CLIFF)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2208.00571.pdf)

- [Global-to-Local Modeling for Video-based 3D Human Pose and Shape Estimation](https://arxiv.org/pdf/2303.14747.pdf) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/sxl142/GLoT?style=social)](https://github.com/sxl142/GLoT)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2303.14747.pdf)

- [JOTR: 3D Joint Contrastive Learning with Transformers for Occluded Human Mesh Recovery](https://arxiv.org/abs/2307.16377) (ICCV 2023)
[![Star](https://img.shields.io/github/stars/xljh0520/jotr?style=social)](https://github.com/xljh0520/jotr)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2307.16377)

- [TRACE: 5D Temporal Regression of Avatars with Dynamic Cameras in 3D Environments](https://openaccess.thecvf.com/content/CVPR2023/papers/Sun_TRACE_5D_Temporal_Regression_of_Avatars_With_Dynamic_Cameras_in_CVPR_2023_paper.pdf) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/Arthur151/ROMP?style=social)](https://github.com/Arthur151/ROMP)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2306.02850.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://www.yusun.work/TRACE/TRACE.html)

### Dataset
- [Function4D: Real-time Human Volumetric Capture from Very Sparse RGBD Sensors (Thuman-2.0 Dataset)](https://arxiv.org/abs/2105.01859) (CVPR 2021 Oral)
[![Star](https://img.shields.io/github/stars/ytrock/THuman2.0-Dataset?style=social)](https://github.com/ytrock/THuman2.0-Dataset)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2105.01859)
[![Website](https://img.shields.io/badge/Website-9cf)](http://www.liuyebin.com/Function4D/Function4D.html)

- [HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling](https://arxiv.org/pdf/2204.13686.pdf) (ECCV 2022 Oral)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/pdf/2204.13686.pdf)
[![Website](https://img.shields.io/badge/Website-9cf)](https://caizhongang.github.io/projects/HuMMan/)

- [CLOTH4D: A Dataset for Clothed Human Reconstruction](https://openaccess.thecvf.com/content/CVPR2023/papers/Zou_CLOTH4D_A_Dataset_for_Clothed_Human_Reconstruction_CVPR_2023_paper.pdf) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/AemikaChow/CLOTH4D?style=social)](https://github.com/AemikaChow/CLOTH4D)

- [High-fidelity 3D Human Digitization from Single 2K Resolution Images (2K2K)](https://arxiv.org/abs/2303.15108) (CVPR 2023)
[![Star](https://img.shields.io/github/stars/SangHunHan92/2K2K?style=social)](https://github.com/SangHunHan92/2K2K)
[![arXiv](https://img.shields.io/badge/arXiv-b31b1b.svg)](https://arxiv.org/abs/2303.15108)
[![Website](https://img.shields.io/badge/Website-9cf)](https://sanghunhan92.github.io/conference/2K2K/)

### Acknowledgement
- The list template is borrowed from [Awesome-Video-Diffusion](https://github.com/showlab/Awesome-Video-Diffusion).
- The main contributors to this project are from ReLER Lab, CCAI, Zhejiang University.