Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/weihaox/awesome-digital-human
A collection of resources on digital human including clothed people digitalization, virtual try-on, and other related directions.
List: awesome-digital-human
- Host: GitHub
- URL: https://github.com/weihaox/awesome-digital-human
- Owner: weihaox
- License: mit
- Created: 2021-05-13T07:10:10.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2024-04-08T11:53:27.000Z (7 months ago)
- Last Synced: 2024-05-20T01:49:35.101Z (6 months ago)
- Topics: avatar, clothed-people-digitalization, digital-human, virtual-try-on
- Homepage: https://github.com/weihaox/awesome-digital-human
- Size: 167 KB
- Stars: 1,218
- Watchers: 60
- Forks: 112
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
Awesome Lists containing this project
- awesome-3D-human - Detailed 3D Human Recovery (Clothing)
- ultimate-awesome - awesome-digital-human - A collection of resources on digital human including clothed people digitalization, virtual try-on, and other related directions. (Other Lists / PowerShell Lists)
- Awesome-Text2X-Resources - Awesome Digital Human
README
# Awesome Digital Human [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
> A curated collection of resources on clothed people: full-body reconstruction, head reconstruction, digital-human-related projects, etc.
## Contributing
Feedback and contributions are welcome! If you think I have missed something, or if you have suggestions (papers, implementations, and other resources), feel free to [submit a pull request](https://github.com/weihaox/awesome-digital-human/pulls). You can edit items manually or use the [script](https://github.com/weihaox/arxiv_daily_tools) to produce them in the markdown format shown below.
```Markdown
**Here is the Paper Name.**
*[Author 1](homepage), Author 2, and Author 3.*
Conference or Journal Year. [[PDF](link)] [[Project](link)] [[Code](link)] [[Data](link)]
```
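If you prefer to generate entries programmatically, the snippet below is a minimal sketch of how such a generator could look. It is a hypothetical helper written for illustration, not the actual `arxiv_daily_tools` script, and the title, authors, and links in the usage example are placeholders.

```python
# Hypothetical helper (not part of this repository or arxiv_daily_tools):
# renders one entry in the three-line markdown template shown above.
def format_entry(title, authors, venue, year, links):
    """authors: list of (name, homepage-or-None); links: {"PDF": url, ...}."""
    author_md = ", ".join(
        f"[{name}]({url})" if url else name for name, url in authors
    )
    link_md = " ".join(f"[[{label}]({url})]" for label, url in links.items())
    return f"**{title}.**\n*{author_md}.*\n{venue} {year}. {link_md}"


if __name__ == "__main__":
    # Placeholder values mirroring the template fields.
    print(format_entry(
        title="Here is the Paper Name",
        authors=[("Author 1", "https://example.com/author1"), ("Author 2", None)],
        venue="Conference or Journal",
        year=2024,
        links={"PDF": "https://arxiv.org/abs/0000.00000"},
    ))
```

Running it prints an entry in exactly the form of the template above, ready to paste into the list.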
## Table of Contents
- [Industry Demo or Product](#industry-demo-or-product)
- [3D Human Avatar Generation and Animation](#3d-human-avatar-generation-and-animation)
- [3D Head Animatable Avatar (from 2D Image Collections)](#3d-head-animatable-avatar--from-2d-image-collections-)
- [(Clothed) Human Motion Generation](#-clothed--human-motion-generation)
- [Clothed Human Digitalization](#clothed-human-digitalization)
- [Cloth Modelling, Draping, Simulation, and Dressing](#cloth-modelling--draping--simulation--and-dressing)
- [Human Image and Video Generation](#human-image-and-video-generation)
- [Image-Based Virtual Try-On](#image-based-virtual-try-on)
- [Human Body Reshaping](#human-body-reshaping)
- [Garment Design](#garment-design)
- [Fashion Style Influences](#fashion-style-influences)
- [Team and People](#team-and-people)
- [Dataset](#dataset)
- [Applications](#applications)

## Industry Demo or Product
Highavenue: Turn yourself into a 3D model.
## 3D/4D Human Avatar Generation and Animation
**DressRecon: Freeform 4D Human Reconstruction from Monocular Video.**
*Jeff Tan, Donglai Xiang, Shubham Tulsiani, Deva Ramanan, Gengshan Yang.*
arXiv 2024. [[PDF](https://arxiv.org/abs/2409.20563)] [[Project](https://jefftan969.github.io/dressrecon/)] [[Code](https://github.com/jefftan969/dressrecon)]

**Disco4D: Disentangled 4D Human Generation and Animation from a Single Image.**
*Hui En Pang, Shuai Liu, Zhongang Cai, Lei Yang, Tianwei Zhang, Ziwei Liu.*
arXiv 2024. [[PDF](https://arxiv.org/abs/2409.17280)] [[Project](https://disco-4d.github.io/)]

**RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models.**
*Bowen Zhang, Yiji Cheng, Chunyu Wang, Ting Zhang, Jiaolong Yang, Yansong Tang, Feng Zhao, Dong Chen, Baining Guo.*
ECCV 2024. [[PDF](https://arxiv.org/abs/2407.06938)] [[Project](https://rodinhd.github.io)] [[Code]()]**Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling.**
*Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2311.16096)] [[Project](https://animatable-gaussians.github.io)] [[Code](https://github.com/lizhe00/AnimatableGaussians)]**HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation.**
*Xin Huang, Ruizhi Shao, Qi Zhang, Hongwen Zhang, Ying Feng, Yebin Liu, Qing Wang.*
CVPR 2024. [[PDF]()] [[Project](https://humannorm.github.io/)]**RAM-Avatar: Real-time Photo-Realistic Avatar from Monocular Videos with Full-body Control.**
*Xiang Deng, Zerong Zheng, Yuxiang Zhang, Jingxiang Sun, Chao Xu, XiaoDong Yang, Lizhen Wang, Yebin Liu.*
CVPR 2024. [[PDF](https://cloud.tsinghua.edu.cn/f/6b7a88c3b4ac43b0b506/?dl=1)] [[Project](https://github.com/Xiang-Deng00/RAM-Avatar/)] [[Code](https://github.com/Xiang-Deng00/RAM-Avatar)]

**TexVocab: Texture Vocabulary-conditioned Human Avatars.**
*Yuxiao Liu, Zhe Li, Yebin Liu, Haoqian Wang.*
CVPR 2024. [[PDF](https://arxiv.org/abs/2404.00524)] [[Project](https://texvocab.github.io/)]**HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting.**
*Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, Ziwei Liu.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2311.17061)] [[Project](https://alvinliu0.github.io/projects/HumanGaussian)]**DreamAvatar: Text-and-Shape Guided 3D Human Avatar Generation via Diffusion Models.**
*[Yukang Cao](https://yukangcao.github.io/), [Yan-Pei Cao](https://yanpei.me/), [Kai Han](https://www.kaihan.org/), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=zh-CN), [Kwan-Yee K. Wong](https://i.cs.hku.hk/~kykwong/).*
CVPR 2024. [[PDF](https://arxiv.org/abs/2304.00916)] [[Project](https://yukangcao.github.io/DreamAvatar/)] [[Code](https://yukangcao.github.io/DreamAvatar/)]**SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes.**
*Soubhik Sanyal, Partha Ghosh, Jinlong Yang, Michael J. Black, Justus Thies, Timo Bolkart.*
CVPR 2024. [[PDF](https://arxiv.org/abs/2308.10638)] [[Project](https://huangyangyi.github.io/tech/)]**3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting.**
*Zhiyin Qian, Shaofei Wang, Marko Mihajlovic, Andreas Geiger, Siyu Tang.*
CVPR 2024. [[PDF](https://arxiv.org/abs/2312.09228)] [[Project](https://neuralbodies.github.io/3DGS-Avatar/)]**Emotional Speech-driven 3D Body Animation Via Disentangled Latent Diffusion.**
*Kiran Chhatre, Radek Daněček, Nikos Athanasiou, Giorgio Becherini, Christopher Peters, Michael J. Black, Timo Bolkart.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2312.04466)]**GauHuman: Articulated Gaussian Splatting from Monocular Human Videos.**
*Shoukang Hu, Ziwei Liu.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2312.02973v1)] [[Project](https://skhu101.github.io/GauHuman/)] [[Code](https://github.com/skhu101/GauHuman)]**FlashAvatar: High-Fidelity Digital Avatar Rendering at 300FPS.**
*Jun Xiang, Xuan Gao, Yudong Guo, Juyong Zhang.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2312.02214v1)] [[Project](https://ustc3dv.github.io/FlashAvatar/)]**PEGASUS: Personalized Generative 3D Avatars with Composable Attributes.**
*[Hyunsoo Cha](https://research.hyunsoocha.com/), [Byungjun Kim](https://bjkim95.github.io/), [Hanbyul Joo](https://jhugestar.github.io/).*
CVPR 2024. [[PDF](https://arxiv.org/pdf/2402.10636)] [[Project](https://snuvclab.github.io/pegasus/)] [[Code](https://github.com/snuvclab/pegasus)]**TADA! Text to Animatable Digital Avatars.**
*[Tingting Liao](https://github.com/TingtingLiao), [Hongwei Yi](https://xyyhw.top/), [Yuliang Xiu](http://xiuyuliang.cn/), [Jiaxiang Tang](https://me.kiui.moe/), [Yangyi Huang](https://github.com/huangyangyi/), [Justus Thies](https://justusthies.github.io/), [Michael J. Black](https://ps.is.tuebingen.mpg.de/person/black).*
3DV 2024. [[PDF](https://arxiv.org)] [[Project](https://tada.is.tue.mpg.de/)] [[Code](https://github.com/TingtingLiao/TADA)]**Efficient 3D Articulated Human Generation with Layered Surface Volumes.**
*Yinghao Xu, Wang Yifan, Alexander W. Bergman, Menglei Chai, Bolei Zhou, Gordon Wetzstein.*
3DV 2024. [[PDF](https://arxiv.org/abs/2307.05462)] [[Project](https://www.computationalimaging.org/publications/lsv/)]**TECA: Text-Guided Generation and Editing of Compositional 3D Avatars.**
*[Hao Zhang](https://haozhang990127.github.io/), [Yao Feng](https://scholar.google.com/citations?user=wNQQhSIAAAAJ&hl=en&oi=ao), [Peter Kulits](https://kulits.github.io/), [Yandong Wen](https://is.mpg.de/person/ydwen), [Justus Thies](https://justusthies.github.io/), [Michael J. Black](https://ps.is.mpg.de/person/black).*
3DV 2024. [[PDF](https://arxiv.org/abs/2309.07125)] [[Project](https://yfeng95.github.io/teca/)]**FLARE: Fast Learning of Animatable and Relightable Mesh Avatars.**
*Shrisha Bharadwaj, Yufeng Zheng, Otmar Hilliges, Michael J. Black, Victoria Fernandez-Abrevaya.*
SIGGRAPH Asia 2023. [[PDF](https://arxiv.org/abs/2310.17519)]**Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization.**
*Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas Guibas, Gordon Wetzstein, Sameh Khamis.*
SIGGRAPH 2023. [[PDF](https://arxiv.org/abs/2305.03043)] [[Project](https://research.nvidia.com/labs/toronto-ai/ssif/)] [[Code]()]**DELIFFAS: Deformable Light Fields for Fast Avatar Synthesis.**
*Youngjoong Kwon, Lingjie Liu, Henry Fuchs, Marc Habermann, Christian Theobalt.*
NeurIPS 2023. [[PDF](http://arxiv.org/abs/2310.11449)]**PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation.**
*Zhaoxi Chen, Fangzhou Hong, Haiyi Mei, Guangcong Wang, Lei Yang, Ziwei Liu.*
NeurIPS 2023. [[PDF](http://arxiv.org/abs/2312.04559)] [[Project](https://frozenburning.github.io/projects/primdiffusion/)] [[Code](https://github.com/FrozenBurning/PrimDiffusion)]**XAGen: 3D Expressive Human Avatars Generation.**
*Zhongcong Xu, Jianfeng Zhang, Jun Hao Liew, Jiashi Feng, Mike Zheng Shou.*
NeurIPS 2023. [[PDF](https://arxiv.org/abs/2311.13574)] [[Project](https://showlab.github.io/xagen/)] [[Code](https://github.com/magic-research/xagen)]**DreamHuman: Animatable 3D Avatars from Text.**
*[Nikos Kolotouros](https://www.nikoskolot.com/), [Thiemo Alldieck](https://research.google/people/107250/), [Andrei Zanfir](https://scholar.google.com/citations?user=8lmzWycAAAAJ&hl=en&oi=ao), [Eduard Gabriel Bazavan](https://research.google/people/107659/), [Mihai Fieraru](https://mihaifieraru.github.io/), [Cristian Sminchisescu](https://research.google/people/CristianSminchisescu/).*
NeurIPS 2023. [[PDF](https://arxiv.org/abs/2306.09329)] [[Project](https://dream-human.github.io/)] [[Avatar Gallery](https://dream-human.github.io/avatar_gallery.html)]**DINAR: Diffusion Inpainting of Neural Textures for One-Shot Human Avatars.**
*David Svitov, Dmitrii Gudkov, Renat Bashirov, Victor Lempitsky.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2303.09375)] [[Code](https://github.com/SamsungLabs/DINAR)]**AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control.**
*[Ruixiang Jiang](https://j-rx.com/), [Can Wang](https://cassiepython.github.io/), [Jingbo Zhang](https://eckertzhang.github.io/), [Menglei Chai](https://mlchai.com/), [Mingming He](https://mingminghe.com/), [Dongdong Chen](https://www.dongdongchen.bid/), [Jing Liao](https://liaojing.github.io/html/).*
ICCV 2023. [[PDF](https://arxiv.org/abs/2303.17606)] [[Project](https://avatar-craft.github.io/)] [[Code](https://github.com/songrise/avatarcraft)] [[Data](https://drive.google.com/drive/folders/1m97mmoAtDes0mBwkS4q2VNBivXSmRkOK)]**StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation.**
*Chi Zhang, Yiwen Chen, Yijun Fu, Zhenglin Zhou, Gang YU, Billzb Wang, Bin Fu, Tao Chen, Guosheng Lin, Chunhua Shen.*
ICCV 2023. [[PDF](http://arxiv.org/abs/2305.19012)] [[Project](https://x-zhangyang.github.io/2023_Get3DHuman/)] [[Code](https://github.com/icoz69/StyleAvatar3D)]**Get3DHuman: Lifting StyleGAN-Human into a 3D Generative Model using Pixel-aligned Reconstruction Priors.**
*Zhangyang Xiong, Di Kang, Derong Jin, Weikai Chen, Linchao Bao, Xiaoguang Han.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2302.01162)] [[Code](https://github.com/X-zhangyang/Get3DHuman/)]**GETAvatar: Generative Textured Meshes for Animatable Human Avatars.**
*Xuanmeng Zhang, Jianfeng Zhang, Rohan Chacko, Hongyi Xu, Guoxian Song, Yi Yang, Jiashi Feng.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2310.02714)] [[Project](https://getavatar.github.io/)]**AG3D: Learning to Generate 3D Avatars from 2D Image Collections.**
*[Zijian Dong](https://ait.ethz.ch/people/zijian/), [Xu Chen](https://ait.ethz.ch/people/xu/), [Jinlong Yang](https://is.mpg.de/~jyang), [Michael J. Black](https://ps.is.mpg.de/~black), [Otmar Hilliges](https://ait.ethz.ch/people/hilliges), [Andreas Geiger](http://www.cvlibs.net/).*
ICCV 2023. [[PDF](http://arxiv.org/abs/2305.02312)] [[Project](https://zj-dong.github.io/AG3D/)] [[Code](https://github.com/zj-dong/AG3D)]**Learning Locally Editable Virtual Humans.**
*[Hsuan-I Ho](https://azuxmioy.github.io/), [Lixin Xue](https://lxxue.github.io/), [Jie Song](https://ait.ethz.ch/people/song), [Otmar Hilliges](https://ait.ethz.ch/people/hilliges).*
CVPR 2023. [[PDF](https://files.ait.ethz.ch/projects/custom-humans/paper.pdf)] [[Project](https://custom-humans.github.io/)] [[Code](https://github.com/custom-humans/editable-humans)]**PersonNeRF: Personalized Reconstruction from Photo Collections.**
*[Chung-Yi Weng](https://homes.cs.washington.edu/~chungyi/), Pratul P. Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2302.01162)] [[Project](https://grail.cs.washington.edu/projects/personnerf/)] [[Code]()]**Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures.**
*Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, Daniel Cohen-Or.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2211.07600)] [[Code](https://github.com/eladrich/latent-nerf)]**EVA3D: Compositional 3D Human Generation from 2D Image Collections.**
*[Fangzhou Hong](https://hongfz16.github.io/), [Zhaoxi Chen](https://frozenburning.github.io/), [Yushi Lan](https://github.com/NIRVANALAN), [Liang Pan](https://scholar.google.com/citations?user=lSDISOcAAAAJ&hl=zh-CN), [Ziwei Liu](https://liuziwei7.github.io/).*
ICLR 2023. [[PDF](https://arxiv.org/abs/2210.04888)] [[Project](https://hongfz16.github.io/projects/EVA3D.html)] [[Code]()]**CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes.**
*Kim Youwang, Kim Ji-Yeon, Tae-Hyun Oh.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2208.03550)] [[Project](https://clip-actor.github.io/)] [[Code](https://github.com/postech-ami/CLIP-Actor)]**AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars.**
*[Fangzhou Hong](https://hongfz16.github.io/), Mingyuan Zhang, Liang Pan, Zhongang Cai, Lei Yang, Ziwei Liu.*
SIGGRAPH (TOG) 2022. [[PDF](https://arxiv.org/abs/2205.08535)] [[Project](https://hongfz16.github.io/projects/AvatarCLIP.html)] [[Code](https://github.com/hongfz16/AvatarCLIP)]**WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation.**
*Zihao Huang, ShouKang Hu, Guangcong Wang, Tianqi Liu, Yuhang Zang, Zhiguo Cao, Wei Li, Ziwei Liu.*
arXiv 2024. [[PDF](https://arxiv.org/abs/2407.02165)] [[Project](https://wildavatar.github.io/)]**Drivable 3D Gaussian Avatars.**
*Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero.*
arXiv 2023. [[PDF](https://arxiv.org/abs/2311.08581)] [[Project](https://zielon.github.io/d3ga/)]**MagicAvatar: Multimodal Avatar Generation and Animation.**
*Jianfeng Zhang, Hanshu Yan, Zhongcong Xu, Jiashi Feng, Jun Hao Liew.*
arXiv 2023. [[PDF](http://arxiv.org/abs/2308.14748)] [[Project](https://magic-avatar.github.io/)] [[Code](https://github.com/magic-research/magic-avatar)]**DELTA: Learning Disentangled Avatars with Hybrid 3D Representations.**
*[Yao Feng](https://scholar.google.com/citations?user=wNQQhSIAAAAJ&hl=en&oi=ao), [Weiyang Liu](https://wyliu.com/), [Timo Bolkart](https://sites.google.com/site/bolkartt/), [Jinlong Yang](https://is.mpg.de/~jyang), [Marc Pollefeys](https://people.inf.ethz.ch/pomarc/), [Michael J. Black](https://ps.is.mpg.de/person/black).*
arXiv 2023. [[PDF](https://arxiv.org/abs/2309.06441)] [[Project](https://yfeng95.github.io/delta/)] [[Code](https://github.com/yfeng95/DELTA)]**AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation.**
*[Yifei Zeng](https://github.com/zeng-yifei), [Yuanxun Lu](https://github.com/YuanxunLu), [Xinya Ji](https://github.com/jixinya), [Yao Yao](https://yoyo000.github.io/), [Hao Zhu](http://zhuhao.cc/), [Xun Cao](https://cite.nju.edu.cn/People/Faculty/20190621/i5054.html).*
arXiv 2023. [[PDF](https://arxiv.org/abs/2306.09864)] [[Project](https://zeng-yifei.github.io/avatarbooth_page/)] [[Code](https://github.com/zeng-yifei/AvatarBooth)]

## 3D Head Animatable Avatar (from 2D Image Collections)
**PAV: Personalized Head Avatar from Unstructured Video Collection.**
*Akin Caliskan, Berkay Kicanaoglu, Hyeongwoo Kim.*
ECCV 2024. [[PDF](https://arxiv.org/abs/2407.21047)] [[Project](https://akincaliskan3d.github.io/PAV/)]**Gaussian Head Avatar: Ultra High-fidelity Head Avatar via Dynamic Gaussians.**
*Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, Yebin Liu.*
CVPR 2024. [[PDF](https://arxiv.org/abs/2312.03029)] [[Project](https://yuelangx.github.io/gaussianheadavatar/)]**HeadArtist: Text-conditioned 3D Head Generation with Self Score Distillation.**
*Hongyu Liu, Xuan Wang, Ziyu Wan, Yujun Shen, Yibing Song, Jing Liao, Qifeng Chen.*
SIGGRAPH 2024. [[PDF](http://arxiv.org/abs/2312.07539)] [[Project](https://kumapowerliu.github.io/HeadArtist)]**GAN-Avatar: Controllable Personalized GAN-based Human Head Avatar.**
*Berna Kabadayi, Wojciech Zielonka, Bharat Lal Bhatnagar, Gerard Pons-Moll, Justus Thies.*
3DV 2024. [[PDF](https://arxiv.org/abs/2311.13655)] [[Project](https://ganavatar.github.io/)]**NeRFEditor: Differentiable Style Decomposition for Full 3D Scene Editing.**
*[Chunyi Sun](https://chuny1.github.io/NeRFEditor/nerfeditor.html), [Yanbin Liu](https://chuny1.github.io/NeRFEditor/nerfeditor.html), [Junlin Han](https://junlinhan.github.io/), [Stephen Gould](https://chuny1.github.io/NeRFEditor/nerfeditor.html).*
WACV 2024. [[PDF](https://arxiv.org/abs/2212.03848)] [[Project](https://chuny1.github.io/NeRFEditor/nerfeditor.html)]**AlbedoGAN: Towards Realistic Generative 3D Face Models.**
*[Aashish Rai](https://aashishrai3799.github.io/), [Hiresh Gupta](https://hireshgupta1997.github.io/), Ayush Pandey, Francisco Vicente Carrasco, Shingo Jason Takagi, Amaury Aubel, Daeil Kim, [Aayush Prakash](https://aayushp.github.io/), [Fernando de la Torre](https://www.cs.cmu.edu/~ftorre/).*
WACV 2024. [[PDF](https://arxiv.org/abs/2304.12483)] [[Project](https://aashishrai3799.github.io/Towards-Realistic-Generative-3D-Face-Models/)] [[Code](https://lnkd.in/gVz8Hzn3)]**AvatarStudio: Text-driven Editing of 3D Dynamic Human Head Avatars.**
*Mohit Mendiratta, Xingang Pan, Mohamed Elgharib, Kartik Teotia, Mallikarjun B R, Ayush Tewari, Vladislav Golyanik, Adam Kortylewski, Christian Theobalt.*
TOG 2024. [[PDF](http://arxiv.org/abs/2306.00547)] [[Project](https://vcai.mpi-inf.mpg.de/projects/AvatarStudio/)]**HQ3DAvatar: High Quality Controllable 3D Head Avatar.**
*[Kartik Teotia](https://www.linkedin.com/in/kartik-teotia/?originalSubdomain=de), Mallikarjun B R, Xingang Pan, Hyeongwoo Kim, Pablo Garrido, Mohamed Elgharib, Christian Theobalt.*
TOG 2024. [[PDF](https://vcai.mpi-inf.mpg.de/projects/HQ3DAvatar/)] [[Project](https://vcai.mpi-inf.mpg.de/projects/HQ3DAvatar/)] [[Code]()]

**CLIPFace: Text-guided Editing of Textured 3D Morphable Models.**
*[Shivangi Aneja](https://niessnerlab.org/members/shivangi_aneja/profile.html), [Justus Thies](https://is.mpg.de/~jthies), [Angela Dai](https://www.professoren.tum.de/en/dai-angela), [Matthias Nießner](https://niessnerlab.org/members/matthias_niessner/profile.html).*
SIGGRAPH 2023. [[PDF](https://arxiv.org/abs/2212.01406)] [[Project](https://shivangi-aneja.github.io/projects/clipface/)] [[Code](https://github.com/shivangi-aneja/ClipFace)]**DreamFace: Progressive Generation of Animatable 3D Faces under Text Guidance.**
*Longwen Zhang, Qiwei Qiu, Hongyang Lin, Qixuan Zhang, Cheng Shi, Wei Yang, Ye Shi, Sibei Yang, Lan Xu, Jingyi Yu.*
SIGGRAPH 2023. [[PDF](https://arxiv.org/abs/2304.03117)] [[Project](https://sites.google.com/view/dreamface)] [[Demo](https://hyperhuman.deemos.com/)] [[HuggingFace](https://huggingface.co/spaces/DEEMOSTECH/ChatAvatar)]**StyleAvatar: Real-time Photo-realistic Neural Portrait Avatar from a Single Video.**
*Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu✝, Yebin Liu.*
SIGGRAPH 2023. [[PDF](https://arxiv.org/abs/2305.00942)]**LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar.**
*Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, Guojun Qi, Yebin Liu.*
SIGGRAPH 2023. [[PDF](https://arxiv.org/abs/2305.01190)] [[Project](https://www.liuyebin.com/latentavatar/)] [[Code](https://github.com/YuelangX/LatentAvatar)]

**NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads.**
*[Tobias Kirschstein](https://tobias-kirschstein.github.io/), [Shenhan Qian](https://shenhanqian.com/), [Simon Giebenhain](https://simongiebenhain.github.io/), Tim Walter, Matthias Nießner.*
SIGGRAPH 2023. [[PDF](http://arxiv.org/abs/2305.03027)] [[Project](https://tobias-kirschstein.github.io/nersemble/)] [[Video](https://youtu.be/a-OAWqBzldU)]**GOHA: Generalizable One-shot Neural Head Avatar.**
*[Xueting Li](https://sunshineatnoon.github.io/), [Shalini De Mello](https://research.nvidia.com/person/shalini-de-mello), [Sifei Liu](https://www.sifeiliu.net/), [Koki Nagano](https://luminohope.org/), [Umar Iqbal](https://research.nvidia.com/person/umar-iqbal), [Jan Kautz](https://jankautz.com/).*
NeurIPS 2023. [[PDF](https://arxiv.org/abs/2306.08768)] [[Project](https://research.nvidia.com/labs/lpr/one-shot-avatar)]**GANHead: Towards Generative Animatable Neural Head Avatars.**
*[Sijing Wu](https://wsj-sjtu.github.io), [Yichao Yan](https://daodaofr.github.io/), Yunhao Li, Yuhao Cheng, Wenhan Zhu, Ke Gao, Xiaobo Li, [Guangtao Zhai](https://scholar.google.com/citations?user=E6zbSYgAAAAJ&hl=en&oi=ao).*
CVPR 2023. [[PDF](https://arxiv.org/abs/2304.03950)] [[Project](https://wsj-sjtu.github.io/GANHead/)]**FitMe: Deep Photorealistic 3D Morphable Model Avatars.**
*[Alexandros Lattas](https://alexlattas.com/), [Stylianos Moschoglou](https://moschoglou.com/), [Stylianos Ploumpis](https://www.ploumpis.com/), [Baris Gecer](https://barisgecer.github.io/), [Jiankang Deng](https://jiankangdeng.github.io/), [Stefanos Zafeiriou](https://www.imperial.ac.uk/people/s.zafeiriou).*
CVPR 2023. [[PDF](https://arxiv.org/abs/2305.09641)] [[Project](https://alexlattas.com/fitme)]**Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars.**
*[Jingxiang Sun](https://mrtornado24.github.io/), [Xuan Wang](https://xuanwangvc.github.io/), [Lizhen Wang](https://lizhenwangt.github.io/), [Xiaoyu Li](https://xiaoyu258.github.io/), [Yong Zhang](https://yzhang2016.github.io/yongnorriszhang.github.io/), [Hongwen Zhang](https://hongwenzhang.github.io/), [Yebin Liu](http://www.liuyebin.com/).*
CVPR 2023 (Highlight). [[PDF](https://arxiv.org/pdf/2211.11208.pdf)] [[Project](https://mrtornado24.github.io/Next3D/)] [[Code](https://github.com/MrTornado24/Next3D)]**BlendFields: Few-Shot Example-Driven Facial Modeling.**
*Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2305.07514)] [[Project](https://blendfields.github.io/)]**OTAvatar: One-shot Talking Face Avatar with Controllable Tri-plane Rendering.**
*Zhiyuan Ma, Xiangyu Zhu, Guojun Qi, Zhen Lei, Lei Zhang.*
CVPR 2023. [[PDF](https://arxiv.org/pdf/2303.14662)] [[Code](https://github.com/theEricMa/OTAvatar)] [[Demo](https://youtu.be/qpIoMYFr7Aw)]

**PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360°.**
*Sizhe An, Hongyi Xu, Yichun Shi, Guoxian Song, Umit Ogras, Linjie Luo.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2303.13071)] [[Project](https://sizhean.github.io/panohead)]

**Efficient Meshy Neural Fields for Animatable Human Avatars.**
*Xiaoke Huang, Yiji Cheng, Yansong Tang, Xiu Li, Jie Zhou, Jiwen Lu.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2303.12965)] [[Project](https://xk-huang.github.io/ema/)]**Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion.**
*Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, Baining Guo.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.06135)] [[Project](https://3d-avatar-diffusion.microsoft.com/)]**OmniAvatar: Geometry-Guided Controllable 3D Head Synthesis.**
*Hongyi Xu, Guoxian Song, Zihang Jiang, Jianfeng Zhang, Yichun Shi, Jing Liu, Wanchun Ma, Jiashi Feng, Linjie Luo.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2303.15539)]**PointAvatar: Deformable Point-based Head Avatars from Videos.**
*Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, Otmar Hilliges.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.08377)] [[Project](https://zhengyuf.github.io/PointAvatar/)] [[Code](https://github.com/zhengyuf/pointavatar)]**MEGANE: Morphable Eyeglass and Avatar Network.**
*Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, Jason Saragih.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2302.04868)] [[Project](https://junxuan-li.github.io/megane/)]**Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video.**
*Xuan Gao, ChengLai Zhong, Jun Xiang, Yang Hong, Yudong Guo, Juyong Zhang.*
TOG 2022. [[PDF](https://arxiv.org/abs/2210.06108)] [[Project](https://ustc3dv.github.io/NeRFBlendShape/)] [[Code](https://github.com/USTC3DV/NeRFBlendShape-code)]**H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction.**
*Eduard Ramon, Gil Triginer, Janna Escur, Albert Pumarola, Jaime Garcia, Xavier Giro-i-Nieto, Francesc Moreno-Noguer.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2107.12512)] [[Project](https://crisalixsa.github.io/h3d-net/)] [[H3DS Dataset](https://docs.google.com/forms/d/e/1FAIpQLScbs-m-Me85KeMqJ2WnCwvToeSRIHC8sJckhxX3eknQu8ItRQ/viewform)]**AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs.**
*[Alexandros Lattas](https://www.imperial.ac.uk/people/a.lattas), [Stylianos Moschoglou](https://www.doc.ic.ac.uk/~sm3515/), [Stylianos Ploumpis](https://www.imperial.ac.uk/people/s.ploumpis), [Baris Gecer](http://barisgecer.github.io), [Abhijeet Ghosh](https://www.doc.ic.ac.uk/~ghosh/), [Stefanos Zafeiriou](https://wp.doc.ic.ac.uk/szafeiri/).*
TPAMI 2021. [[PDF](https://arxiv.org/abs/2112.05957)] [[Code](https://github.com/lattas/AvatarMe)]**AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild".**
*Alexandros Lattas, Stylianos Moschoglou, Baris Gecer, Stylianos Ploumpis, Vasileios Triantafyllou, Abhijeet Ghosh, Stefanos Zafeiriou.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2003.13845)] [[Code](https://github.com/lattas/AvatarMe)]

## (Clothed) Human Motion Generation
**ReLoo: Reconstructing Humans Dressed in Loose Garments from Monocular Video in the Wild.**
*Chen Guo, Tianjian Jiang, Manuel Kaufmann, Chengwei Zheng, Julien Valentin, Jie Song, Otmar Hilliges.*
ECCV 2024. [[PDF](https://arxiv.org/abs/2409.15269)] [[Project](https://moygcc.github.io/ReLoo/)]**TLControl: Trajectory and Language Control for Human Motion Synthesis.**
*Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu.*
ECCV 2024. [[PDF](http://arxiv.org/abs/2311.17135)] [[Project](https://tlcontrol.weilinwl.com/)]**CoMo: Controllable Motion Generation through Language Guided Pose Code Editing.**
*Yiming Huang, Weilin Wan, Yue Yang, Chris Callison-Burch, Mark Yatskar, Lingjie Liu.*
ECCV 2024. [[PDF](https://arxiv.org/abs/2403.13900)] [[Project](https://yh2371.github.io/como/)]**Total Selfie: Generating Full-Body Selfies.**
*Bowei Chen, Brian Curless, Ira Kemelmacher-Shlizerman, Steve Seitz.*
CVPR 2024 (Highlight). [[PDF](http://arxiv.org/abs/2308.14740)] [[Project](https://homes.cs.washington.edu/~boweiche/project_page/totalselfie/)]**OmniControl: Control Any Joint at Any Time for Human Motion Generation.**
*Yiming Xie, Varun Jampani, Lei Zhong, Deqing Sun, Huaizu Jiang.*
ICLR 2024. [[PDF](http://arxiv.org/abs/2310.08580)] [[Project](https://neu-vi.github.io/omnicontrol/)]**MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model.**
*[Mingyuan Zhang](https://mingyuan-zhang.github.io/), [Zhongang Cai](https://caizhongang.github.io/), [Liang Pan](https://github.com/paul007pl), [Fangzhou Hong](https://hongfz16.github.io/), [Xinying Guo](https://gxyes.github.io/), [Lei Yang](https://yanglei.me/), [Ziwei Liu](https://liuziwei7.github.io/).*
TPAMI 2024. [[PDF](https://arxiv.org/abs/2208.15001)] [[Project](https://mingyuan-zhang.github.io/projects/MotionDiffuse.html)] [[Code](https://github.com/mingyuan-zhang/MotionDiffuse)]**TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis.**
*Mathis Petrovich, Michael J. Black and Gül Varol.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2305.00976)] [[Project](https://mathis.petrovich.fr/tmr/index.html)] [[Code](https://github.com/Mathux/TMR)]**SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation.**
*Nikos Athanasiou, Mathis Petrovich, Michael J. Black, Gül Varol.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2304.10417)] [[Project](https://sinc.is.tue.mpg.de/)] [[Code](https://github.com/athn-nik/sinc)]**HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion.**
*[Mustafa Işık](https://www.mustafaisik.net/), Martin Rünz, Markos Georgopoulos, Taras Khakhulin, Jonathan Starck, Lourdes Agapito, Matthias Nießner.*
SIGGRAPH 2023. [[PDF](http://arxiv.org/abs/2305.06356)] [[Project](http://www.synthesiaresearch.github.io/humanrf)] [[Code](https://github.com/synthesiaresearch/humanrf)] [[Data](http://www.actors-hq.com/)]**GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents.**
*[Tenglong Ao](https://aubrey-ao.github.io/), Zeyi Zhang, [Libin Liu](https://libliu.info/).*
SIGGRAPH 2023 (Journal Track). [[PDF](https://arxiv.org/abs/2303.14613)] [[Project](https://pku-mocca.github.io/GestureDiffuCLIP-Page/)] [[Code]()]**MDM: Human Motion Diffusion Model.**
*Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, Amit H. Bermano.*
ICLR 2023. [[PDF](https://arxiv.org/abs/2209.14916)] [[Project](https://guytevet.github.io/mdm-page/)] [[Code](https://github.com/GuyTevet/motion-diffusion-model)]**MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis.**
*[Rishabh Dabral](https://www.cse.iitb.ac.in/~rdabral/), [Muhammad Hamza Mughal](https://m-hamza-mughal.github.io/), [Vladislav Golyanik](https://people.mpi-inf.mpg.de/~golyanik/), [Christian Theobalt](https://people.mpi-inf.mpg.de/~theobalt/).*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.04495)] [[Project](https://vcai.mpi-inf.mpg.de/projects/MoFusion/)]**MotionCLIP: Exposing Human Motion Generation to CLIP Space.**
*Guy Tevet, Brian Gordon, Amir Hertz, Amit H. Bermano, Daniel Cohen-Or.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2203.08063)] [[Project](https://guytevet.github.io/motionclip-page/)] [[Code](https://github.com/GuyTevet/MotionCLIP)]**TEMOS: Generating diverse human motions from textual descriptions.**
*[Mathis Petrovich](https://mathis.petrovich.fr/), Michael J. Black, Gül Varol.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2204.14109)] [[Project](https://mathis.petrovich.fr/temos/)] [[Code](https://github.com/Mathux/TEMOS)]**TEACH: Temporal Action Composition for 3D Human.**
*Nikos Athanasiou, Mathis Petrovich, Michael J. Black, Gül Varol.*
3DV 2022. [[PDF](https://arxiv.org/abs/2209.04066)] [[Project](https://teach.is.tue.mpg.de/)] [[Code](https://github.com/athn-nik/teach)]

## Clothed Human Digitalization
[Project Splinter](https://project-splinter.github.io/): Human Digitalization with Implicit Representation.
**PuzzleAvatar: Assembling 3D Avatars from Personal Albums.**
*Yuliang Xiu, Yufei Ye, Zhen Liu, Dimitrios Tzionas, Michael J. Black.*
SIGGRAPH Asia (TOG) 2024. [[PDF](https://arxiv.org/abs/2405.14869)]**iHuman: Instant Animatable Digital Humans From Monocular Videos.**
*Pramish Paudel, Anubhav Khanal, Ajad Chhatkuli, Danda Pani Paudel, Jyoti Tandukar.*
ECCV 2024. [[PDF](https://arxiv.org/abs/2407.11174)]**HiLo: Detailed and Robust 3D Clothed Human Reconstruction with High-and Low-Frequency Information of Parametric Models.**
*Yifan Yang, Dong Liu, Shuhai Zhang, Zeshuai Deng, Zixiong Huang, Mingkui Tan.*
CVPR 2024. [[PDF](https://arxiv.org/abs/2404.04876)] [[Github](https://github.com/YifYang993/HiLo)]**IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos Via Explicit Ray Tracing.**
*Shaofei Wang, Božidar Antić, Andreas Geiger, Siyu Tang.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2312.05210)] [[Project](https://neuralbodies.github.io/IntrinsicAvatar)]**GaussianAvatar: Towards Realistic Human Avatar Modeling from A Single Video Via Animatable 3D Gaussians.**
*Liangxiao Hu, Hongwen Zhang, Yuxiang Zhang, Boyao Zhou, Boning Liu, Shengping Zhang, Liqiang Nie.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2312.02134)] [[Project](https://huliangxiao.github.io/GaussianAvatar)] [[Github](https://github.com/aipixel/GaussianAvatar)]**SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion.**
*[Hsuan-I Ho](https://azuxmioy.github.io/), [Jie Song](https://ait.ethz.ch/people/song), [Otmar Hilliges](https://ait.ethz.ch/people/hilliges).*
CVPR 2024. [[PDF](https://arxiv.org/abs/2311.15855)] [[Project](https://sith-diffusion.github.io/)]**Recovering 3D Human Mesh from Monocular Images: A Survey.**
*[Yating Tian](https://github.com/tinatiansjz), [Hongwen Zhang](https://github.com/HongwenZhang), [Yebin Liu](https://www.liuyebin.com/), [Limin Wang](https://wanglimin.github.io/).*
TPAMI 2023. [[PDF](https://arxiv.org/abs/2203.01923)] [[Project](https://github.com/tinatiansjz/hmr-survey)] [[Dataset](https://github.com/tinatiansjz/hmr-survey#datasets)] [[Benchmarks](https://github.com/tinatiansjz/hmr-survey#benchmarks)]**Mirror-Aware Neural Humans.**
*Daniel Ajisafe, James Tang, Shih-Yang Su, Bastian Wandt, Helge Rhodin.*
3DV 2024. [[PDF](https://arxiv.org/abs/2311.09221)] [[Project](https://danielajisafe.github.io/mirror-aware-neural-humans/)]**TeCH: Text-guided Reconstruction of Lifelike Clothed Humans.**
*[Yangyi Huang](https://huangyangyi.github.io/), [Hongwei Yi](https://xyyhw.top/), [Yuliang Xiu](https://xiuyuliang.cn/), [Tingting Liao](https://github.com/tingtingliao), [Jiaxiang Tang](https://me.kiui.moe/), [Deng Cai](http://www.cad.zju.edu.cn/home/dengcai/), [Justus Thies](http://justusthies.github.io/).*
3DV 2024. [[PDF](https://arxiv.org/abs/2308.08545)] [[Project](https://huangyangyi.github.io/TeCH/)]**Single-Image 3D Human Digitization with Shape-Guided Diffusion.**
*Badour AlBahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang.*
SIGGRAPH Asia 2023. [[PDF](https://arxiv.org/abs/2208.15001)] [[Project](https://human-sgd.github.io/)]**Global-correlated 3D-decoupling Transformer for Clothed Avatar Reconstruction.**
*Zechuan Zhang, Li Sun, Zongxin Yang, Ling Chen, Yi Yang.*
NeurIPS 2023. [[PDF](https://arxiv.org/abs/2309.13524)]**ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns.**
*Ren Li, Benoît Guillard, Pascal Fua.*
NeurIPS 2023. [[PDF](http://arxiv.org/abs/2305.14100)]**NCHO: Unsupervised Learning for Neural 3D Composition of Humans and Objects.**
*[Taeksoo Kim](https://taeksuu.github.io/), [Shunsuke Saito](https://shunsukesaito.github.io/), [Hanbyul Joo](https://jhugestar.github.io/).*
ICCV 2023. [[PDF](https://arxiv.org/abs/2305.14345)] [[Project](https://taeksuu.github.io/ncho/)]**Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models.**
*[Byungjun Kim](https://bjkim95.github.io/), Patrick Kwon, Kwangho Lee, Myunggi Lee, Sookwan Han, Daesik Kim, Hanbyul Joo.*
ICCV 2023 (Oral). [[PDF](https://arxiv.org/abs/2305.11870)] [[Project](https://snuvclab.github.io/chupa)]**SHERF: Generalizable Human NeRF from a Single Image.**
*[Shoukang Hu](https://skhu101.github.io/), [Fangzhou Hong](https://hongfz16.github.io/), [Liang Pan](https://scholar.google.com/citations?user=lSDISOcAAAAJ), Haiyi Mei, [Lei Yang](https://scholar.google.com.hk/citations?user=jZH2IPYAAAAJ&hl=en), [Ziwei Liu](https://liuziwei7.github.io/).*
ICCV 2023. [[PDF](https://arxiv.org/abs/2303.12791)] [[Project](https://skhu101.github.io/SHERF/)] [[Code](https://github.com/skhu101/SHERF)]**SynBody: Synthetic Dataset with Layered Human Models for 3D Human Perception and Modeling.**
*Zhitao Yang, Zhongang Cai, Haiyi Mei, Shuai Liu, Zhaoxi Chen, Weiye Xiao, Yukun Wei, Zhongfei Qing, Chen Wei, Bo Dai, Wayne Wu, Chen Qian, Dahua Lin, Ziwei Liu, Lei Yang.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2303.17368)] [[Project](https://maoxie.github.io/SynBody/)]**Cyclic Test-Time Adaptation on Monocular Video for 3D Human Mesh Reconstruction.**
*Hyeongjin Nam, Daniel Sungho Jung, Yeonguk Oh, Kyoung Mu Lee.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2308.06554)]**HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion.**
*[Mustafa Işık](https://www.mustafaisik.net/), Martin Rünz, Markos Georgopoulos, Taras Khakhulin, Jonathan Starck, Lourdes Agapito, Matthias Nießner.*
SIGGRAPH 2023. [[PDF](http://arxiv.org/abs/2305.06356)] [[Project](http://www.synthesiaresearch.github.io/humanrf)] [[Code](https://github.com/synthesiaresearch/humanrf)] [[Data](http://www.actors-hq.com/)]**AvatarReX: Real-time Expressive Full-body Avatars.**
*Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu.*
SIGGRAPH 2023. [[PDF](https://arxiv.org/abs/2305.04789)] [[Project](https://liuyebin.com/AvatarRex/)]

**PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling.**
*Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu.*
SIGGRAPH 2023. [[PDF](https://arxiv.org/abs/2304.13006)] [[Project](https://lizhe00.github.io/projects/posevocab)]

**High-Fidelity Clothed Avatar Reconstruction from a Single Image.**
*Tingting Liao, Xiaomei Zhang, Yuliang Xiu, Hongwei Yi, Xudong Liu, Guo-Jun Qi, Yong Zhang, Xuan Wang, Xiangyu Zhu, Zhen Lei.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2304.03903)] [[Code](https://github.com/TingtingLiao/CAR)]**SeSDF: Self-evolved Signed Distance Field for Implicit 3D Clothed Human Reconstruction.**
*[Yukang Cao](https://yukangcao.github.io/), [Kai Han](https://www.kaihan.org/), [Kenneth Kwan-Yee K. Wong](https://i.cs.hku.hk/~kykwong/).*
CVPR 2023. [[PDF](https://arxiv.org/abs/2304.00359)] [[Project](https://yukangcao.github.io/SeSDF/)] [[Code](https://yukangcao.github.io/)]**Structured 3D Features for Reconstructing Relightable and Animatable Avatars.**
*Enric Corona, Mihai Zanfir, Thiemo Alldieck, Eduard Gabriel Bazavan, Andrei Zanfir, Cristian Sminchisescu.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.06820)] [[Project](https://enriccorona.github.io/s3f/)]**Reconstructing Animatable Categories from Videos.**
*[Gengshan Yang](https://gengshan-y.github.io/), [Chaoyang Wang](https://mightychaos.github.io/), [N Dinesh Reddy](https://dineshreddy91.github.io/), [Deva Ramanan](http://www.cs.cmu.edu/~deva/).*
CVPR 2023. [[PDF](http://arxiv.org/abs/2305.06351)] [[Project](https://gengshan-y.github.io/rac-www/)] [[Code](https://github.com/gengshan-y/rac)]**Representing Volumetric Videos as Dynamic MLP Maps.**
*Sida Peng, Yunzhi Yan, Qing Shuai, Hujun Bao, Xiaowei Zhou.*
CVPR 2023. [[PDF](https://arxiv.org/pdf/2304.06717.pdf)] [[Project](https://zju3dv.github.io/mlp_maps)] [[Code](https://github.com/zju3dv/mlp_maps)]**Learning Neural Volumetric Representations of Dynamic Humans in Minutes.**
*[Chen Geng](https://chen-geng.com/), Sida Peng, Zhen Xu, Hujun Bao, Xiaowei Zhou.*
CVPR 2023. [[PDF](https://chen-geng.com/files/instant_nvr.pdf)] [[Project](https://zju3dv.github.io/instant_nvr/)] [[Code](https://github.com/zju3dv/instant-nvr/)]**CloSET: Modeling Clothed Humans on Continuous Surface with Explicit Template Decomposition.**
*[Hongwen Zhang](https://hongwenzhang.github.io/), [Siyou Lin](https://jsnln.github.io/), [Ruizhi Shao](https://dsaurus.github.io/saurus), [Yuxiang Zhang](https://zhangyux15.github.io/), [Zerong Zheng](https://zhengzerong.github.io/), [Han Huang](http://www.liuyebin.com/closet/#), [Yandong Guo](http://www.liuyebin.com/closet/#), [Yebin Liu](https://liuyebin.com/).*
CVPR 2023. [[PDF](http://www.liuyebin.com/closet/assets/CloSET_CVPR2023.pdf)] [[Project](http://www.liuyebin.com/closet/)]**MonoHuman: Animatable Human Neural Field from Monocular Video.**
*Zhengming Yu, Wei Cheng, Xian Liu, Wayne Wu, Kwan-Yee Lin.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2304.02001)] [[Project](https://yzmblog.github.io/projects/MonoHuman/)]**FlexNeRF: Photorealistic Free-viewpoint Rendering of Moving Humans from Sparse Views.**
*Vinoj Jayasundara, Amit Agrawal, Nicolas Heron, Abhinav Shrivastava, Larry S. Davis.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2303.14368)]**High-fidelity 3D Human Digitization from Single 2K Resolution Images.**
*Sang-Hun Han, Min-Gyu Park, Ju Hong Yoon, Ju-Mi Kang, Young-Jae Park, Hae-Gon Jeon.*
CVPR 2023 (Highlight). [[PDF](https://arxiv.org/abs/2303.15108)] [[Code](https://github.com/SangHunHan92/2K2K)]**Learning Neural Volumetric Representations of Dynamic Humans in Minutes.**
*Chen Geng, Sida Peng, Zhen Xu, Hujun Bao, Xiaowei Zhou.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2302.12237)] [[Project](https://zju3dv.github.io/instant_nvr/)]**Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition.**
*Chen Guo, Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2302.11566)] [[Project](https://moygcc.github.io/vid2avatar/)]**Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion.**
*Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, Baining Guo.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.06135)] [[Project](https://3d-avatar-diffusion.microsoft.com/)]**ECON: Explicit Clothed humans Obtained from Normals.**
*Yuliang Xiu, Jinlong Yang, Xu Cao, Dimitrios Tzionas, Michael J. Black.*
CVPR 2023. [[PDF](https://arxiv.org/abs/)] [[Project](https://xiuyuliang.cn/econ)] [[Code](https://github.com/YuliangXiu/ECON)]**X-Avatar: Expressive Human Avatars.**
*[Kaiyue Shen](https://skype-line.github.io/), Chen Guo, Manuel Kaufmann, Juan Jose Zarate, Julien Valentin, Jie Song, Otmar Hilliges.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2303.04805)] [[Project](https://skype-line.github.io/projects/X-Avatar/)] [[Code](https://github.com/Skype-line/X-Avatar)]**InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds.**
*Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.10550)] [[Project](https://tijiang13.github.io/InstantAvatar/)] [[Code](https://github.com/tijiang13/InstantAvatar)]

**Learning Visibility Field for Detailed 3D Human Reconstruction and Relighting.**
*Ruichen Zheng, Peng Li, Haoqian Wang, Tao Yu.*
CVPR 2023. [[PDF](http://arxiv.org/abs/2304.11900)]**HumanGen: Generating Human Radiance Fields with Explicit Priors.**
*Suyi Jiang, Haoran Jiang, Ziyu Wang, Haimin Luo, Wenzheng Chen, Lan Xu.*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.05321)]**SHARP: Shape-Aware Reconstruction of People In Loose Clothing.**
*Sai Sagar Jinka, Rohan Chacko, Astitva Srivastava, Avinash Sharma, P.J. Narayanan.*
IJCV 2023. [[PDF](https://arxiv.org/abs/2106.04778)]**Geometry-aware Two-scale PIFu Representation for Human Reconstruction.**
*Zheng Dong, Ke Xu, Ziheng Duan, Hujun Bao, Weiwei Xu, Rynson W.H. Lau.*
NeurIPS 2022. [[PDF](https://arxiv.org/abs/2112.02082)]**TotalSelfScan: Learning Full-body Avatars from Self-Portrait Videos of Faces, Hands, and Bodies.**
*[Junting Dong](https://jtdong.com/), [Qi Fang](https://raypine.github.io/), Yudong Guo, Sida Peng, Qing Shuai, Hujun Bao, Xiaowei Zhou.*
NeurIPS 2022. [[PDF](https://openreview.net/pdf?id=lgj33-O1Ely)] [[Project](https://zju3dv.github.io/TotalSelfScan/)] [[Data](http://jtdong.com/)]**FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction.**
*[Qiao Feng](https://fengq1a0.github.io/), [Yebin Liu](http://www.liuyebin.com/), [Yu-Kun Lai](https://users.cs.cf.ac.uk/Yukun.Lai/), [Jingyu Yang](http://seea.tju.edu.cn/info/1015/1608.htm), [Kun Li](http://cic.tju.edu.cn/faculty/likun/).*
NeurIPS 2022. [[PDF](https://arxiv.org/abs/2206.02194)] [[Project](https://cic.tju.edu.cn/faculty/likun/projects/FOF/index.html)] [[Code](https://github.com/fengq1a0/FOF)]

**Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces.**
*Yihao Zhi, Shenhan Qian, Xinhao Yan, Shenghua Gao.*
3DV 2022. [[PDF](https://arxiv.org/abs/2208.14851)] [[Code](https://github.com/zyhbili/Dual-Space-NeRF)]

**Neural Point-based Shape Modeling of Humans in Challenging Clothing.**
*Qianli Ma, Jinlong Yang, Michael J. Black, Siyu Tang.*
3DV 2022. [[PDF](https://arxiv.org/abs/2209.06814)]**HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars.**
*Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker.*
3DV 2022. [[PDF](https://arxiv.org/abs/2112.10203)] [[Project](https://www.cs.umd.edu/~taohu/hvtr/)]**Human Performance Modeling and Rendering via Neural Animated Mesh.**
*[Fuqiang Zhao](https://zhaofuq.github.io/), Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu.*
SIGGRAPH Asia 2022. [[PDF](https://arxiv.org/abs/2209.08468)] [[Project](https://zhaofuq.github.io/NeuralAM/)]**FloRen: Real-time High-quality Human Performance Rendering via Appearance Flow Using Sparse RGB Cameras.**
*Ruizhi Shao, Liliang Chen, Zerong Zheng, Hongwen Zhang, Yuxiang Zhang, Han Huang, Yandong Guo, Yebin Liu.*
SIGGRAPH Asia 2022. [[PDF](https://dl.acm.org/doi/abs/10.1145/3550469.3555409)]**Occupancy Planes for Single-view RGB-D Human Reconstruction.**
*Xiaoming Zhao, Yuan-Ting Hu, Zhongzheng Ren, Alexander G. Schwing.*
AAAI 2023. [[PDF](https://arxiv.org/abs/2208.02817)] [[Code](https://github.com/Xiaoming-Zhao/oplanes)]**HuMMan: Multi-Modal 4D Human Dataset for Versatile Sensing and Modeling.**
*[Zhongang Cai](https://caizhongang.github.io/), [Daxuan Ren](https://kimren227.github.io/), [Ailing Zeng](https://ailingzeng.site/), [Zhengyu Lin](https://www.linkedin.com/in/zhengyu-lin-a908aba8/), [Tao Yu](https://ytrock.com/), [Wenjia Wang](https://scholar.google.com/citations?user=cVWmlYQAAAAJ&hl), Xiangyu Fan, Yang Gao, Yifan Yu, Liang Pan, Fangzhou Hong, Mingyuan Zhang, Chen Change Loy, Lei Yang, Ziwei Liu.*
ECCV 2022 (Oral). [[PDF](https://arxiv.org/abs/2204.13686)] [[Project](https://caizhongang.github.io/projects/HuMMan/)]**Unsupervised Learning of Efficient Geometry-Aware Neural Articulated Representations.**
*Atsuhiro Noguchi, Xiao Sun, Stephen Lin, Tatsuya Harada.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2204.08839)] [[Project](https://nogu-atsu.github.io/ENARF-GAN/)] [[Code](https://github.com/nogu-atsu/ENARF-GAN)]**NeuMan: Neural Human Radiance Field from a Single Video.**
*Wei Jiang, Kwang Moo Yi, Golnoosh Samei, Oncel Tuzel, Anurag Ranjan.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2203.10157)] [[Code](https://github.com/apple/ml-neuman)]**ARAH: Animatable Volume Rendering of Articulated Human SDFs.**
*[Shaofei Wang](https://taconite.github.io/), [Katja Schwarz](https://katjaschwarz.github.io/), [Andreas Geiger](http://www.cvlibs.net/), [Siyu Tang](https://vlg.inf.ethz.ch/team/Prof-Dr-Siyu-Tang.html).*
ECCV 2022. [[PDF](https://arxiv.org/abs/2210.10036)] [[Project](https://neuralbodies.github.io/arah/)] [[Code](https://github.com/taconite/arah-release)]**DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras.**
*Ruizhi Shao, Zerong Zheng, Hongwen Zhang, Jingxiang Sun, Yebin Liu.*
ECCV (Oral) 2022. [[PDF](https://arxiv.org/abs/2207.08000)] [[Code](https://github.com/DSaurus/DiffuStereo)]**LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling.**
*Boyan Jiang, Xinlin Ren, Mingsong Dou, Xiangyang Xue, Yanwei Fu, Yinda Zhang.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2208.08622)] [[Code](https://boyanjiang.github.io/LoRD/)]**Neural Capture of Animatable 3D Human from Monocular Video.**
*Gusi Te, Xiu Li, Xiao Li, Jinglu Wang, Wei Hu, Yan Lu.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2208.08728)]**The One Where They Reconstructed 3D Humans and Environments in TV Shows.**
*Georgios Pavlakos, Ethan Weber, Matthew Tancik, Angjoo Kanazawa.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2207.14279)] [[Project](http://ethanweber.me/sitcoms3D/)]**UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation.**
*Shenhan Qian, Jiale Xu, Ziwei Liu, Liqian Ma, Shenghua Gao.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2207.09835)]**3D Clothed Human Reconstruction in the Wild.**
*Gyeongsik Moon, Hyeongjin Nam, Takaaki Shiratori, Kyoung Mu Lee.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2207.10053)] [[Project](https://github.com/hygenie1228/ClothWild_RELEASE/blob/main)]**NDF: Neural Deformable Fields for Dynamic Human Modelling.**
*Ruiqi Zhang, Jie Chen.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2207.09193)]**Animatable Volume Rendering of Articulated Human SDFs.**
*[Shaofei Wang](https://taconite.github.io/), [Katja Schwarz](https://katjaschwarz.github.io/), [Andreas Geiger](http://www.cvlibs.net/), [Siyu Tang](https://vlg.inf.ethz.ch/team/Prof-Dr-Siyu-Tang.html).*
ECCV 2022. [[PDF](https://drive.google.com/file/d/10yCrdOadwKNiDQBni23_W03ZwVafkfCJ/view)] [[Project](https://neuralbodies.github.io/arah/)] [[Code](https://github.com/taconite/arah-release)]

**Learning Implicit Templates for Point-Based Clothed Human Modeling.**
*Siyou Lin, Hongwen Zhang, Zerong Zheng, Ruizhi Shao, Yebin Liu.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2207.06955)] [[Project](https://jsnln.github.io/fite)] [[Code](https://github.com/jsnln/fite)]

**DANBO: Disentangled Articulated Neural Body Representations via Graph Neural Networks.**
*[Shih-Yang Su](https://lemonatsu.github.io/), [Timur Bagautdinov](https://scholar.google.ch/citations?user=oLi7xJ0AAAAJ&hl=en), and [Helge Rhodin](http://helge.rhodin.de/).*
ECCV 2022. [[PDF](https://arxiv.org/abs/2205.01666)] [[Project](https://lemonatsu.github.io/danbo/)] [[Code](https://github.com/LemonATsu/DANBO-pytorch)]**Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural Human Rendering.**
*Mingfei Chen, Jianfeng Zhang, Xiangyu Xu, Lijuan Liu, Jiashi Feng, Shuicheng Yan.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2112.04312)]**AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture.**
*Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2207.02031)] [[Project](http://www.liuyebin.com/avatarcap/avatarcap.html)]**Authentic Volumetric Avatars From a Phone Scan.**
*[Chen Cao](https://sites.google.com/site/zjucaochen/home), Tomas Simon, Jin Kyu Kim, Gabe Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Lombardi, Shih-en Wei, Danielle Belko, Shoou-i Yu, Yaser Sheikh, Jason Saragih.*
SIGGRAPH 2022. [[PDF](https://drive.google.com/file/d/1i4NJKAggS82wqMamCJ1OHRGgViuyoY6R/view?usp=sharing)] [[Project](https://zollhoefer.com/papers/arXiv22_InstantAvatars/page.html)]**HumanNeRF: Efficiently Generated Human Radiance Field from Sparse Inputs.**
*[Fuqiang Zhao](https://zhaofuq.github.io/), Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu.*
CVPR 2022. [[PDF](https://arxiv.org/pdf/2112.02789.pdf)] [[Project](https://zhaofuq.github.io/humannerf/)]**Photorealistic Monocular 3D Reconstruction of Humans Wearing Clothing.**
*[Thiemo Alldieck](https://scholar.google.com/citations?user=tJlD24EAAAAJ), [Mihai Zanfir](https://scholar.google.com/citations?user=af68sKkAAAAJ), [Cristian Sminchisescu](https://scholar.google.com/citations?user=LHTI1W8AAAAJ).*
CVPR 2022. [[PDF](https://phorhum.github.io/static/assets/alldieck2022phorhum.pdf)] [[Project](https://phorhum.github.io/)]**HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular Video.**
*[Chung-Yi Weng](https://homes.cs.washington.edu/~chungyi/), Brian Curless, Pratul Srinivasan, Jonathan T. Barron, Ira Kemelmacher-Shlizerman.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2201.04127)] [[Project](https://grail.cs.washington.edu/projects/humannerf/)]**H4D: Human 4D Modeling by Learning Neural Compositional Representation.**
*Boyan Jiang, Yinda Zhang, Xingkui Wei, Xiangyang Xue, Yanwei Fu.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.01247)]**OcclusionFusion: Occlusion-aware Motion Estimation for Real-time Dynamic 3D Reconstruction.**
*[Wenbin Lin](https://wenbin-lin.github.io/), Chengwei Zheng, Jun-Hai Yong, Feng Xu.*
CVPR 2022. [[PDF](https://arxiv.org/pdf/2203.07977.pdf)] [[Project](https://wenbin-lin.github.io/OcclusionFusion/)]**PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D Video Sequence.**
*Zijian Dong, Chen Guo, Jie Song, Xu Chen, Andreas Geiger, Otmar Hilliges.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.01754)] [[Project](https://zj-dong.github.io/pina/)]**SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video.**
*[Boyi Jiang](https://scholar.google.com/citations?user=lTlZV8wAAAAJ&hl=zh-CN), Yang Hong, Hujun Bao, Juyong Zhang.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2201.12792)] [[Project](https://jby1993.github.io/SelfRecon/)]**Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time.**
*Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2202.08614)]**NeuralHOFusion: Neural Volumetric Rendering under Human-object Interactions.**
*Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2202.12825)] [[Project](https://nowheretrix.github.io/neuralfusion/)]**JIFF: Jointly-aligned Implicit Face Function for High Quality Single View Clothed Human Reconstruction.**
*[Yukang Cao](https://github.com/yukangcao), [Guanying Chen](https://guanyingc.github.io/), [Kai Han](http://www.hankai.org/), [Wenqi Yang](https://github.com/ywq), [Kwan-Yee K. Wong](http://i.cs.hku.hk/~kykwong/).*
CVPR 2022 (oral). [[PDF](https://arxiv.org/abs/2204.10549)] [[Project](https://yukangcao.github.io/JIFF)]**High-Fidelity Human Avatars from a Single RGB Camera.**
*Hao Zhao, [Jinsong Zhang](https://zhangjinso.github.io/), [Yu-Kun Lai](https://users.cs.cf.ac.uk/Yukun.Lai/), [Zerong Zheng](https://zhengzerong.github.io/), Yingdi Xie, [Yebin Liu](http://www.liuyebin.com/), [Kun Li](http://cic.tju.edu.cn/faculty/likun/).*
CVPR 2022. [[PDF](http://cic.tju.edu.cn/faculty/likun/projects/HF-Avatar/assets/main.pdf)] [[Project](http://cic.tju.edu.cn/faculty/likun/projects/HF-Avatar/index.html)] [[Code](https://github.com/hzhao1997/HF-Avatar)] [[Data](https://drive.google.com/file/d/1qh1dj5ZoUBst_02UJY7IWstQMhb8L5IA/view?usp=sharing)]

**ICON: Implicit Clothed humans Obtained from Normals.**
*[Yuliang Xiu](https://ps.is.tuebingen.mpg.de/person/yxiu), [Jinlong Yang](https://ps.is.tuebingen.mpg.de/person/jyang), [Dimitrios Tzionas](https://ps.is.mpg.de/~dtzionas), [Michael J. Black](https://ps.is.tuebingen.mpg.de/person/black).*
CVPR 2022. [[PDF](https://arxiv.org/abs/2112.09127)] [[Code](https://github.com/YuliangXiu/ICON)]**DoubleField: Bridging the Neural Surface and Radiance Fields for High-fidelity Human Rendering.**
*Ruizhi Shao, Hongwen Zhang, He Zhang, Yanpei Cao, Tao Yu, Yebin Liu.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2106.03798)] [[Project](http://www.liuyebin.com/dbfield/dbfield.html)]**Structured Local Radiance Fields for Human Avatar Modeling.**
*[Zerong Zheng](https://zhengzerong.github.io/), Han Huang, Tao Yu, Hongwen Zhang, Yandong Guo, Yebin Liu.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.14478)] [[Project](https://liuyebin.com/slrf/slrf.html)] [[Code](https://zhengzerong.github.io/)]**DoubleField: Bridging the Neural Surface and Radiance Fields for High-fidelity Human Reconstruction and Rendering.**
*Ruizhi Shao, [Hongwen Zhang](https://hongwenzhang.github.io/), He Zhang, Mingjia Chen, Yanpei Cao, Tao Yu, Yebin Liu.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2106.03798)] [[Project](http://www.liuyebin.com/dbfield/dbfield.html)]**I M Avatar: Implicit Morphable Head Avatars from Videos.**
*Yufeng Zheng, Victoria Fernández Abrevaya, Xu Chen, Marcel C. Bühler, Michael J. Black, Otmar Hilliges.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2112.07471)]**Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis.**
*Tianhan Xu, Yasuhiro Fujita, Eiichi Matsumoto.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2201.01683)]**Neural Head Avatars from Monocular RGB Videos.**
*[Philip-William Grassal](https://hci.iwr.uni-heidelberg.de/vislearn/people), [Malte Prinzler](https://de.linkedin.com/in/malte-prinzler), [Titus Leistner](https://titus-leistner.de/pages/about-me.html), [Carsten Rother](https://hci.iwr.uni-heidelberg.de/vislearn/people/carsten-rother/), [Matthias Nießner](https://www.niessnerlab.org/members/matthias_niessner/profile.html), [Justus Thies](https://justusthies.github.io/).*
CVPR 2022. [[PDF](https://arxiv.org/abs/2112.01554)] [[Project](https://philgras.github.io/neural_head_avatars/neural_head_avatars.html)]**gDNA: Towards Generative Detailed Neural Avatars.**
*[Xu Chen](https://ait.ethz.ch/people/xu/), [Tianjian Jiang](https://ait.ethz.ch/people/zhengyuf/), [Jie Song](https://ait.ethz.ch/people/song/), [Jinlong Yang](https://is.mpg.de/person/jyang), [Michael J. Black](https://ps.is.mpg.de/~black), [Andreas Geiger](http://www.cvlibs.net/), [Otmar Hilliges](https://ait.ethz.ch/people/hilliges/).*
CVPR 2022. [[PDF](https://arxiv.org/abs/2201.04123)] [[Project](https://ait.ethz.ch/projects/2022/gdna/downloads/)] [[Code](https://github.com/xuchen-ethz/gdna)]**HumanNeRF: Generalizable Neural Human Radiance Field from Sparse Inputs.**
*Fuqiang Zhao, Wei Yang, Jiakai Zhang, Pei Lin, Yingliang Zhang, Jingyi Yu, Lan Xu.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2112.02789)] [[Project](https://zhaofuq.github.io/humannerf/)]**PERGAMO: Personalized 3D Garments from Monocular Video.**
*Andrés Casado-Elvira, Marc Comino Trinidad, Dan Casas.*
CGF 2022. [[PDF](https://arxiv.org/abs/2210.15040)] [[Project](http://mslab.es/projects/PERGAMO/)]**Explicit Clothing Modeling for an Animatable Full-Body Avatar.**
*Donglai Xiang, Fabian Andres Prada, Timur Bagautdinov, Weipeng Xu, Yuan Dong, He Wen, Jessica Hodgins, Chenglei Wu.*
SIGGRAPH Asia 2021. [[PDF](https://arxiv.org/abs/2106.14879)]**Driving-Signal Aware Full-Body Avatars.**
*Timur Bagautdinov, Chenglei Wu, Tomas Simon, Fabian Prada, Takaaki Shiratori, Shih-En Wei, Weipeng Xu, Yaser Sheikh, Jason Saragih.*
TOG 2021. [[PDF](https://arxiv.org/abs/2105.10441)]**High-Fidelity 3D Digital Human Head Creation from RGB-D Selfies.**
*Xiangkai Lin, Yajing Chen, Linchao Bao, Haoxian Zhang, Sheng Wang, Xuefei Zhe, Xinwei Jiang, Jue Wang, Dong Yu, Zhengyou Zhang.*
TOG 2021. [[PDF](https://arxiv.org/abs/2010.05562)] [[Code](https://github.com/tencent-ailab/hifi3dface)] [[Project](https://github.com/tencent-ailab/hifi3dface_projpage)]**Real-time Deep Dynamic Characters.**
*Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt.*
TOG 2021. [[PDF](https://arxiv.org/abs/2105.01794)] [[Project](https://people.mpi-inf.mpg.de/~mhaberma/projects/2021-ddc/)]**Learning Skeletal Articulations with Neural Blend Shapes.**
*Peizhuo Li, Kfir Aberman, Rana Hanocka, Libin Liu, Olga Sorkine-Hornung, Baoquan Chen.*
TOG 2021. [[PDF](https://arxiv.org/abs/2105.02451)] [[Video](https://youtu.be/antc20EFh6k)] [[Project](https://peizhuoli.github.io/neural-blend-shapes/)]**Detailed Avatar Recovery from Single Image.**
*Hao Zhu, Xinxin Zuo, Haotian Yang, Sen Wang, Xun Cao, Ruigang Yang.*
TPAMI 2021. [[PDF](https://arxiv.org/abs/2108.02931)]**RSC-Net: 3D Human Pose, Shape and Texture from Low-Resolution Images and Videos.**
*Xiangyu Xu, Hao Chen, Francesc Moreno-Noguer, Laszlo A. Jeni, Fernando De la Torre.*
TPAMI 2021. [[PDF](https://arxiv.org/abs/2103.06498)] [[Code](https://github.com/xuxy09/RSC-Net)] [[Project](https://sites.google.com/view/xiangyuxu/3d_eccv20)]**A Deeper Look into DeepCap.**
*[Marc Habermann](http://people.mpi-inf.mpg.de/~mhaberma/), [Weipeng Xu](http://people.mpi-inf.mpg.de/~wxu/), [Michael Zollhoefer](https://zollhoefer.com/), [Gerard Pons-Moll](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/people/gerard-pons-moll/), [Christian Theobalt](http://www.mpi-inf.mpg.de/~theobalt/).*
TPAMI 2021. [[PDF](https://people.mpi-inf.mpg.de/~mhaberma/projects/2021-tpami-deeperdeepcap/data/paper.pdf)] [[Code](https://people.mpi-inf.mpg.de/~mhaberma/projects/2021-tpami-deeperdeepcap/)] [[Data](https://gvv-assets.mpi-inf.mpg.de/)]**Learning Implicit 3D Representations of Dressed Humans from Sparse Views.**
*Pierre Zins, Yuanlu Xu, Edmond Boyer, Stefanie Wuhrer, Tony Tung.*
3DV 2021. [[PDF](https://arxiv.org/abs/2104.08013)]**PIXIE: Collaborative Regression of Expressive Bodies using Moderation.**
*Yao Feng, Vasileios Choutas, Timo Bolkart, Dimitrios Tzionas, Michael J. Black.*
3DV 2021. [[PDF](https://arxiv.org/pdf/2105.05301.pdf)] [[Project](https://pixie.is.tue.mpg.de/)]**A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose.**
*[Shih-Yang Su](https://lemonatsu.github.io/), [Frank Yu](https://yu-frank.github.io/), [Michael Zollhoefer](https://zollhoefer.com/), [Helge Rhodin](http://helge.rhodin.de/).*
NeurIPS 2021. [[PDF](https://arxiv.org/abs/2102.06199)] [[Project](https://lemonatsu.github.io/ANeRF-Surface-free-Pose-Refinement/)]**Class-agnostic Reconstruction of Dynamic Objects from Videos.**
*[Zhongzheng Ren](https://jason718.github.io), [Xiaoming Zhao](https://xiaoming-zhao.com/index.php), [Alexander G. Schwing](http://www.alexander-schwing.de/).*
NeurIPS 2021. [[PDF](https://arxiv.org/abs/2112.02091)] [[Project](https://jason718.github.io/redo)]**Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering.**
*Youngjoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs.*
NeurIPS 2021. [[PDF](https://arxiv.org/abs/2109.07448)]**MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images.**
*[Shaofei Wang](https://taconite.github.io/), Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang.*
NeurIPS 2021. [[PDF](https://arxiv.org/abs/2106.11944)] [[Project](https://neuralbodies.github.io/metavatar/)] [[Code](https://github.com/taconite/MetaAvatar-release)]**Garment4D: Garment Reconstruction from Point Cloud Sequences.**
*[Fangzhou Hong](https://hongfz16.github.io/), Liang Pan, Zhongang Cai, Ziwei Liu.*
NeurIPS 2021. [[PDF](https://openreview.net/pdf?id=aF60hOEwHP)] [[Project](https://hongfz16.github.io/projects/Garment4D.html)] [[Code](https://github.com/hongfz16/Garment4D)]**Learning Anchored Unsigned Distance Functions with Gradient Direction Alignment for Single-view Garment Reconstruction.**
*Fang Zhao, Wenhao Wang, Shengcai Liao, Ling Shao.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2108.08478)] [[Code](https://github.com/zhaofang0627/AnchorUDF)]**Dynamic Surface Function Networks for Clothed Human Bodies.**
*Andrei Burov, Matthias Nießner, Justus Thies.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2104.03978)] [[Video](https://youtu.be/4wbSi9Sqdm4)] [[Code](https://github.com/andreiburov/DSFN)]**THUNDR: Transformer-based 3D HUmaN Reconstruction with Markers.**
*Mihai Zanfir, Andrei Zanfir, Eduard Gabriel Bazavan, William T. Freeman, Rahul Sukthankar, Cristian Sminchisescu.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2106.09336)]**Neural Articulated Radiance Field.**
*Atsuhiro Noguchi, Xiao Sun, Stephen Lin, Tatsuya Harada.*
ICCV 2021. [[PDF](http://arxiv.org/abs/2104.03110)]**3D Human Texture Estimation From a Single Image With Transformers.**
*Xiangyu Xu, Chen Change Loy.*
ICCV 2021. [[PDF](https://openaccess.thecvf.com/content/ICCV2021/html/Xu_3D_Human_Texture_Estimation_From_a_Single_Image_With_Transformers_ICCV_2021_paper.html)]**ARCH++: Animation-Ready Clothed Human Reconstruction Revisited.**
*Tong He, Yuanlu Xu, Shunsuke Saito, Stefano Soatto, Tony Tung.*
ICCV 2021. [[PDF](https://openaccess.thecvf.com/content/ICCV2021/html/He_ARCH_Animation-Ready_Clothed_Human_Reconstruction_Revisited_ICCV_2021_paper.html)]**DeePSD: Automatic Deep Skinning and Pose Space Deformation for 3D Garment Animation.**
*Hugo Bertiche, Meysam Madadi, Emilio Tylson, Sergio Escalera.*
ICCV 2021. [[PDF](https://openaccess.thecvf.com/content/ICCV2021/html/Bertiche_DeePSD_Automatic_Deep_Skinning_and_Pose_Space_Deformation_for_3D_ICCV_2021_paper.html)]**EgoRenderer: Rendering Human Avatars From Egocentric Camera Images.**
*Tao Hu, Kripasindhu Sarkar, Lingjie Liu, Matthias Zwicker, Christian Theobalt.*
ICCV 2021. [[PDF](https://openaccess.thecvf.com/content/ICCV2021/html/Hu_EgoRenderer_Rendering_Human_Avatars_From_Egocentric_Camera_Images_ICCV_2021_paper.html)]**SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes.**
*[Xu Chen](https://ait.ethz.ch/people/xu/), [Yufeng Zheng](https://ait.ethz.ch/people/zhengyuf/), [Michael J. Black](https://ps.is.mpg.de/~black), [Otmar Hilliges](https://ait.ethz.ch/people/hilliges/), [Andreas Geiger](http://www.cvlibs.net/).*
ICCV 2021. [[PDF](https://arxiv.org/abs/2104.03953)] [[Project](https://xuchen-ethz.github.io/snarf/)] [[Code](https://github.com/xuchen-ethz/snarf)]**Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing.**
*[Garvita Tiwari](https://virtualhumans.mpi-inf.mpg.de/people/Tiwari.html), Nikolaos Sarafianos, [Tony Tung](https://sites.google.com/site/tony2ng/), Gerard Pons-Moll.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2108.08807)] [[Project](https://virtualhumans.mpi-inf.mpg.de/neuralgif/)]**The Power of Points for Modeling Humans in Clothing.**
*[Qianli Ma](https://qianlim.github.io/), Jinlong Yang, Siyu Tang, Michael J. Black.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2109.01137)] [[Project](https://qianlim.github.io/POP)] [[Code](https://github.com/qianlim/POP)]**Learning to Regress Bodies from Images using Differentiable Semantic Rendering.**
*[Sai Kumar Dwivedi](https://ps.is.tuebingen.mpg.de/person/sdwivedi), [Nikos Athanasiou](https://ps.is.tuebingen.mpg.de/employees/nathanasiou), [Muhammed Kocabas](https://ps.is.tuebingen.mpg.de/person/mkocabas), [Michael J. Black](https://ps.is.tuebingen.mpg.de/person/black).*
ICCV 2021. [[PDF](https://arxiv.org/abs/2110.03480)] [[Project](https://dsr.is.tue.mpg.de/)]**imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose.**
*Thiemo Alldieck, Hongyi Xu, Cristian Sminchisescu.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2108.10842)]**DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras.**
*[Yang Zheng](https://y-zheng18.github.io/zy.github.io/), Ruizhi Shao, [Yuxiang Zhang](https://zhangyux15.github.io/), [Zerong Zheng](https://zhengzerong.github.io/), [Tao Yu](https://ytrock.com/), [Yebin Liu](http://www.liuyebin.com/student.html).*
ICCV 2021. [[PDF](http://www.liuyebin.com/dmc/assets/main.pdf)] [[Project](http://www.liuyebin.com/dmc/dmc.html)]**Animatable Neural Radiance Fields for Human Body Modeling.**
*[Sida Peng](https://pengsida.net/), Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Hujun Bao, Xiaowei Zhou.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2105.02872)] [[Project](https://zju3dv.github.io/animatable_nerf/)] [[Code](https://github.com/zju3dv/animatable_nerf)]**Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies.**
*Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, Hujun Bao.*
ICCV 2021. [[PDF](https://arxiv.org/pdf/2203.08133.pdf)] [[Project](https://zju3dv.github.io/animatable_nerf/)]**Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors.**
*Tao Yu, Zerong Zheng, Kaiwen Guo, Pengpeng Liu, Qionghai Dai, Yebin Liu.*
CVPR 2021 (oral). [[PDF](http://www.liuyebin.com/Function4D/assets/Function4D.pdf)] [[Project](http://www.liuyebin.com/Function4D/Function4D.html)] [[THuman2.0 Dataset](https://github.com/ytrock/THuman2.0-Dataset)]**POSEFusion: Pose-guided Selective Fusion for Single-view Human Volumetric Capture.**
*[Zhe Li](https://lizhe00.github.io/), Tao Yu, Zerong Zheng, Kaiwen Guo, Yebin Liu.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2103.15331)] [[Project](http://www.liuyebin.com/posefusion/posefusion.html)]**SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks.**
*Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2104.03313)] [[Code](https://www.cs.cmu.edu/~aayushb/SST/)]**Normalized Avatar Synthesis Using StyleGAN and Perceptual Refinement.**
*Huiwen Luo, Koki Nagano, Han-Wei Kung, [Qingguo Xu](https://qingguo-xu.com/), Zejian Wang, Lingyu Wei, Liwen Hu, Hao Li.*
CVPR 2021. [[PDF](http://arxiv.org/abs/2106.11423)]**DeepSurfels: Learning Online Appearance Fusion.**
*Marko Mihajlovic, Silvan Weder, Marc Pollefeys, Martin R. Oswald.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2012.14240)] [[Project](https://onlinereconstruction.github.io/DeepSurfels)] [[Code](https://github.com/onlinereconstruction/deep_surfels)]**Semi-Supervised Synthesis of High-Resolution Editable Textures for 3D Humans.**
*Bindita Chaudhuri, Nikolaos Sarafianos, Linda Shapiro, Tony Tung.*
CVPR 2021. [[PDF](http://arxiv.org/abs/2103.17266)]**StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision.**
*Yang Hong, [Juyong Zhang](http://staff.ustc.edu.cn/~juyong/), Boyi Jiang, [Yudong Guo](https://yudongguo.github.io/), Ligang Liu, Hujun Bao.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2104.05289)] [[Project](https://crishy1995.github.io/StereoPIFuProject/)] [[Code](https://github.com/CrisHY1995/StereoPIFu_Code)]**LASR: Learning Articulated Shape Reconstruction from a Monocular Video.**
*Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester Cole, Huiwen Chang, Deva Ramanan, William T. Freeman, Ce Liu.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2105.02976)] [[Project](https://lasr-google.github.io/)]**Multi-person Implicit Reconstruction from a Single Image.**
*Armin Mustafa, Akin Caliskan, Lourdes Agapito, Adrian Hilton.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2104.09283)]**StylePeople: A Generative Model of Fullbody Human Avatars.**
*Artur Grigorev, Karim Iskakov, Anastasia Ianina, Renat Bashirov, Ilya Zakharkin, Alexander Vakhitov, Victor Lempitsky.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2104.08363)]**SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements.**
*Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2104.07660)] [[Project](https://qianlim.github.io/SCALE)]**Monocular Real-time Full Body Capture with Inter-part Correlations.**
*[Yuxiao Zhou](https://calciferzh.github.io/), [Marc Habermann](https://people.mpi-inf.mpg.de/~mhaberma/), Ikhsanul Habibie, Ayush Tewari, [Christian Theobalt](https://people.mpi-inf.mpg.de/~theobalt/), [Feng Xu](http://xufeng.site/).*
CVPR 2021. [[PDF](https://arxiv.org/abs/2012.06087)]**SMPLicit: Topology-aware Generative Model for Clothed People.**
*Enric Corona, Albert Pumarola, Guillem Alenyà, Gerard Pons-Moll, Francesc Moreno-Noguer.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2103.06871)] [[Project](http://www.iri.upc.edu/people/ecorona/smplicit/)]**A Deep Emulator for Secondary Motion of 3D Characters.**
*Mianlun Zheng, Yi Zhou, Duygu Ceylan, Jernej Barbic.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2103.01261)]**PVA: Pixel-aligned Volumetric Avatars.**
*Amit Raj, Michael Zollhoefer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, Stephen Lombardi.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2101.02697)] [[Project](https://volumetric-avatars.github.io/)]**ANR: Articulated Neural Rendering for Virtual Avatars.**
*Amit Raj, Julian Tanke, James Hays, Minh Vo, Carsten Stoll, Christoph Lassner.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2012.12890)] [[Project](https://anr-avatars.github.io/)]**SMPLpix: Neural Avatars from 3D Human Models.**
*Sergey Prokudin, Michael J. Black, Javier Romero.*
WACV 2021. [[PDF](https://arxiv.org/abs/2008.06872)] [[Code](https://github.com/sergeyprokudin/smplpix)]**Synthetic Training for Monocular Human Mesh Recovery.**
*Yu Sun, Qian Bao, Wu Liu, Wenpeng Gao, Yili Fu, Chuang Gan, Tao Mei.*
BMVC 2020. [[PDF](https://arxiv.org/abs/2010.14036)]**Realistic Virtual Humans from Smartphone Videos.**
*Stephan Wenninger, Jascha Achenbach, Andrea Bartl, Marc Erich Latoschik, [Mario Botsch](https://ls7-gv.cs.tu-dortmund.de/).*
ACM VRST 2020 (Best Paper Award). [[PDF](https://ls7-gv.cs.tu-dortmund.de/downloads/publications/2020/vrst20.pdf)] [[Video](https://ls7-gv.cs.tu-dortmund.de/downloads/publications/2020/vrst20.mp4)] [[Talk](https://youtu.be/Mm318gs_fb8)]**LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration.**
*Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, Gerard Pons-Moll.*
NeurIPS 2020. [[PDF](https://arxiv.org/abs/2010.12447)] [[Code](https://github.com/bharat-b7/LoopReg)]**3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data.**
*Benjamin Biggs, Sébastien Ehrhadt, Hanbyul Joo, Benjamin Graham, Andrea Vedaldi, David Novotny.*
NeurIPS 2020. [[PDF](https://arxiv.org/abs/2011.00980)]**HPBTT: Human Parsing Based Texture Transfer from Single Image to 3D Human via Cross-View Consistency.**
*Fang Zhao, Shengcai Liao, Kaihao Zhang, Ling Shao.*
NeurIPS 2020. [[PDF](https://papers.nips.cc/paper/2020/file/a516a87cfcaef229b342c437fe2b95f7-Paper.pdf)] [[Code](https://github.com/zhaofang0627/HPBTT)]**Neural3D: Light-weight Neural Portrait Scanning via Context-aware Correspondence Learning.**
*Xin Suo, Minye Wu, Yanshun Zhang, Yingliang Zhang, Lan Xu, Qiang Hu, Jingyi Yu.*
ACM MM 2020. [[PDF](https://dl.acm.org/doi/abs/10.1145/3394171.3413734)]**3DBooSTeR: 3D Body Shape and Texture Recovery.**
*Alexandre Saint, Anis Kacem, Kseniya Cherenkova, Djamila Aouada.*
ECCV 2020 Workshop. [[PDF](https://arxiv.org/abs/2010.12670)]**BCNet: Learning Body and Cloth Shape from A Single Image.**
*Boyi Jiang, Juyong Zhang, Yang Hong, Jinhao Luo, Ligang Liu, Hujun Bao.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2004.00214)]**MonoPort: Monocular Real-Time Volumetric Performance Capture.**
*Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.13988)] [[Code](https://github.com/Project-Splinter/MonoPort)]**3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning.**
*Xiangyu Xu, Hao Chen, Francesc Moreno-Noguer, Laszlo A. Jeni, Fernando De la Torre.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.13666)] [[Project](https://sites.google.com/view/xiangyuxu/3d_eccv20)]**BLSM: A Bone-Level Skinned Model of the Human Mesh.**
*Haoyang Wang, Riza Alp Güler, Iasonas Kokkinos, George Papandreou, Stefanos Zafeiriou.*
ECCV 2020. [[PDF](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123500001.pdf)] [[Supplement](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123500001-supp.zip)]**STAR: Sparse Trained Articulated Human Body Regressor.**
*Ahmed A. A. Osman, Timo Bolkart, Michael J. Black.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2008.08535)] [[Project](http://star.is.tue.mpg.de/)]**I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image.**
*Gyeongsik Moon, Kyoung Mu Lee.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2008.03713)] [[Code](https://github.com/mks0601/I2L-MeshNet_RELEASE)]**Appearance Consensus Driven Self-Supervised Human Mesh Recovery.**
*Jogendra Nath Kundu, Mugalodi Rakesh, Varun Jampani, Rahul Mysore Venkatesh, R. Venkatesh Babu.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2008.01341)]**TexMesh: Reconstructing Detailed Human Texture and Geometry from RGB-D Video.**
*Tiancheng Zhi, Christoph Lassner, Tony Tung, Carsten Stoll, Srinivasa G. Narasimhan, Minh Vo.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2008.00158)]**NormalGAN: Learning Detailed 3D Human from a Single RGB-D Image.**
*Lizhen Wang, Xiaochen Zhao, Tao Yu, Songtao Wang, [Yebin Liu](http://liuyebin.com/).*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.15340)] [[Project](http://www.liuyebin.com/NormalGan/normalgan.html)]**Reconstructing NBA Players.**
*[Luyang Zhu](https://homes.cs.washington.edu/~lyzhu/), [Konstantinos Rematas](http://www.krematas.com/), [Brian Curless](https://homes.cs.washington.edu/~curless/), [Steve Seitz](https://homes.cs.washington.edu/~seitz/), [Ira Kemelmacher-Shlizerman](https://sites.google.com/view/irakemelmacher/home).*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.13303)] [[Code](https://github.com/luyangzhu/NBA-Players)] [[Project](http://grail.cs.washington.edu/projects/nba_players/)]**RobustFusion: Human Volumetric Capture with Data-driven Visual Cues using a RGBD Camera.**
*Zhuo Su, [Lan Xu](https://www.xu-lan.com/), Zerong Zheng, Tao Yu, Yebin Liu, Lu Fang.*
ECCV 2020. [[PDF](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123490239.pdf)]**IPNet: Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction.**
*Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, [Gerard Pons-Moll](http://virtualhumans.mpi-inf.mpg.de/publications.html).*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.11432)] [[Project](http://virtualhumans.mpi-inf.mpg.de/ipnet)] [[Code](https://github.com/bharat-b7/IPNet)]**Neural Articulated Shape Approximation.**
*Boyang Deng, JP Lewis, Timothy Jeruzalski, [Gerard Pons-Moll](http://virtualhumans.mpi-inf.mpg.de/publications.html), Geoffrey Hinton, Mohammad Norouzi, Andrea Tagliasacchi.*
ECCV 2020. [[PDF](http://virtualhumans.mpi-inf.mpg.de/papers/NASA20/NASA.pdf)]**SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing.**
*Garvita Tiwari, Bharat Lal Bhatnagar, Tony Tung, [Gerard Pons-Moll](http://virtualhumans.mpi-inf.mpg.de/publications.html).*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.11610)] [[Data](https://nextcloud.mpi-klsb.mpg.de/index.php/s/nx6wK6BJFZCTF8C)] [[Code](https://github.com/garvita-tiwari/sizer)] [[Project](https://virtualhumans.mpi-inf.mpg.de/sizer/)]**TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style.**
*Chaitanya Patel, Zhouyingcheng Liao, Gerard Pons-Moll.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2003.04583)] [[Code](https://github.com/chaitanya100100/TailorNet)] [[Project](https://virtualhumans.mpi-inf.mpg.de/tailornet/)] [[Data](https://github.com/zycliao/TailorNet_dataset)]**Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction.**
*Tong He, John Collomosse, Hailin Jin, Stefano Soatto.*
NeurIPS 2020. [[PDF](https://arxiv.org/abs/2006.08072)] [[Code](https://github.com/simpleig/Geo-PIFu)]**Self-Supervised Human Depth Estimation from Monocular Videos.**
*Feitong Tan, Hao Zhu, Zhaopeng Cui, Siyu Zhu, Marc Pollefeys, Ping Tan.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2005.03358)]**ARCH: Animatable Reconstruction of Clothed Humans.**
*Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2004.04572)]**pix2surf: Learning to Transfer Texture from Clothing Images to 3D Humans.**
*[Aymen Mir](https://virtualhumans.mpi-inf.mpg.de/people/Mir.html), Thiemo Alldieck, Gerard Pons-Moll.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2003.02050)] [[Code](https://github.com/aymenmir1/pix2surf)]**PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization.**
*[Shunsuke Saito](http://www-scf.usc.edu/~saitos/), [Tomas Simon](https://scholar.google.com/citations?user=7aabHgsAAAAJ), [Jason Saragih](https://scholar.google.com/citations?hl=en&user=ss-IvjMAAAAJ), [Hanbyul Joo](https://jhugestar.github.io/).*
CVPR 2020. [[PDF](https://arxiv.org/abs/2004.00452)] [[Project](https://shunsukesaito.github.io/PIFuHD)] [[Code](https://github.com/facebookresearch/pifuhd)]**DeepCap: Monocular Human Performance Capture Using Weak Supervision.**
*Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt.*
CVPR 2020. [[PDF](https://gvv.mpi-inf.mpg.de/projects/2020-cvpr-deepcap/data/paper.pdf)] [[Project](https://gvv.mpi-inf.mpg.de/projects/2020-cvpr-deepcap/)]**EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera.**
*Lan Xu, Weipeng Xu, Vladislav Golyanik, Marc Habermann, Lu Fang, Christian Theobalt.*
CVPR 2020. [[PDF](https://gvv.mpi-inf.mpg.de/projects/2020-cvpr-eventcap/data/paper.pdf)] [[Project](https://gvv.mpi-inf.mpg.de/projects/2020-cvpr-eventcap/)]**DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data.**
*Aljaž Božič, Michael Zollhöfer, Christian Theobalt, Matthias Nießner.*
CVPR 2020. [[PDF](https://arxiv.org/abs/1912.04302)] [[Project](https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_3187203)] [[Code](https://github.com/AljazBozic/DeepDeform)] [[DeepDeform Benchmark](http://kaldir.vc.in.tum.de/deepdeform_benchmark)]**Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion.**
*Julian Chibane, [Thiemo Alldieck](https://graphics.tu-bs.de/people/alldieck), Gerard Pons-Moll.*
CVPR 2020. [[PDF](https://virtualhumans.mpi-inf.mpg.de/papers/chibane20ifnet/chibane20ifnet.pdf)]**TetraTSDF: 3D Human Reconstruction from A Single Image with A Tetrahedral Outer Shell.**
*Hayato Onizuka, Zehra Hayirci, Diego Thomas, Akihiro Sugimoto, Hideaki Uchiyama, Rin-ichiro Taniguchi.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2004.10534)] [[Code](https://github.com/diegothomas/TetraTSDF)]**Multi-View Consistency Loss for Improved Single-Image 3D Reconstruction of Clothed People.**
*Akin Caliskan, Adrian Hilton.*
ACCV 2020. [[PDF](https://arxiv.org/abs/2009.14162)] [[Code](https://github.com/akcalakcal/Multi_View_Consistent_Single_Image_3D_Human_Reconstruction)]**PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction.**
*Zerong Zheng, Tao Yu, Yebin Liu, Qionghai Dai.*
TPAMI 2020. [[PDF](http://www.liuyebin.com/pamir/assets/revised_paper.pdf)] [[Project](http://www.liuyebin.com/pamir/pamir.html)]**MulayCap: Multi-layer Human Performance Capture Using A Monocular Video Camera.**
*Zhaoqi Su, Weilin Wan, [Tao Yu](https://ytrock.com/), [Lingjie Liu](https://lingjie0206.github.io/), Lu Fang, Wenping Wang and Yebin Liu.*
TVCG 2020. [[PDF](http://www.liuyebin.com/MulayCap/MulayCap_files/MulayCap.pdf)] [[Project](http://www.liuyebin.com/MulayCap/MulayCap.html)]**Disentangled Human Body Embedding Based on Deep Hierarchical Neural Network.**
*Boyi Jiang, Juyong Zhang, Jianfei Cai, Jianmin Zheng.*
TVCG 2020. [[PDF](https://arxiv.org/abs/1905.05622)]**SparseFusion: Dynamic Human Avatar Modeling from Sparse RGBD Images.**
*Xinxin Zuo, Sen Wang, Jiangbin Zheng, Weiwei Yu, Minglun Gong, Ruigang Yang, Li Cheng.*
TMM 2020. [[PDF](https://arxiv.org/abs/2006.03630)]**UnstructuredFusion: Real-time 4D Geometry and Texture Reconstruction using Commercial RGBD Cameras.**
*Lan Xu, Zhuo Su, Lei Han, Tao Yu, Yebin Liu, Lu Fang.*
TPAMI 2019. [[PDF](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8708933)]**PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization.**
*Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, Hao Li.*
ICCV 2019. [[PDF](https://arxiv.org/pdf/1905.05172.pdf)] [[Project](https://shunsukesaito.github.io/PIFu)] [[Code](https://github.com/shunsukesaito/PIFu)]**Multi-Garment Net: Learning to Dress 3D People from Images.**
*Bharat Lal Bhatnagar, Garvita Tiwari, Christian Theobalt, Gerard Pons-Moll.*
ICCV 2019. [[Code](https://github.com/bharat-b7/MultiGarmentNetwork)] [[PDF](https://virtualhumans.mpi-inf.mpg.de/papers/bhatnagar2019mgn/bhatnagar2019mgn.pdf)]**Learning to Reconstruct People in Clothing from a Single RGB Camera.**
*Thiemo Alldieck, Marcus Magnor, Bharat Lal Bhatnagar, Christian Theobalt, Gerard Pons-Moll.*
ICCV 2019. [[PDF](https://arxiv.org/abs/1903.05885)] [[Code](https://github.com/thmoa/octopus)]**A Neural Network for Detailed Human Depth Estimation From a Single Image.**
*Sicong Tang, Feitong Tan, Kelvin Cheng, Zhaoyang Li, Siyu Zhu, Ping Tan.*
ICCV 2019. [[PDF](http://openaccess.thecvf.com/content_ICCV_2019/papers/Tang_A_Neural_Network_for_Detailed_Human_Depth_Estimation_From_a_ICCV_2019_paper.pdf)]**TexturePose: Supervising Human Mesh Estimation with Texture Consistency.**
*[Georgios Pavlakos](https://www.seas.upenn.edu/~pavlakos/), [Nikos Kolotouros](https://www.seas.upenn.edu/~nkolot/), [Kostas Daniilidis](http://www.cis.upenn.edu/~kostas/).*
ICCV 2019. [[PDF](https://arxiv.org/abs/1910.11322)] [[Project](https://www.seas.upenn.edu/~pavlakos/projects/texturepose/)] [[Code](https://github.com/geopavlakos/TexturePose)]**3DPeople: Modeling the Geometry of Dressed Humans.**
*Albert Pumarola, Jordi Sanchez, Gary P. T. Choi, Alberto Sanfeliu, Francesc Moreno-Noguer.*
ICCV 2019. [[PDF](https://arxiv.org/abs/1904.04571)] [[3DPeople-Dataset](https://github.com/albertpumarola/3DPeople-Dataset)]**DeepHuman: 3D Human Reconstruction from a Single Image.**
*Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, Yebin Liu.*
ICCV 2019. [[PDF](https://arxiv.org/abs/1904.04571)] [[Project](http://www.liuyebin.com/deephuman/deephuman.html)]**Tex2Shape: Detailed Full Human Body Geometry From a Single Image.**
*Thiemo Alldieck, Gerard Pons-Moll, Christian Theobalt, Marcus Magnor.*
CVPR 2019. [[PDF](https://arxiv.org/abs/1903.06473)] [[Code](https://github.com/thmoa/tex2shape)]**SimulCap: Single-View Human Performance Capture with Cloth Simulation.**
*Tao Yu, Zerong Zheng, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Gerard Pons-moll, Yebin Liu.*
CVPR 2019. [[PDF](http://www.liuyebin.com/simulcap/assets/SimulCap.pdf)] [[Project](http://www.liuyebin.com/simulcap/simulcap.html)]**SiCloPe: Silhouette-Based Clothed People.**
*Ryota Natsume, [Shunsuke Saito](http://www-scf.usc.edu/~saitos/), Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima.*
CVPR 2019. [[PDF](http://openaccess.thecvf.com/content_CVPR_2019/papers/Natsume_SiCloPe_Silhouette-Based_Clothed_People_CVPR_2019_paper.pdf)]**Textured Neural Avatars.**
*Aliaksandra Shysheya, Egor Zakharov, Kara-Ali Aliev, Renat Bashirov, Egor Burkov, Karim Iskakov, Aleksei Ivakhnenko, Yury Malkov, Igor Pasechnik, Dmitry Ulyanov, Alexander Vakhitov, Victor Lempitsky.*
CVPR 2019 (oral). [[PDF](https://arxiv.org/abs/1905.08776)] [[Project](https://saic-violet.github.io/texturedavatar/)]**MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video.**
*[Donglai Xiang](https://xiangdonglai.github.io/), Fabian Prada, Chenglei Wu, Jessica Hodgins.*
3DV 2020 (Best Paper Honorable Mention, Oral). [[PDF](https://arxiv.org/abs/2009.10711)]**360-Degree Textures of People in Clothing from a Single Image.**
*Verica Lazova, Eldar Insafutdinov, Gerard Pons-Moll.*
3DV 2019. [[PDF](https://arxiv.org/pdf/1908.07117.pdf)] [[Project](http://virtualhumans.mpi-inf.mpg.de/360tex/)]**Detailed Human Avatars from Monocular Video.**
*Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, Gerard Pons-Moll.*
3DV 2018. [[PDF](https://arxiv.org/abs/1808.01338)] [[Code](https://github.com/thmoa/semantic_human_texture_stitching)]**BodyFusion: Real-time Capture of Human Motion and Surface Geometry Using a Single Depth Camera.**
*[Tao Yu](https://ytrock.com/), [Kaiwen Guo](http://www.guokaiwen.com/), [Feng Xu](http://feng-xu.com/), Yuan Dong, Zhaoqi Su, Jianhui Zhao, Jianguo Li, Qionghai Dai, Yebin Liu.*
ICCV 2017. [[PDF](http://liuyebin.com/bodyfusion/bodyfusion_files/BdyFu_ICC7.pdf)]## Cloth Modelling, Draping, Simulation, and Dressing
**4D-DRESS: A 4D Dataset of Real-world Human Clothing with Semantic Annotations.**
*[Wenbo Wang](https://wenbwa.github.io), [Hsuan-I Ho](https://ait.ethz.ch/people/hohs), [Chen Guo](https://ait.ethz.ch/people/cheguo), [Boxiang Rong](https://ribosome-rbx.github.io), [Artur Grigorev](https://ait.ethz.ch/people/agrigorev), [Jie Song](https://ait.ethz.ch/people/song), [Juan Jose Zarate](https://ait.ethz.ch/people/jzarate), [Otmar Hilliges](https://ait.ethz.ch/people/hilliges).*
CVPR 2024 (Highlight). [[PDF](https://arxiv.org/abs/2404.18630)] [[Project](https://eth-ait.github.io/4d-dress/)] [[Data](https://4d-dress.ait.ethz.ch/)] [[Code](https://github.com/eth-ait/4d-dress)]**A Generative Multi-Resolution Pyramid and Normal-Conditioning 3D Cloth Draping.**
*Hunor Laczkó, Meysam Madadi, Sergio Escalera, Jordi Gonzalez.*
WACV 2024. [[PDF](https://arxiv.org/abs/2311.02700)]**Towards Multi-Layered 3D Garments Animation.**
*Yidi Shao, Chen Change Loy, Bo Dai.*
ICCV 2023. [[PDF](https://arxiv.org/pdf/2305.10418.pdf)] [[Project](https://mmlab-ntu.github.io/project/layersnet/index.html)]**REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos.**
*[Lingteng Qiu](https://lingtengqiu.github.io/), [Guanying Chen](https://guanyingc.github.io/), Jiapeng Zhou, [Mutian Xu](https://mutianxu.github.io/), Junle Wang, [Xiaoguang Han](https://mypage.cuhk.edu.cn/academics/hanxiaoguang/).*
CVPR 2023. [[PDF](http://arxiv.org/abs/2305.14236)] [[Project](https://lingtengqiu.github.io/2023/REC-MV/)] [[Code](https://github.com/GAP-LAB-CUHK-SZ/REC-MV)]**HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics.**
*[Artur Grigorev](https://dolorousrtur.github.io/), [Bernhard Thomaszewski](https://n.ethz.ch/~bthomasz/index.html), [Michael J. Black](https://ps.is.mpg.de/~black), [Otmar Hilliges](https://ait.ethz.ch/people/hilliges/).*
CVPR 2023. [[PDF](https://arxiv.org/abs/2212.07242)] [[Project](https://dolorousrtur.github.io/hood/)] [[Code](https://github.com/Dolorousrtur/HOOD)]**Deep Deformation Detail Synthesis for Thin Shell Models.**
*Lan Chen, Lin Gao, Jie Yang, Shibiao Xu, Juntao Ye, Xiaopeng Zhang, Yu-Kun Lai.*
CGF 2023. [[PDF](https://arxiv.org/abs/2102.11541)]**Motion Guided Deep Dynamic 3D Garments.**
*[Meng Zhang](https://mengzephyr.com/), [Duygu Ceylan](https://research.adobe.com/person/duygu-ceylan/), [Niloy J. Mitra](http://www0.cs.ucl.ac.uk/staff/n.mitra/).*
SIGGRAPH Asia 2022. [[PDF](https://arxiv.org/pdf/2209.11449.pdf)] [[Project](http://geometry.cs.ucl.ac.uk/projects/2022/MotionDeepGarment/)]**Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks.**
*Xiaoyu Pan, Jiaming Mai, Xinwei Jiang, Dongxue Tang, Jingxiang Li, Tianjia Shao, Kun Zhou, Xiaogang Jin, Dinesh Manocha.*
SIGGRAPH 2022. [[PDF](https://arxiv.org/abs/2205.01355)] [[Code](https://github.com/non-void/VirtualBones)]**DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact.**
*[Yifei Li](https://people.csail.mit.edu/liyifei/), [Tao Du](https://people.csail.mit.edu/taodu/), [Kui Wu](https://people.csail.mit.edu/kuiwu/), [Jie Xu](http://people.csail.mit.edu/jiex/), [Wojciech Matusik](https://cdfg.csail.mit.edu/wojciech).*
TOG 2022. [[PDF](https://arxiv.org/abs/2106.05306)] [[Project](https://people.csail.mit.edu/liyifei/publication/diffcloth-differentiable-cloth-simulator/)] [[Code](https://github.com/omegaiota/DiffCloth)]**DIG: Draping Implicit Garment over the Human Body.**
*Ren Li, Benoît Guillard, Edoardo Remelli, Pascal Fua.*
ACCV 2022. [[PDF](https://arxiv.org/abs/2209.10845)]**SNUG: Self-Supervised Neural Dynamic Garments.**
*Igor Santesteban, Miguel A. Otaduy, Dan Casas.*
CVPR 2022 (Oral). [[PDF](https://arxiv.org/abs/2204.02219)] [[Project](http://mslab.es/projects/SNUG/)] [[Code](https://github.com/isantesteban/snug)]**Registering Explicit to Implicit: Towards High-Fidelity Garment mesh Reconstruction from Single Images.**
*Heming Zhu, Lingteng Qiu, Yuda Qiu, Xiaoguang Han.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.15007)] [[Project](https://kv2000.github.io/2022/03/28/reef/)]**Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On.**
*Igor Santesteban, Nils Thuerey, Miguel A. Otaduy, Dan Casas.*
CVPR 2021. [[PDF](http://arxiv.org/abs/2105.06462)]**Dynamic Neural Garments.**
*[Meng Zhang](https://mengzephyr.com/), [Duygu Ceylan](http://www.duygu-ceylan.com/), [Tuanfeng Wang](http://geometry.cs.ucl.ac.uk/tuanfeng/), [Niloy J. Mitra](http://www0.cs.ucl.ac.uk/staff/n.mitra/).*
TOG 2021. [[PDF](https://arxiv.org/)] [[Project](http://geometry.cs.ucl.ac.uk/projects/2021/DynamicNeuralGarments/)] [[Code](https://github.com/MengZephyr/DynamicNeuralGarments)]**PBNS: Physically Based Neural Simulation for Unsupervised Outfit Pose Space Deformation.**
*Hugo Bertiche, Meysam Madadi, Sergio Escalera.*
SIGGRAPH Asia 2021. [[PDF](https://arxiv.org/abs/2012.11310)] [[Project](https://hbertiche.github.io/PBNS/)] [[Code](https://github.com/hbertiche/PBNS)]**Deep Detail Enhancement for Any Garment.**
*Meng Zhang, Tuanfeng Wang, Duygu Ceylan, Niloy J. Mitra.*
Eurographics 2021. [[PDF](https://arxiv.org/pdf/2008.04367.pdf)]**Deep Physics-aware Inference of Cloth Deformation for Monocular Human Performance Capture.**
*Yue Li, Marc Habermann, Bernhard Thomaszewski, Stelian Coros, Thabo Beeler, Christian Theobalt.*
3DV 2021. [[PDF](https://arxiv.org/abs/2011.12866)]**Neural Non-Rigid Tracking.**
*Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Angela Dai, Justus Thies, Matthias Nießner.*
NeurIPS 2020. [[PDF](https://arxiv.org/abs/2006.13240)]**SIZER: A Dataset and Model for Parsing 3D Clothing and Learning Size Sensitive 3D Clothing.**
*Garvita Tiwari, Bharat Lal Bhatnagar, Tony Tung, Gerard Pons-Moll.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.11610)]**GAN-based Garment Generation Using Sewing Pattern Images.**
*Yu Shen, Junbang Liang, Ming C. Lin.*
ECCV 2020. [[PDF](http://cs.umd.edu/~liangjb/docs/ICCV2019.pdf)] [[Project](https://gamma.umd.edu/researchdirections/virtualtryon/garmentgeneration)] [[Code](https://github.com/williamljb/HumanMultiView)]**Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images.**
*Heming Zhu, Yu Cao, Hang Jin, Weikai Chen, Dong Du, Zhangye Wang, Shuguang Cui, Xiaoguang Han.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2003.12753)] [[Project](https://kv2000.github.io/2020/03/25/deepFashion3DRevisited/)]**GarNet++: Improving Fast and Accurate Static 3D Cloth Draping by Curvature Loss.**
*Erhan Gundogdu, Victor Constantin, Shaifali Parashar, Amrollah Seifoddini, Minh Dang, Mathieu Salzmann, Pascal Fua.*
TPAMI 2020. [[PDF](https://arxiv.org/abs/2007.10867)]**Homogenized Yarn-Level Cloth.**
*[Georg Sperl](https://pub.ist.ac.at/~gsperl/), Rahul Narain, Chris Wojtan.*
SIGGRAPH (TOG) 2020. [[PDF](http://pub.ist.ac.at/group_wojtan/projects/2020_Sperl_HYLC/2020_HYLC_paper.pdf)] [[Project](http://visualcomputing.ist.ac.at/publications/2020/HYLC/)] [[Data & Code](http://pub.ist.ac.at/group_wojtan/projects/2020_Sperl_HYLC/2020_HYLC_data_code.zip)]**CLOTH3D: Clothed 3D Humans.**
*Hugo Bertiche, Meysam Madadi, Sergio Escalera.*
ECCV 2020. [[PDF](https://arxiv.org/abs/1912.02792)]**CAPE: Learning to Dress 3D People in Generative Clothing.**
*[Qianli Ma](https://ps.is.tue.mpg.de/person/qma), Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, [Michael J. Black](https://ps.is.tuebingen.mpg.de/person/black).*
CVPR 2020. [[PDF](https://arxiv.org/abs/1907.13615v2)] [[Project](http://ps.is.mpg.de/publications/cape-cvpr-20)]**The Virtual Tailor: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style.**
*Chaitanya Patel, Zhouyingcheng Liao, Gerard Pons-Moll.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2003.04583)]**Learning to Transfer Texture from Clothing Images to 3D Humans.**
*Aymen Mir, Thiemo Alldieck, Gerard Pons-Moll.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2003.02050)] [[Code](https://github.com/aymenmir1/pix2surf)]**GarNet: A Two-Stream Network for Fast and Accurate 3D Cloth Draping.**
*[Erhan Gundogdu](https://egundogdu.github.io/), Victor Constantin, Amrollah Seifoddini, Minh Dang, Mathieu Salzmann, Pascal Fua.*
ICCV 2019. [[PDF](https://arxiv.org/abs/1811.10983)] [[Supplementary Material](https://www.epfl.ch/labs/cvlab/wp-content/uploads/2019/04/GarNet_supplementary.pdf)] [[Project](https://cvlab.epfl.ch/research/garment-simulation/garnet/)] [[Dataset](https://drive.switch.ch/index.php/s/7mAk9SoZ7J4uokt)]**Multi-Garment Net: Learning to Dress 3D People from Images.**
*Bharat Lal Bhatnagar, Garvita Tiwari, Christian Theobalt, [Gerard Pons-Moll](https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/people/gerard-pons-moll/)(REAL VIRTUAL HUMANS, MPII).*
ICCV 2019. [[PDF](https://arxiv.org/abs/1908.06903)]**DRAPE: DRessing Any PErson.**
*Peng Guan, Loretta Reiss, David A. Hirshberg, Alexander Weiss, Michael J. Black.*
TOG 2012. [[PDF](https://dl.acm.org/citation.cfm?doid=2185520.2185531)] [[Project](https://ps.is.tue.mpg.de/research_projects/drape-dressing-any-person)]## Human Image and Video Generation
**Gaussian Shell Maps for Efficient 3D Human Generation.**
*Rameen Abdal, Wang Yifan, Zifan Shi, Yinghao Xu, Ryan Po, Zhengfei Kuang, Qifeng Chen, Dit-Yan Yeung, Gordon Wetzstein.*
CVPR 2024. [[PDF](http://arxiv.org/abs/2311.17857)] [[Project](https://rameenabdal.github.io/GaussianShellMaps/)]**HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion.**
*Xian Liu, Jian Ren, Aliaksandr Siarohin, Ivan Skorokhodov, Yanyu Li, Dahua Lin, Xihui Liu, Ziwei Liu, Sergey Tulyakov.*
ICLR 2024. [[PDF](http://arxiv.org/abs/2310.08579)] [[Project](https://snap-research.github.io/HyperHuman/)]**VeRi3D: Generative Vertex-based Radiance Fields for 3D Controllable Human Image Synthesis.**
*Xinya Chen, Jiaxin Huang, Yanrui Bin, Lu Yu, Yiyi Liao.*
ICCV 2023. [[PDF](http://arxiv.org/abs/2309.04800)]**UnitedHuman: Harnessing Multi-Source Data for High-Resolution Human Generation.**
*Jianglin Fu, Shikai Li, Yuming Jiang, Kwan-Yee Lin, Wayne Wu, Ziwei Liu.*
ICCV 2023. [[PDF](https://arxiv.org/abs/2309.14335)] [[Project](https://unitedhuman.github.io/)] [[Github](https://github.com/UnitedHuman/UnitedHuman)]**Text2Performer: Text-Driven Human Video Generation.**
*Yuming Jiang, Shuai Yang, Tong Liang Koh, Wayne Wu, Chen Change Loy, Ziwei Liu.*
ICCV 2023. [[PDF](https://arxiv.org/pdf/2304.08483.pdf)] [[Project](https://yumingj.github.io/projects/Text2Performer.html)] [[Code](https://github.com/yumingj/Text2Performer)]**Text-guided 3D Human Generation from 2D Collections.**
*Tsu-Jui Fu, Wenhan Xiong, Yixin Nie, Jingyu Liu, Barlas Oğuz, William Yang Wang.*
EMNLP 2023 (Findings). [[PDF](http://arxiv.org/abs/2305.14312)] [[Project](https://text-3dh.github.io/)]**Cross Attention Based Style Distribution for Controllable Person Image Synthesis.**
*Xinyue Zhou, Mingyu Yin, Xinyuan Chen, Li Sun, Changxin Gao, Qingli Li.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2208.00712)]**StyleGAN-Human: A Data-Centric Odyssey of Human Generation.**
*Jianglin Fu, Shikai Li, [Yuming Jiang](https://yumingj.github.io/), [Kwan-Yee Lin](https://kwanyeelin.github.io/), [Chen Qian](https://scholar.google.com/citations?user=AerkT0YAAAAJ&hl=zh-CN), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/), [Wayne Wu](https://dblp.org/pid/50/8731.html), [Ziwei Liu](https://liuziwei7.github.io/).*
ECCV 2022. [[PDF](https://arxiv.org/abs/2204.11823)] [[Video](https://youtu.be/nIrb9hwsdcI)] [[Project](https://stylegan-human.github.io/)] [[Colab Demo](https://colab.research.google.com/drive/1sgxoDM55iM07FS54vz9ALg1XckiYA2On)] [[Hugging Face Demo](https://huggingface.co/spaces/hysts/StyleGAN-Human)]**Text2Human: Text-Driven Controllable Human Image Generation.**
*Yuming Jiang, Shuai Yang, Haonan Qiu, Wayne Wu, Chen Change Loy, Ziwei Liu.*
SIGGRAPH 2022. [[PDF](https://arxiv.org/abs/2205.15996)] [[Code](https://github.com/yumingj/Text2Human)] [[Project](https://yumingj.github.io/projects/Text2Human.html)]**Self-Supervised Correlation Mining Network for Person Image Generation.**
*Zijian Wang, Xingqun Qi, Kun Yuan, Muyi Sun.*
CVPR 2022. [[PDF](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Self-Supervised_Correlation_Mining_Network_for_Person_Image_Generation_CVPR_2022_paper.html)]**BodyGAN: General-Purpose Controllable Neural Human Body Generation.**
*Chaojie Yang, Hanhui Li, Shengjie Wu, Shengkai Zhang, Haonan Yan, Nianhong Jiao, Jie Tang, Runnan Zhou, Xiaodan Liang, Tianxiang Zheng.*
CVPR 2022. [[PDF](https://openaccess.thecvf.com/content/CVPR2022/html/Yang_BodyGAN_General-Purpose_Controllable_Neural_Human_Body_Generation_CVPR_2022_paper.html)]**InsetGAN for Full-Body Image Generation.**
*[Anna Frühstück](https://afruehstueck.github.io/), [Krishna Kumar Singh](http://krsingh.cs.ucdavis.edu/), [Eli Shechtman](https://research.adobe.com/person/eli-shechtman/), [Niloy J. Mitra](https://research.adobe.com/person/niloy-mitra/), [Peter Wonka](http://peterwonka.net/), [Jingwan Lu](https://research.adobe.com/person/jingwan-lu/).*
CVPR 2022. [[PDF](https://arxiv.org/abs/2112.07200)] [[Project](http://afruehstueck.github.io/insetgan)]**Neural Texture Extraction and Distribution for Controllable Person Image Synthesis.**
*Yurui Ren, Xiaoqing Fan, Ge Li, Shan Liu, Thomas H. Li.*
CVPR 2022 (oral). [[PDF](https://arxiv.org/abs/2203.10496)] [[Code](https://github.com/RenYurui/Neural-Texture-Extraction-Distribution)]**Exploring Dual-task Correlation for Pose Guided Person Image Generation.**
*Pengze Zhang, Lingxiao Yang, Jianhuang Lai, Xiaohua Xie.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.02910)]**Structure-Transformed Texture-Enhanced Network for Person Image Synthesis.**
*Munan Xu, Yuanqi Chen, Shan Liu, Thomas H. Li, Ge Li.*
ICCV 2021. [[PDF](https://openaccess.thecvf.com/content/ICCV2021/html/Xu_Structure-Transformed_Texture-Enhanced_Network_for_Person_Image_Synthesis_ICCV_2021_paper.html)]**PISE: Person Image Synthesis and Editing with Decoupled GAN.**
*[Jinsong Zhang](https://zhangjinso.github.io/), [Kun Li](http://cic.tju.edu.cn/faculty/likun/), [Yu-Kun Lai](http://users.cs.cf.ac.uk/Yukun.Lai/), [Jingyu Yang](http://tju.iirlab.org/).*
CVPR 2021. [[PDF](https://arxiv.org/abs/2103.04023)] [[Project](http://cic.tju.edu.cn/faculty/likun/projects/PISE/)] [[Code](https://github.com/Zhangjinso/PISE)]**XingGAN for Person Image Generation.**
*[Hao Tang](http://disi.unitn.it/~hao.tang/), Song Bai, Li Zhang, Philip H. S. Torr, Nicu Sebe.*
ECCV 2020. [[Code](https://github.com/Ha0Tang/XingGAN)]**Progressive Pose Attention Transfer for Person Image Generation.**
*Zhen Zhu, Tengteng Huang, Baoguang Shi, Miao Yu, Bofei Wang, Xiang Bai.*
CVPR 2019 (oral). [[PDF](https://arxiv.org/abs/1904.03349)] [[Code](https://github.com/tengteng95/Pose-Transfer.git)]**Generating Person Images with Appearance-aware Pose Stylizer.**
*Siyu Huang, Haoyi Xiong, Zhi-Qi Cheng, Qingzhong Wang, Xingran Zhou, Bihan Wen, Jun Huan, Dejing Dou.*
IJCAI 2020. [[PDF](https://arxiv.org/abs/2007.09077)] [[Code](https://github.com/siyuhuang/PoseStylizer)]## Image-Based Virtual Try-On
[[Awesome Virtual Try-on (VTON)](https://github.com/minar09/awesome-virtual-try-on)]
**FashionTex: Controllable Virtual Try-on with Text and Texture.**
*Anran Lin, Nanxuan Zhao, Shuliang Ning, Yuda Qiu, Baoyuan Wang, Xiaoguang Han.*
SIGGRAPH 2023. [[PDF](http://arxiv.org/abs/2305.04451)]**TryOnDiffusion: A Tale of Two UNets.**
*[Luyang Zhu](https://homes.cs.washington.edu/~lyzhu/), [Dawei Yang](http://www-personal.umich.edu/~ydawei/), [Tyler Zhu](https://research.google/people/TylerZhu/), [Fitsum Reda](https://fitsumreda.github.io/), [William Chan](http://williamchan.ca/), [Chitwan Saharia](https://scholar.google.co.in/citations?user=JApued4AAAAJ&hl=en), [Mohammad Norouzi](https://norouzi.github.io/), [Ira Kemelmacher-Shlizerman](https://www.irakemelmacher.com/).*
CVPR 2023. [[PDF](https://arxiv.org/abs/2306.08276)] [[Project](https://tryondiffusion.github.io/)]**High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions.**
*Sangyun Lee, Gyojung Gu, Sunghyun Park, Seunghwan Choi, Jaegul Choo.*
ECCV 2022. [[PDF](https://arxiv.org/pdf/2206.14180.pdf)] [[Project](https://koo616.github.io/HR-VITON/)] [[Code](https://github.com/sangyun884/HR-VITON)]**Single Stage Virtual Try-on via Deformable Attention Flows.**
*Shuai Bai, Huiling Zhou, Zhikang Li, Chang Zhou, Hongxia Yang.*
ECCV 2022. [[PDF](https://arxiv.org/abs/2207.09161)]**MGN: A Regional Mask Guided Network for Parser-free Virtual Try-on.**
*Chao Lin, Zhao Li, Sheng Zhou, Shichang Hu, Jialun Zhang, Linhao Luo, Jiarun Zhang, Longtao Huang, Yuan He.*
IJCAI 2022. [[PDF](https://arxiv.org/abs/2204.11258)]**ClothFormer: Taming Video Virtual Try-on in All Module.**
*Jianbin Jiang, Tan Wang, He Yan, Junhui Liu.*
CVPR 2022 (oral). [[PDF](https://arxiv.org/abs/2204.07154)] [[Project](https://arxiv.org/abs/2204.12151)]**Style-Based Global Appearance Flow for Virtual Try-On.**
*Sen He, Yi-Zhe Song, Tao Xiang.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2204.01046)]**Dressing in the Wild by Watching Dance Videos.**
*Xin Dong, Fuwei Zhao, Zhenyu Xie, Xijin Zhang, Daniel K. Du, Min Zheng, Xiang Long, Xiaodan Liang, Jianchao Yang.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.15320)] [[Project](https://awesome-wflow.github.io/)]**CIT: Cloth Interactive Transformer for Virtual Try-On.**
*[Bin Ren](https://scholar.google.com/citations?user=Md9maLYAAAAJ&hl=en), Hao Tang, Fanyang Meng, Runwei Ding, Ling Shao, Philip H.S. Torr, Nicu Sebe.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2104.05519)] [[Code](https://github.com/Amazingren/CIT)]**Weakly Supervised High-Fidelity Clothing Model Generation.**
*Ruili Feng, Cheng Ma, Chengji Shen, Xin Gao, Zhenjiang Liu, Xiaobo Li, Kairi Ou, Zhengjun Zha.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2112.07200)]**WAS-VTON: Warping Architecture Search for Virtual Try-on Network.**
*Zhenyu Xie, Xujie Zhang, Fuwei Zhao, Haoye Dong, Michael C. Kampffmeyer, Haonan Yan, Xiaodan Liang.*
ACM MM 2021. [[PDF](https://arxiv.org/abs/2108.00386)]**Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN.**
*Zhenyu Xie, Zaiyu Huang, Fuwei Zhao, Haoye Dong, Michael Kampffmeyer, Xiaodan Liang.*
NeurIPS 2021. [[PDF](https://arxiv.org/abs/2111.10544)] [[Code](https://github.com/xiezhy6/PASTA-GAN)]**ZFlow: Gated Appearance Flow-based Virtual Try-on with 3D Priors.**
*Ayush Chopra, Rishabh Jain, Mayur Hemani, Balaji Krishnamurthy.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2109.07001)]**FashionMirror: Co-Attention Feature-Remapping Virtual Try-On With Sequential Template Poses.**
*Chieh-Yun Chen, Ling Lo, Pin-Jui Huang, Hong-Han Shuai, Wen-Huang Cheng.*
ICCV 2021. [[PDF](https://openaccess.thecvf.com/content/ICCV2021/html/Chen_FashionMirror_Co-Attention_Feature-Remapping_Virtual_Try-On_With_Sequential_Template_Poses_ICCV_2021_paper.html)]**M3D-VTON: A Monocular-to-3D Virtual Try-On Network.**
*Fuwei Zhao, Zhenyu Xie, Michael Kampffmeyer, Haoye Dong, Songfang Han, Tianxiang Zheng, Tao Zhang, Xiaodan Liang.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2108.05126)]**Shape Controllable Virtual Try-on for Underwear Models.**
*Xin Gao, Zhenjiang Liu, Zunlei Feng, Chengji Shen, Kairi Ou, Haihong Tang, Mingli Song.*
ACM MM 2021. [[PDF](https://arxiv.org/abs/2107.13156)]**TryOnGAN: Body-aware Try-on via Layered Interpolation.**
*[Kathleen M Lewis](https://katiemlewis.github.io/), [Srivatsan Varadharajan](https://www.linkedin.com/in/srivatsan-varadharajan-9a570818), [Ira Kemelmacher-Shlizerman](https://www.irakemelmacher.com/).*
TOG 2021. [[PDF](https://arxiv.org/abs/2101.02285)] [[Project](https://tryongan.github.io/tryongan/)] [[Demo](https://tryongan.github.io/tryongan/demo_rewrite.html)]**VOGUE: Try-On by StyleGAN Interpolation Optimization.**
*[Kathleen M Lewis](https://katiemlewis.github.io/), [Srivatsan Varadharajan](https://www.linkedin.com/in/srivatsan-varadharajan-9a570818), [Ira Kemelmacher-Shlizerman](https://sites.google.com/view/irakemelmacher/home).*
CVPR 2021. [[PDF](https://arxiv.org/abs/2101.02285)] [[Project](https://vogue-try-on.github.io/)] [[Code](https://github.com/Charmve/VOGUE-Try-On)]**Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing.**
*Aiyu Cui, Daniel McKee, Svetlana Lazebnik.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2104.07021)] [[Code](https://github.com/cuiaiyu/dressing-in-order)]**Toward Accurate and Realistic Outfits Visualization With Attention to Details.**
*Kedan Li, Min Jin Chong, Jeffrey Zhang, Jingen Liu.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2106.06593)] [[Demo](https://revery.ai/demo.html)]**VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization.**
*Seunghwan Choi, Sunghyun Park, Minsoo Lee, Jaegul Choo.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2103.16874)]**HumanGAN: A Generative Model of Human Images.**
*Kripasindhu Sarkar, Lingjie Liu, Vladislav Golyanik, Christian Theobalt.*
CVPR 2021. [[PDF](https://arxiv.org/pdf/2103.06902.pdf)] [[Project](http://gvv.mpi-inf.mpg.de/projects/HumanGAN/)]**DCTON: Disentangled Cycle Consistency for Highly-realistic Virtual Try-On.**
*Chongjian Ge, Yibing Song, Yuying Ge, Han Yang, Wei Liu, and Ping Luo.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2103.09479)] [[Code](https://github.com/ChongjianGE/DCTON)]**PF-AFN: Parser-Free Virtual Try-on via Distilling Appearance Flows.**
*Yuying Ge, Yibing Song, Ruimao Zhang, Chongjian Ge, Wei Liu, Ping Luo.*
CVPR 2021. [[PDF](https://arxiv.org/abs/2103.04559)] [[Code](https://github.com/geyuying/PF-AFN)]**Template-Free Try-on Image Synthesis via Semantic-guided Optimization.**
*Chien-Lung Chou, Chieh-Yun Chen, Chia-Wei Hsieh, Hong-Han Shuai, Jiaying Liu, Wen-Huang Cheng.*
TNNLS 2021. [[PDF](https://arxiv.org/abs/2102.03503)]**Unpaired Person Image Generation with Semantic Parsing Transformation.**
*Sijie Song, Wei Zhang, Jiaying Liu, Zongming Guo, Tao Mei.*
TPAMI 2020. [[PDF](https://arxiv.org/abs/1912.06324)] [[CVPR 2019](https://arxiv.org/abs/1904.03379)] [[TPAMI 2020](https://ieeexplore.ieee.org/document/9085915/authors#authors)] [[Code](https://github.com/JDAI-CV/Down-to-the-Last-Detail-Virtual-Try-on-with-Detail-Carving)]**Do Not Mask What You Do Not Need to Mask: a Parser-Free Virtual Try-On.**
*Thibaut Issenhuth, Jérémie Mary, Clément Calauzènes.*
ECCV 2020. [[PDF](https://arxiv.org/abs/2007.02721)]**Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On.**
*Raquel Vidaurre, Igor Santesteban, Elena Garces, Dan Casas.*
ACM MM 2020. [[PDF](https://arxiv.org/abs/2009.04592)] [[Project](http://mslab.es/projects/FullyConvolutionalGraphVirtualTryOn)]**Towards Photo-Realistic Virtual Try-On by Adaptively Generating Preserving Image Content.**
*Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, Ping Luo.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2003.05863)] [[Code](https://github.com/switchablenorms/DeepFashion_Try_On)]**PSGAN: Pose and Expression Robust Spatial-Aware GAN for Customizable Makeup Transfer.**
*Wentao Jiang, Si Liu, Chen Gao, Jie Cao, Ran He, Jiashi Feng, Shuicheng Yan.*
CVPR 2020. [[PDF](https://arxiv.org/abs/1909.06956)]**GarmentGAN: Photo-realistic Adversarial Fashion Transfer.**
*Amir Hossein Raffiee, Michael Sollami.*
ICPR 2020. [[PDF](https://arxiv.org/abs/2003.01894)]**SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On.**
*Surgan Jandial, Ayush Chopra, Kumar Ayush, Mayur Hemani, Abhijeet Kumar, Balaji Krishnamurthy.*
WACV 2020. [[PDF](https://arxiv.org/abs/2001.06265)] [[Code](https://github.com/levindabhi/SieveNet)]**TailorGAN: Making User-Defined Fashion Designs.**
*Lele Chen, Justin Tian, Guo Li, Cheng-Haw Wu, Erh-Kan King, Kuan-Ting Chen, Shao-Hang Hsieh.*
WACV 2020. [[PDF](https://arxiv.org/abs/2001.06427)] [[Code](https://github.com/gli-27/TailorGAN)]**ClothFlow: A Flow-Based Model for Clothed Person Generation.**
*Xintong Han, Xiaojun Hu, Weilin Huang, Matthew R. Scott.*
ICCV 2019. [[PDF](https://openaccess.thecvf.com/content_ICCV_2019/html/Han_ClothFlow_A_Flow-Based_Model_for_Clothed_Person_Generation_ICCV_2019_paper.html)]**VTNFP: An Image-Based Virtual Try-On Network With Body and Clothing Feature Preservation.**
*Ruiyun Yu, Xiaoqi Wang, Xiaohui Xie.*
ICCV 2019. [[PDF](https://openaccess.thecvf.com/content_ICCV_2019/html/Yu_VTNFP_An_Image-Based_Virtual_Try-On_Network_With_Body_and_Clothing_ICCV_2019_paper.html)]

**Toward Characteristic-Preserving Image-based Virtual Try-On Network.**
*Bochao Wang, Huabin Zheng, Xiaodan Liang, Yimin Chen, Liang Lin, Meng Yang.*
ECCV 2018. [[PDF](https://arxiv.org/abs/1807.07688)]

## Human Body Reshaping
**Structure-Aware Flow Generation for Human Body Reshaping.**
*[Jianqiang Ren](https://github.com/JianqiangRen), Yuan Yao, Biwen Lei, Miaomiao Cui, Xuansong Xie.*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.04670)] [[Code](https://github.com/JianqiangRen/FlowBasedBodyReshaping)]

**Real-Time Reshaping of Humans.**
*Michal Richter, Kiran Varanasi, Nils Hasler, Christian Theobalt.*
3DIMPVT 2012. [[PDF](https://vcai.mpi-inf.mpg.de/files/3DimPVT/human_reshape.pdf)]

**MovieReshape: Tracking and Reshaping of Humans in Videos.**
*Arjun Jain, Thorsten Thormählen, Hans-Peter Seidel, Christian Theobalt.*
SIGGRAPH Asia 2010. [[PDF](http://www.mpi-inf.mpg.de/resources/MovieReshape/MovieReshape.pdf)] [[Project](https://resources.mpi-inf.mpg.de/MovieReshape/)]

**Parametric Reshaping of Human Bodies in Images.**
*Shizhe Zhou, Hongbo Fu, Ligang Liu, Daniel Cohen-Or, Xiaoguang Han.*
SIGGRAPH 2010. [[PDF](https://dl.acm.org/doi/10.1145/1833349.1778863)]

### Scene context-aware Human Body Generation
**Putting People in Their Place: Affordance-Aware Human Insertion Into Scenes.**
*Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A. Efros, Krishna Kumar Singh.*
CVPR 2023. [[PDF](http://arxiv.org/abs/2304.14406)] [[Project](https://sumith1896.github.io/affordance-insertion/)]

**Generating 3D People in Scenes Without People.**
*Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J. Black, Siyu Tang.*
CVPR 2020. [[PDF](https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_Generating_3D_People_in_Scenes_Without_People_CVPR_2020_paper.html)] [[Code](https://github.com/yz-cnsdqz/PSI-release)]

### Human Mesh Recovery
**GLAMR: Global Occlusion-Aware Human Mesh Recovery with Dynamic Cameras.**
*[Ye Yuan](https://www.ye-yuan.com/), [Umar Iqbal](http://www.umariqbal.info/), [Pavlo Molchanov](https://research.nvidia.com/person/pavlo-molchanov/), [Kris Kitani](http://www.cs.cmu.edu/~kkitani/), [Jan Kautz](https://jankautz.com/).*
CVPR 2022 (Oral). [[PDF](https://arxiv.org/abs/2112.01524)] [[Project](https://nvlabs.github.io/GLAMR)] [[Code](https://github.com/NVlabs/GLAMR)]

**Shapy: Accurate 3D Body Shape Regression Using Metric and Semantic Attributes.**
*Vasileios Choutas, Lea Muller, Chun-Hao P. Huang, Siyu Tang, Dimitrios Tzionas, Michael J. Black.*
CVPR 2022. [[PDF](http://arxiv.org/abs/2206.07036)] [[Project](https://shapy.is.tue.mpg.de/)] [[Code](https://github.com/muelea/shapy)]

**PoseScript: 3D Human Poses from Natural Language.**
*Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-Noguer, Grégory Rogez.*
ECCV 2022. [[PDF](http://arxiv.org/abs/2210.11795)] [[Code](https://github.com/naver/posescript)]

### Human-Centric Perception
**Versatile Multi-Modal Pre-Training for Human-Centric Perception.**
*[Fangzhou Hong](https://hongfz16.github.io/), Liang Pan, Zhongang Cai, [Ziwei Liu](https://liuziwei7.github.io/).*
CVPR 2022. [[PDF](https://arxiv.org/abs/2203.13815)] [[Code](https://github.com/hongfz16/HCMoCo)] [[Project](https://hongfz16.github.io/projects/HCMoCo.html)]

### Challenge and Workshop
- KDD Workshop on Fashion. [[2019]](https://kddfashion2019.mybluemix.net/) [[2018]](https://kddfashion2018.mybluemix.net/) [[2017]](https://kddfashion2017.mybluemix.net/) [[2016]](http://kddfashion2016.mybluemix.net/)
- Workshop on Computer Vision for Fashion, Art and Design. [[CVPR 2020]](https://sites.google.com/view/cvcreative2020) [[ICCV 2019]](https://sites.google.com/view/cvcreative/home) [[ECCV 2018]](https://sites.google.com/view/eccvfashion/) [[ICCV 2017]](https://sites.google.com/zalando.de/cvf-iccv2017/home?authuser=0)
- NeurIPS Workshop on Machine Learning for Creativity and Design. [[2019]](https://neurips2019creativity.github.io/) [[2018]](https://nips2018creativity.github.io/) [[2017]](https://nips2017creativity.github.io/)
- SIGIR Workshop On eCommerce. [[2019]](https://sigir-ecom.github.io/index.html) [[2018]](https://sigir-ecom.github.io/ecom2018/index.html) [[2017]](http://sigir-ecom.weebly.com/)
- CVPR Deep Learning for Content Creation Tutorial. [[2019]](https://nvlabs.github.io/dl-for-content-creation/)
- iMaterialist Fashion Challenge. [[CVPR 2019]](https://sites.google.com/view/fgvc6/competitions/imat-fashion-2019)
- iDesigner Challenge. [[CVPR 2019]](https://sites.google.com/view/fgvc6/competitions/idesigner-2019)
- FashionGen Challenge. [[ICCV 2019, ECCV 2018]](https://fashion-gen.com/)
- JD AI Fashion Challenge. [[ChinaMM 2018]](https://fashion-challenge.github.io/)
- Alibaba FashionAI Global Challenge. [[Tianchi]](http://fashionai.alibaba.com/)
- Artificial Intelligence on Fashion and Textile Conference. [[AIFT 2018]](https://www.polyu.edu.hk/itc/aift2018/)
- Fashion IQ Challenge. [[CVPR 2020]](https://sites.google.com/view/cvcreative2020/fashion-iq?authuser=0) [[ICCV 2019]](https://sites.google.com/view/lingir/fashion-iq)
- DeepFashion2 Challenge. [[CVPR 2020]](https://sites.google.com/view/cvcreative2020/deepfashion2?authuser=0) [[ICCV 2019]](https://sites.google.com/view/cvcreative/deepfashion2)

### Datasets
- WildAvatar (2024). [[Website]](https://arxiv.org/pdf/2407.02165)
- Fashionpedia. [[Website]](https://fashionpedia.github.io/home/index.html)
- DeepFashion2 Dataset. [[Website]]()
- DeepFashion Dataset. [[Website]](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html)
- FashionGen. [[Website]](https://fashion-gen.com/)
- FashionAI. [[Tianchi]](http://fashionai.alibaba.com/datasets/?spm=a2c22.11190735.991137.8.501b6d83ilPJsX)
- TaobaoClothMatch. [[Tianchi]](TaobaoClothMatch)
- Fashion-MNIST. [[Fashion-MNIST]](https://github.com/zalandoresearch/fashion-mnist)
- Fashion IQ. [[Website]](https://www.spacewu.com/posts/fashion-iq/)
- NTURGBD-Parsing-4K Dataset. [[Website](https://github.com/hongfz16/HCMoCo)]

## Garment Design
**Knitting 4D Garment with Elasticity Controlled for Body Motion.**
*[Zishun Liu](https://github.com/zishun), Xingjian Han, Yuchen Zhang, Xiangjia Chen, Yukun Lai, Eugeni L. Doubrovski, Emily Whiting, Charlie C.L. Wang.*
TOG 2021. [[PDF](https://1drv.ms/b/s!AsZuzkRqeoh6gsUNhZKQ0bRp-BDaJQ?e=XIegdJ)] [[Project](https://zishun.github.io/projects/Knitting4D/)]

**Garment Design with Generative Adversarial Networks.**
*Chenxi Yuan, Mohsen Moghaddam.*
KDD Workshop on AdvML 2020. [[PDF](https://arxiv.org/abs/2007.10947)]

## Fashion Style Influences
**From Paris to Berlin: Discovering Fashion Style Influences Around the World.**
*Ziad Al-Halah, Kristen Grauman.*
CVPR 2020. [[PDF](https://arxiv.org/abs/2004.01316)]

**From Culture to Clothing: Discovering the World Events Behind A Century of Fashion Images.**
*Wei-Lin Hsiao, Kristen Grauman.*
ICCV 2021. [[PDF](https://arxiv.org/abs/2102.01690)]

## Team and People
- [Real Virtual Humans](http://virtualhumans.mpi-inf.mpg.de/), **Max Planck Institute for Informatics**, by [Gerard Pons-Moll](http://virtualhumans.mpi-inf.mpg.de/people/pons-moll.html).
- [Perceiving Systems, Tübingen Campus](http://ps.is.tuebingen.mpg.de/), **Max Planck Institute for Intelligent Systems**, by [Michael Black](http://ps.is.tuebingen.mpg.de/person/black).
- [Visual Computing Lab](https://niessnerlab.org/opening.html), **Technical University of Munich (TUM)**, by [Prof. Dr. Matthias Nießner](https://niessnerlab.org/members/matthias_niessner/profile.html) and his [team](https://niessnerlab.org/team.html).
- [Broadband Network and Digital Media Lab](http://www.liuyebin.com/student.html), Department of Automation, **Tsinghua University**, by [Yebin Liu](http://www.liuyebin.com/).
- [Vision and Graphics Lab](https://ict.usc.edu/), **University of Southern California**, by [Hao Li](http://hao-li.com/Hao_Li/Hao_Li_-_publications.html).

## Dataset
- `SMPL`. To download the [SMPL-X](https://smpl-x.is.tue.mpg.de/), [SMPL+H](http://mano.is.tue.mpg.de/), and SMPL ([Male and Female](http://smpl.is.tue.mpg.de/), [Gender Neutral Model](http://smplify.is.tue.mpg.de/)) models, go to the respective project websites and register to get access to the downloads section. [[Code](https://github.com/vchoutas/smplx#loading-smpl-x-smplh-and-smpl)] A minimal loading sketch is shown after this list.
- `THUmanDataset`. [THUman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) is a 3D real-world human model dataset containing approximately 7000 models.
- `AGORA`. AGORA, introduced in a CVPR 2021 [paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Patel_AGORA_Avatars_in_Geography_Optimized_for_Regression_Analysis_CVPR_2021_paper.pdf), consists of 4240 scans spanning more than 350 unique subjects, all paired with SMPL-X fits.
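The snippet below is a minimal sketch of loading one of these body models with the [smplx](https://github.com/vchoutas/smplx) package linked in the `SMPL` entry above. It assumes the package and PyTorch are installed (`pip install smplx torch`) and that the registered SMPL-X files have already been downloaded into a local `models/smplx/` folder; the folder layout and the random shape coefficients are illustrative assumptions, not part of the official download instructions.

```python
# Minimal sketch (assumes: `pip install smplx torch` and SMPL-X model files
# such as SMPLX_NEUTRAL.npz placed under models/smplx/).
import torch
import smplx

# Build the gender-neutral SMPL-X body model from the local model folder.
model = smplx.create(
    model_path="models",   # root folder containing the smplx/ subfolder
    model_type="smplx",
    gender="neutral",
    num_betas=10,
)

# Sample random shape coefficients and run a forward pass in the rest pose.
betas = torch.randn(1, 10) * 0.5
output = model(betas=betas, return_verts=True)

print(output.vertices.shape)  # torch.Size([1, 10475, 3]) mesh vertices
print(output.joints.shape)    # torch.Size([1, J, 3]) 3D joint locations
```

The same `smplx.create` call can also load SMPL and SMPL+H by changing `model_type`, which is why the entry groups the three models together.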
## Applications
### Fitness Training
**AIFit: Automatic 3D Human-Interpretable Feedback Models for Fitness Training.**
*Mihai Fieraru, Mihai Zanfir, Silviu-Cristian Pirlea, Vlad Olaru, Cristian Sminchisescu.*
CVPR 2021. [[PDF](https://openaccess.thecvf.com/content/CVPR2021/papers/Fieraru_AIFit_Automatic_3D_Human-Interpretable_Feedback_Models_for_Fitness_Training_CVPR_2021_paper.pdf)] [[Project](http://vision.imar.ro/fit3d/)]