# ICON: Implicit Clothed humans Obtained from Normals

Yuliang Xiu · Jinlong Yang · Dimitrios Tzionas · Michael J. Black

CVPR 2022
## News :triangular_flag_on_post:
- [2022/12/15] ICON belongs to the past, [ECON](https://github.com/YuliangXiu/ECON) is the future!
- [2022/09/12] Applied [KeypointNeRF](https://markomih.github.io/KeypointNeRF/) to ICON; quantitative numbers are in [evaluation](docs/evaluation.md#benchmark-train-on-thuman20-test-on-cape)
- [2022/07/26] New cloth-refinement module is released, try `-loop_cloth`
- [2022/06/13] ETH Zürich students from the 3DV course created an add-on for [garment-extraction](docs/garment-extraction.md)
- [2022/05/16] BEV is supported as optional HPS by Yu Sun, see [commit #060e265](https://github.com/YuliangXiu/ICON/commit/060e265bd253c6a34e65c9d0a5288c6d7ffaf68e)
- [2022/05/15] Training code is released, please check [Training Instruction](docs/training.md)
- [2022/04/26] HybrIK (SMPL) is supported as optional HPS by Jiefeng Li, see [commit #3663704](https://github.com/YuliangXiu/ICON/commit/36637046dcbb5667cdfbee3b9c91b934d4c5dd05)
- [2022/03/05] PIXIE (SMPL-X), PARE (SMPL), PyMAF (SMPL) are all supported as optional HPS
## Who needs ICON?
- If you want to **Train & Evaluate** on **PIFu / PaMIR / ICON** using your own data, please check [dataset.md](./docs/dataset.md) to prepare the dataset, [training.md](./docs/training.md) for training, and [evaluation.md](./docs/evaluation.md) for benchmark evaluation.
- Given a raw RGB image, you could get:
  - image (png):
    - segmented human RGB
    - normal maps of body and cloth
    - pixel-aligned normal-RGB overlap
  - mesh (obj):
    - SMPL(-X) body from _PyMAF, PIXIE, PARE, HybrIK, BEV_
    - 3D clothed human reconstruction
    - 3D garments (requires 2D mask)
  - video (mp4):
    - self-rotated clothed human

| ![Intermediate Results](assets/intermediate_results.png) |
| :-------------------------------------------------------------: |
| _ICON's intermediate results_ |
| ![Iterative Refinement](assets/refinement.gif) |
| _ICON's SMPL Pose Refinement_ |
| ![Final Results](assets/overlap.gif) |
| _Image -- overlapped normal prediction -- ICON -- refined ICON_ |
| ![3D Garment](assets/garment.gif) |
| _3D Garment extracted from ICON using 2D mask_ |
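The SMPL pose refinement shown above is what the `-loop_smpl` option of the demo (next section) iterates, and the cloth refinement corresponds to `-loop_cloth`. As a minimal conceptual sketch, assuming hypothetical `normal_net` and `render_normals` callables rather than the repository's actual API, the SMPL stage can be pictured as optimizing the body parameters until the rendered body normals agree with the clothed normals predicted from the image:

```python
import torch

def refine_smpl(pose, betas, image, normal_net, render_normals, loop_smpl=100):
    """Conceptual sketch of SMPL pose refinement (hypothetical API, not ICON's real code)."""
    # Optimize copies of the SMPL pose/shape parameters.
    pose = pose.detach().clone().requires_grad_(True)
    betas = betas.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([pose, betas], lr=1e-2)
    for _ in range(loop_smpl):  # `-loop_smpl` sets the number of steps
        optimizer.zero_grad()
        # Assumes a differentiable renderer producing bare SMPL-body normal maps.
        body_normal = render_normals(pose, betas)
        # Image-conditioned prediction of the clothed-body normal map.
        cloth_normal = normal_net(image, body_normal)
        # Nudge pose/shape so the body normals agree with the clothed evidence.
        loss = torch.nn.functional.l1_loss(body_normal, cloth_normal)
        loss.backward()
        optimizer.step()
    return pose.detach(), betas.detach()
```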
## Instructions
- See [docs/installation.md](docs/installation.md) to install all the required packages and setup the models
- See [docs/dataset.md](docs/dataset.md) to synthesize the train/val/test dataset from THuman2.0
- See [docs/training.md](docs/training.md) to train your own model using THuman2.0
- See [docs/evaluation.md](docs/evaluation.md) to benchmark trained models on CAPE testset
- Add-on: [Garment Extraction from Fashion Images](docs/garment-extraction.md), supported by ETH Zürich students as a 3DV course project.
## Running Demo
```bash
cd ICON

# model_type:
#   "pifu"           reimplemented PIFu
#   "pamir"          reimplemented PaMIR
#   "icon-filter"    ICON w/ global encoder (continuous local wrinkles)
#   "icon-nofilter"  ICON w/o global encoder (correct global pose)
#   "icon-keypoint"  ICON w/ relative-spatial encoding (insight from KeypointNeRF)

python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 0 -in_dir ./examples -out_dir ./results -export_video -loop_smpl 100 -loop_cloth 200 -hps_type pixie
```
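As the flag names suggest, `-loop_smpl 100` and `-loop_cloth 200` set the iteration counts of the SMPL and cloth refinement stages, `-in_dir` and `-out_dir` point at the input images and the output location, and `-export_video` additionally writes the self-rotating result as an mp4. `-hps_type pixie` selects PIXIE as the HPS backend; PyMAF, PARE, HybrIK, and BEV are the alternatives listed in the News section above.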
## More Qualitative Results
| ![Comparison](assets/compare.gif) |
| :----------------------------------------------------------: |
| _Comparison with other state-of-the-art methods_ |
| ![extreme](assets/normal-pred.png) |
| _Predicted normals on in-the-wild images with extreme poses_ |
## Citation
```bibtex
@inproceedings{xiu2022icon,
title = {{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
author = {Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {13296-13306}
}
```

## Acknowledgments
We thank [Yao Feng](https://ps.is.mpg.de/person/yfeng), [Soubhik Sanyal](https://ps.is.mpg.de/person/ssanyal), [Qianli Ma](https://ps.is.mpg.de/person/qma), [Xu Chen](https://ait.ethz.ch/people/xu/), [Hongwei Yi](https://ps.is.mpg.de/person/hyi), [Chun-Hao Paul Huang](https://ps.is.mpg.de/person/chuang2), and [Weiyang Liu](https://wyliu.com/) for their feedback and discussions, [Tsvetelina Alexiadis](https://ps.is.mpg.de/person/talexiadis) for her help with the AMT perceptual study, [Taylor McConnell](https://ps.is.mpg.de/person/tmcconnell) for her voice over, [Benjamin Pellkofer](https://is.mpg.de/person/bpellkofer) for the webpage, and [Yuanlu Xu](https://web.cs.ucla.edu/~yuanluxu/) for his help in comparing with ARCH and ARCH++.
Special thanks to [Vassilis Choutas](https://ps.is.mpg.de/person/vchoutas) for sharing the code of [bvh-distance-queries](https://github.com/YuliangXiu/bvh-distance-queries).
Here are some great resources we benefit from:
- [MonoPortDataset](https://github.com/Project-Splinter/MonoPortDataset) for Data Processing
- [PaMIR](https://github.com/ZhengZerong/PaMIR), [PIFu](https://github.com/shunsukesaito/PIFu), [PIFuHD](https://github.com/facebookresearch/pifuhd), and [MonoPort](https://github.com/Project-Splinter/MonoPort) for Benchmark
- [SCANimate](https://github.com/shunsukesaito/SCANimate) and [AIST++](https://github.com/google/aistplusplus_api) for Animation
- [rembg](https://github.com/danielgatis/rembg) for Human Segmentation
- [PyTorch-NICP](https://github.com/wuhaozhe/pytorch-nicp) for normal-based non-rigid refinement
- [smplx](https://github.com/vchoutas/smplx), [PARE](https://github.com/mkocabas/PARE), [PyMAF](https://github.com/HongwenZhang/PyMAF), [PIXIE](https://github.com/YadiraF/PIXIE), [BEV](https://github.com/Arthur151/ROMP), and [HybrIK](https://github.com/Jeff-sjtu/HybrIK) for Human Pose & Shape Estimation
- [CAPE](https://github.com/qianlim/CAPE) and [THuman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) for Dataset
- [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differentiable Rendering

Some images used in the qualitative examples come from [pinterest.com](https://www.pinterest.com/).
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 ([CLIPE Project](https://www.clipe-itn.eu)).
## Contributors
Kudos to all of our amazing contributors! ICON thrives through open-source. In that spirit, we welcome all kinds of contributions from the community.
_Contributor avatars are randomly shuffled._
---
## License
This code and model are available for non-commercial scientific research purposes as defined in the [LICENSE](LICENSE) file. By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE).
## Disclosure
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB was a part-time employee of Amazon during this project, his research was performed solely at, and funded solely by, the Max Planck Society.
## Contact
For more questions, please contact [email protected]
For commercial licensing, please contact [email protected]