[CVPR'23, Highlight] ECON: Explicit Clothed humans Optimized via Normal integration
https://github.com/YuliangXiu/ECON
- Host: GitHub
- URL: https://github.com/YuliangXiu/ECON
- Owner: YuliangXiu
- License: other
- Created: 2022-10-02T11:33:33.000Z (about 2 years ago)
- Default Branch: master
- Last Pushed: 2024-09-17T21:59:40.000Z (3 months ago)
- Last Synced: 2024-10-29T15:29:19.481Z (about 2 months ago)
- Topics: 3d-reconstruction, avatar-generator, computer-graphics, computer-vision, digital-twins, metaverse, normal-maps, pifu, pifuhd, smpl-body, smplx, virtual-humans
- Language: Python
- Homepage: https://xiuyuliang.cn/econ
- Size: 222 MB
- Stars: 1,102
- Watchers: 30
- Forks: 107
- Open Issues: 38
- Metadata Files:
    - Readme: README.md
    - License: LICENSE
README
ECON: Explicit Clothed humans Optimized via Normal integration
Yuliang Xiu · Jinlong Yang · Xu Cao · Dimitrios Tzionas · Michael J. Black
CVPR 2023 (Highlight)
ECON is designed for human digitization from a color image. It combines the best properties of implicit and explicit representations to infer high-fidelity 3D clothed humans from in-the-wild images, even with **loose clothing** or in **challenging poses**. ECON also supports **multi-person reconstruction** and **SMPL-X based animation**.
Quick links: **HuggingFace Demo** · **Google Colab** · **Blender Add-on** · **Windows** · **Docker**

## Applications
| ![SHHQ](assets/SHHQ.gif) | ![crowd](assets/crowd.gif) |
| :--------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------------------------: |
| "3D guidance" for [SHHQ Dataset](https://github.com/stylegan-human/StyleGAN-Human) | multi-person reconstruction w/ occlusion |
| ![Blender](assets/blender-demo.gif) | ![Animation](assets/animation.gif) |
| "All-in-One" [Blender add-on](https://github.com/kwan3854/CEB_ECON) | SMPL-X based Animation ([Instruction](https://github.com/YuliangXiu/ECON#animation-with-smpl-x-sequences-econ--hybrik-x)) |
## News :triangular_flag_on_post:
- [2024/09/16] 🌟 Bending leg issues [[1](https://github.com/YuliangXiu/ECON/issues/133),[2](https://github.com/YuliangXiu/ECON/issues/5),[3](https://github.com/YuliangXiu/ICON/issues/68),[4](https://github.com/huangyangyi/TeCH/issues/14)] get resolved with [Sapiens](https://rawalkhirodkar.github.io/sapiens/), details in [Bending legs](https://github.com/YuliangXiu/ECON/blob/master/docs/tricks.md#bending-legs).
- [2023/08/19] We released [TeCH](https://huangyangyi.github.io/TeCH/), which extends ECON with full texture support.
- [2023/06/01] [Lee Kwan Joong](https://github.com/kwan3854) updated a Blender add-on ([GitHub](https://github.com/kwan3854/CEB_ECON), [Tutorial](https://youtu.be/SDVfCeaI4AY)).
- [2023/04/16] is ready to use!
- [2023/02/27] ECON got accepted by CVPR 2023 as Highlight (top 10%)!
- [2023/01/12] [Carlos Barreto](https://twitter.com/carlosedubarret/status/1613252471035494403) created a Blender add-on ([Download](https://carlosedubarreto.gumroad.com/l/CEB_ECON), [Tutorial](https://youtu.be/sbWZbTf6ZYk)).
- [2023/01/08] [Teddy Huang](https://github.com/Teddy12155555) created [install-with-docker](docs/installation-docker.md) for ECON.
- [2023/01/06] [Justin John](https://github.com/justinjohn0306) and [Carlos Barreto](https://github.com/carlosedubarreto) created [install-on-windows](docs/installation-windows.md) for ECON.
- [2022/12/22] is now available, created by [Aron Arzoomand](https://github.com/AroArz).
- [2022/12/15] Both demo and arXiv are available.

## Key idea: d-BiNI
d-BiNI jointly optimizes front-back 2.5D surfaces such that: (1) high-frequency surface details agree with normal maps, (2) low-frequency surface variations, including discontinuities, align with SMPL-X surfaces, and (3) front-back 2.5D surface silhouettes are coherent with each other.
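A schematic of this joint objective, written with assumed notation rather than the paper's exact formulation: the front/back depth maps $z^f, z^b$ are optimized against the predicted normal maps $N^f, N^b$ and coarse SMPL-X-derived depths $\hat{z}^f, \hat{z}^b$, with weights $\lambda_d, \lambda_s$:

```math
\min_{z^f,\, z^b}\;
\mathcal{E}_n\!\left(z^f; N^f\right) + \mathcal{E}_n\!\left(z^b; N^b\right)
+ \lambda_d \left[\, \mathcal{E}_d\!\left(z^f; \hat{z}^f\right) + \mathcal{E}_d\!\left(z^b; \hat{z}^b\right) \right]
+ \lambda_s\, \mathcal{E}_s\!\left(z^f, z^b\right)
```

Here $\mathcal{E}_n$ encourages agreement with the predicted normal maps (1), $\mathcal{E}_d$ keeps low-frequency variations, including discontinuities, close to the SMPL-X surface (2), and $\mathcal{E}_s$ couples the front and back silhouettes (3).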
| Front-view | Back-view | Side-view |
| :----------------------: | :---------------------: | :-----------------------: |
| ![](assets/front-45.gif) | ![](assets/back-45.gif) | ![](assets/double-90.gif) |

Please consider citing BiNI if it also helps your project:
```bibtex
@inproceedings{cao2022bilateral,
title={Bilateral normal integration},
author={Cao, Xu and Santo, Hiroaki and Shi, Boxin and Okura, Fumio and Matsushita, Yasuyuki},
booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part I},
pages={552--567},
year={2022},
organization={Springer}
}
```
## Instructions
- See [installation doc for Docker](docs/installation-docker.md) to run a Docker container with a pre-built image for the ECON demo
- See [installation doc for Windows](docs/installation-windows.md) to install all the required packages and set up the models on _Windows_
- See [installation doc for Ubuntu](docs/installation-ubuntu.md) to install all the required packages and set up the models on _Ubuntu_
- See [magic tricks](docs/tricks.md) for a few technical tricks to further improve and accelerate ECON
- See [testing](docs/testing.md) to prepare the testing data and evaluate ECON

## Demos
- ### Quick Start
```bash
# For single-person image-based reconstruction (w/ all visualization steps, ~1.8 min)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results

# For multi-person image-based reconstruction (see configs/econ.yaml)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -multi

# To generate the demo video of the reconstruction results
python -m apps.multi_render -n
```
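As a minimal follow-up sketch (assuming the reconstructed meshes are written as OBJ files somewhere under `./results/econ/`; the exact layout and filenames depend on your inputs and config), the results can be inspected programmatically with `trimesh`:

```python
# Hypothetical inspection sketch: list and load the reconstructed OBJ meshes.
# Assumes the demo above wrote OBJ files under ./results/econ/ -- adjust the glob if needed.
from pathlib import Path

import trimesh

for obj_path in sorted(Path("./results/econ").rglob("*.obj")):
    mesh = trimesh.load(obj_path, force="mesh")  # force a single Trimesh even for multi-geometry files
    print(f"{obj_path.name}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
    # mesh.show()  # uncomment for an interactive preview (requires pyglet)
```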
- ### Animation with SMPL-X sequences (ECON + [HybrIK-X](https://github.com/Jeff-sjtu/HybrIK#smpl-x))
```bash
# 1. Use HybrIK-X to estimate SMPL-X pose sequences from the input video
# 2. Rig ECON's reconstructed mesh to be compatible with SMPL-X's parametrization (use -dress for dresses/skirts)
# 3. Animate with the SMPL-X pose sequences obtained from HybrIK-X, producing _motion.npz
# 4. Render the frames with Blender (rgb: partial texture, normal: normal colors), and combine them into the final video

python -m apps.avatarizer -n
python -m apps.animation -n -m

# Note: to install missing Python packages into Blender
# blender -b --python-expr "__import__('pip._internal')._internal.main(['install', 'moviepy'])"

wget https://download.is.tue.mpg.de/icon/econ_empty.blend
blender -b --python apps/blender_dance.py -- normal 10 > /tmp/NULL
```

Please consider citing HybrIK-X if it also helps your project:
```bibtex
@article{li2023hybrik,
title={HybrIK-X: Hybrid Analytical-Neural Inverse Kinematics for Whole-body Mesh Recovery},
author={Li, Jiefeng and Bian, Siyuan and Xu, Chao and Chen, Zhicun and Yang, Lixin and Lu, Cewu},
journal={arXiv preprint arXiv:2304.05690},
year={2023}
}
```

- ### Gradio Demo
We also provide a UI for testing our method, built with Gradio. This demo also supports pose- and prompt-guided human image generation! Running the following command in a terminal will launch the demo:
```bash
git checkout main
python app.py
```

This demo is also hosted on HuggingFace Space.
- ### Full Texture Generation
#### Method 1: ECON+TEXTure
Please first follow [TEXTure's installation guide](https://github.com/YuliangXiu/TEXTure#installation-floppy_disk) to set up the TEXTure environment.
```bash
# Generate the required UV atlas
python -m apps.avatarizer -n -uv

# Generate a new texture using TEXTure
git clone https://github.com/YuliangXiu/TEXTure
cd TEXTure
ln -s ../ECON/results/econ/cache
python -m scripts.run_texture --config_path=configs/text_guided/avatar.yaml
```

Then check `./experiments//mesh` for the results.
Please consider citing TEXTure if it also helps your project:
```bibtex
@article{richardson2023texture,
title={Texture: Text-guided texturing of 3d shapes},
author={Richardson, Elad and Metzer, Gal and Alaluf, Yuval and Giryes, Raja and Cohen-Or, Daniel},
journal={ACM Transactions on Graphics (TOG)},
publisher={ACM New York, NY, USA},
year={2023}
}
```

#### Method 2: TeCH
Please check out our new paper, *TeCH: Text-guided Reconstruction of Lifelike Clothed Humans* ([Page](https://huangyangyi.github.io/TeCH/), [Code](https://github.com/huangyangyi/TeCH))
Please consider citing TeCH if it also helps your project:
```bibtex
@inproceedings{huang2024tech,
title={{TeCH: Text-guided Reconstruction of Lifelike Clothed Humans}},
author={Huang, Yangyi and Yi, Hongwei and Xiu, Yuliang and Liao, Tingting and Tang, Jiaxiang and Cai, Deng and Thies, Justus},
booktitle={International Conference on 3D Vision (3DV)},
year={2024}
}
```
## More Qualitative Results
| ![OOD Poses](assets/OOD-poses.jpg) |
| :------------------------------------: |
| _Challenging Poses_ |
| ![OOD Clothes](assets/OOD-outfits.jpg) |
| _Loose Clothes_ |
## Citation
```bibtex
@inproceedings{xiu2023econ,
title = {{ECON: Explicit Clothed humans Optimized via Normal integration}},
author = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
}
```
## Acknowledgments
We thank [Lea Hering](https://is.mpg.de/person/lhering) and [Radek Daněček](https://is.mpg.de/person/rdanecek) for proofreading, [Yao Feng](https://ps.is.mpg.de/person/yfeng), [Haven Feng](https://is.mpg.de/person/hfeng), and [Weiyang Liu](https://wyliu.com/) for their feedback and discussions, and [Tsvetelina Alexiadis](https://ps.is.mpg.de/person/talexiadis) for her help with the AMT perceptual study.
Here are some great resources we benefit from:
- [ICON](https://github.com/YuliangXiu/ICON) for SMPL-X Body Fitting
- [BiNI](https://github.com/hoshino042/bilateral_normal_integration) for Bilateral Normal Integration
- [MonoPortDataset](https://github.com/Project-Splinter/MonoPortDataset) for Data Processing, [MonoPort](https://github.com/Project-Splinter/MonoPort) for fast implicit surface query
- [rembg](https://github.com/danielgatis/rembg) for Human Segmentation
- [Sapiens](https://rawalkhirodkar.github.io/sapiens/) for normal estimation
- [MediaPipe](https://google.github.io/mediapipe/getting_started/python.html) for full-body landmark estimation
- [PyTorch-NICP](https://github.com/wuhaozhe/pytorch-nicp) for non-rigid registration
- [smplx](https://github.com/vchoutas/smplx), [PyMAF-X](https://www.liuyebin.com/pymaf-x/), [PIXIE](https://github.com/YadiraF/PIXIE) for Human Pose & Shape Estimation
- [CAPE](https://github.com/qianlim/CAPE) and [THuman](https://github.com/ZhengZerong/DeepHuman/tree/master/THUmanDataset) for Dataset
- [PyTorch3D](https://github.com/facebookresearch/pytorch3d) for Differentiable Rendering

Some images used in the qualitative examples come from [pinterest.com](https://www.pinterest.com/).
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 ([CLIPE Project](https://www.clipe-itn.eu)).
## Contributors
Kudos to all of our amazing contributors! ECON thrives through open-source. In that spirit, we welcome all kinds of contributions from the community.
_Contributor avatars are randomly shuffled._
---
## License
This code and model are available for non-commercial scientific research purposes as defined in the [LICENSE](LICENSE) file. By downloading and using the code and model you agree to the terms in the [LICENSE](LICENSE).
## Disclosure
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a part-time employee of Meshcapade, his research was performed solely at, and funded solely by, the Max Planck Society.
## Contact
For technical questions, please contact [email protected]
For commercial licensing, please contact [email protected]