Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
3DV 2021: Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry
- Host: GitHub
- URL: https://github.com/choyingw/synergynet
- Owner: choyingw
- License: mit
- Created: 2021-10-20T04:51:29.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2023-03-28T06:38:54.000Z (over 1 year ago)
- Last Synced: 2023-10-20T18:51:35.450Z (about 1 year ago)
- Topics: 2d-3d, 3d, 3d-face-alignment, 3d-face-reconstruction, 3dv2021, 3dvision, computer-graphics, computer-vision, deep-neural-networks, facial-keypoints, facial-landmarks, head-pose-estimation, pytorch
- Language: Jupyter Notebook
- Homepage:
- Size: 39.2 MB
- Stars: 318
- Watchers: 19
- Forks: 51
- Open Issues: 19
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# SynergyNet
3DV 2021: Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry

Cho-Ying Wu, Qiangeng Xu, Ulrich Neumann, CGIT Lab at University of Southern California
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/synergy-between-3dmm-and-3d-landmarks-for/face-alignment-on-aflw)](https://paperswithcode.com/sota/face-alignment-on-aflw?p=synergy-between-3dmm-and-3d-landmarks-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/synergy-between-3dmm-and-3d-landmarks-for/head-pose-estimation-on-aflw2000)](https://paperswithcode.com/sota/head-pose-estimation-on-aflw2000?p=synergy-between-3dmm-and-3d-landmarks-for)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/synergy-between-3dmm-and-3d-landmarks-for/face-alignment-on-aflw2000-3d)](https://paperswithcode.com/sota/face-alignment-on-aflw2000-3d?p=synergy-between-3dmm-and-3d-landmarks-for)

[paper] [video] [project page]
News [Jul 10, 2022]: Added a simplified API that returns 3D landmarks, face mesh, and face pose in a single line. See "Simplified API". It is convenient if you simply want to plug this method into your work.
News: Added a Colab demo:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q9HRLA3wGxz4IFIseZFK1maOyH0wutYk?usp=sharing)

News: Our new work [Cross-Modal Perceptionist], which builds on this SynergyNet project, is accepted to CVPR 2022.
## Advantages
:+1: SOTA on 3D facial alignment, face orientation estimation, and 3D face modeling.
:+1: Fast inference, about 3000 fps on a laptop RTX 2080.
:+1: Simple implementation using only widely used operations.

(This project is built and tested with Python 3.8 and PyTorch 1.9 on a compatible GPU.)
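The throughput claim above can be checked on your own hardware with a generic timing helper. This sketch is not part of the repo and assumes nothing about the model beyond it being a callable; wrap the SynergyNet forward pass in it after a few warm-up calls.

```python
import time

# Generic throughput timer: calls `fn` n_iters times and reports
# calls per second (fps when `fn` is one model forward pass).
def measure_fps(fn, n_iters=100):
    start = time.perf_counter()
    for _ in range(n_iters):
        fn()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed
```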
## Single Image Inference Demo
1. Clone
```shell
git clone https://github.com/choyingw/SynergyNet
cd SynergyNet
```
2. Use conda
```shell
conda create --name SynergyNet
conda activate SynergyNet
```
3. Install prerequisite packages: PyTorch 1.9 (should also be compatible with 1.0+ versions), torchvision, OpenCV, SciPy, Matplotlib, and Cython.
4. Download data [here] and [here]. Extract them under the repo root. These data are processed from [3DDFA] and [FSA-Net].
Download pretrained weights [here]. Put the model under 'pretrained/'.
5. Compile Sim3DR and FaceBoxes:
```shell
cd Sim3DR
./build_sim3dr.sh
cd ../FaceBoxes
./build_cpu_nms.sh
cd ..
```
6. Inference
```shell
python singleImage.py -f img
```
The default inference requires a compatible GPU. To run on a CPU, comment out the .cuda() calls and load the pretrained weights onto the CPU.
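For CPU-only machines, the usual PyTorch idiom is `map_location`. A minimal sketch follows; the checkpoint's internal key layout ("state_dict") is an assumption about the released .pth.tar files, not confirmed from the repo.

```python
import torch

# Load a GPU-trained checkpoint on a CPU-only machine: map_location
# remaps any CUDA-saved tensors onto the CPU at load time.
def load_weights_cpu(path):
    checkpoint = torch.load(path, map_location=torch.device("cpu"))
    # The weights may be nested under a key such as "state_dict"
    # (hypothetical layout); fall back to the raw object otherwise.
    if isinstance(checkpoint, dict) and "state_dict" in checkpoint:
        return checkpoint["state_dict"]
    return checkpoint
```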
## Simplified API
We provide a simple API for convenient usage if you want to plug this method into your work.
```python
import cv2
from synergy3DMM import SynergyNet
model = SynergyNet()
I = cv2.imread('your_image.jpg')  # replace with your image path
# get landmark [[y, x, z], 68 (points)], mesh [[y, x, z], 53215 (points)], and face pose (Euler angles [yaw, pitch, roll] and translation [y, x, z])
lmk3d, mesh, pose = model.get_all_outputs(I)
```
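The [y, x, z] ordering documented above differs from the (x, y) convention most drawing APIs expect. A minimal NumPy sketch, assuming one row per landmark; the exact array layout returned by get_all_outputs may differ:

```python
import numpy as np

# Convert landmarks given as rows of [y, x, z] (the ordering documented
# above) into (x, y) coordinates suitable for cv2.circle and similar
# drawing calls. The (68, 3) shape is an assumption.
def landmarks_to_xy(lmk3d):
    pts = np.asarray(lmk3d, dtype=float)  # (68, 3): rows of [y, x, z]
    return pts[:, [1, 0]]                 # (68, 2): rows of [x, y]
```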
We provide a simple script in singleImage_simple.py.
We also provide a setup.py file. Run
pip install -e .
so that you can do
from synergy3DMM import SynergyNet
from other directories. Note that [3dmm_data] and the [pretrained weight] (put the model under 'pretrained/') need to be present.
## Benchmark Evaluation
1. Follow Single Image Inference Demo: Steps 1-4
2. Benchmarking
```shell
python benchmark.py -w pretrained/best.pth.tar
```
Results are printed out, and visualizations of the first 50 examples are stored under 'results/' (see 'demo/' for some pre-generated samples as references).
Updates: Our best head pose estimation [pretrained model] (mean MAE: 3.31) is better than the number reported in the paper (3.35). Use -w to load different pretrained models.
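The pose output is a triple of Euler angles in degrees. For downstream use (e.g. drawing pose axes) they are typically converted to a rotation matrix; this sketch picks one common axis convention and composition order, which may not match the evaluation code exactly.

```python
import numpy as np

# Convert (yaw, pitch, roll) in degrees to a 3x3 rotation matrix.
# The composition order R = Rz @ Ry @ Rx is one common choice,
# not necessarily the convention used by this repo.
def euler_to_rotation(yaw, pitch, roll):
    y, p, r = np.deg2rad([yaw, pitch, roll])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(y), 0, np.sin(y)],
                   [0, 1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx
```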
## Training
1. Follow Single Image Inference Demo: Steps 1-4.
2. Download training data from [3DDFA]: train_aug_120x120.zip (containing about 680K images) and extract the zip file under the root folder.
3. Run
```shell
bash train_script.sh
```
4. Please refer to train_script.sh for hyperparameters, such as learning rate, epochs, or GPU device. The default settings take ~19 GB of GPU memory on a 3090 and about 6 hours to train. If your GPU has less memory, please decrease the batch size and learning rate proportionally.
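"Decrease the batch size and learning rate proportionally" is the linear scaling rule; a one-line sketch, where the base values in the test are placeholders rather than the repo's actual defaults:

```python
# Linear scaling rule: when the batch size shrinks by some factor,
# shrink the learning rate by the same factor.
def scale_lr(base_lr, base_batch_size, new_batch_size):
    return base_lr * new_batch_size / base_batch_size
```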
## Textured Artistic Face Meshes
1. Follow Single Image Inference Demo: Steps 1-5.
2. Download artistic faces data [here], which are from [AF-Dataset]. Download our predicted UV maps [here] by UV-texture GAN. Extract them under the root folder.
3. Run
```shell
python artistic.py -f art-all --png      # whole folder
python artistic.py -f art-all/122.png    # single image
```
Note that this artistic face dataset contains faces at many different levels and styles of abstraction. If a test image is close to a real face, the result is much better than for highly abstract samples.
## Textured Real Face Renderings
1. Follow Single Image Inference Demo: Steps 1-5.
2. Download our predicted UV maps and real face images for AFLW2000-3D [here] by UV-texture GAN. Extract them under the root folder.
3. Run
```shell
python uv_texture_realFaces.py -f texture_data/real --png                    # whole folder
python uv_texture_realFaces.py -f texture_data/real/image00002_real_A.png    # single image
```
The results (3D meshes and renderings) are stored under 'inference_output'.
## More Results
We show a comparison with [DECA] using the top-3 largest roll angle samples in AFLW2000-3D.
Facial alignment on AFLW2000-3D (NME of facial landmarks):
Face orientation estimation on AFLW2000-3D (MAE of Euler angles):
Results on artistic faces:
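The two quantitative comparisons above use standard metrics. Minimal NumPy sketches of both follow; the NME normalizer (bounding-box sqrt(w*h)) follows common AFLW2000-3D practice and is an assumption, not taken from the paper text.

```python
import numpy as np

# NME: mean per-landmark Euclidean error, normalized by the face
# bounding-box size sqrt(w * h) (a common choice for AFLW2000-3D).
def nme(pred, gt, bbox_wh):
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    errors = np.linalg.norm(pred - gt, axis=1)
    return errors.mean() / np.sqrt(bbox_wh[0] * bbox_wh[1])

# MAE of Euler angles: mean absolute difference over (yaw, pitch, roll),
# averaged across all test images.
def euler_mae(pred, gt):
    return np.mean(np.abs(np.asarray(pred, float) - np.asarray(gt, float)))
```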
**Related Project**
[Cross-Modal Perceptionist] (analysis on relation for voice and 3D face)
**Bibtex**
If you find our work useful, please consider citing it:
@INPROCEEDINGS{wu2021synergy,
author={Wu, Cho-Ying and Xu, Qiangeng and Neumann, Ulrich},
booktitle={2021 International Conference on 3D Vision (3DV)},
title={Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry},
year={2021}
}

**Acknowledgement**
The project is developed on [3DDFA] and [FSA-Net]. We thank them for their wonderful work, and thank [3DDFA-V2] for the face detector and rendering code.