{"id":13958391,"url":"https://github.com/yfeng95/DECA","last_synced_at":"2025-07-20T23:31:36.497Z","repository":{"id":37246175,"uuid":"253947972","full_name":"yfeng95/DECA","owner":"yfeng95","description":"DECA: Detailed Expression Capture and Animation (SIGGRAPH 2021)","archived":false,"fork":false,"pushed_at":"2023-07-23T11:15:26.000Z","size":23507,"stargazers_count":2160,"open_issues_count":158,"forks_count":426,"subscribers_count":40,"default_branch":"master","last_synced_at":"2024-11-21T03:32:47.420Z","etag":null,"topics":["3d","alignment","depth","face","flame","model","reconstruction"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/yfeng95.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2020-04-08T00:51:49.000Z","updated_at":"2024-11-21T03:13:33.000Z","dependencies_parsed_at":"2023-02-01T06:15:57.131Z","dependency_job_id":"4c9cfe7f-8dec-4d42-8c7d-77d4b7a9151a","html_url":"https://github.com/yfeng95/DECA","commit_stats":null,"previous_names":["yadiraf/deca"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yfeng95%2FDECA","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yfeng95%2FDECA/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yfeng95%2FDECA/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/yfeng95%2FDECA/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/yfeng95","download_url":"https://codeload.github.com/yfeng95/DECA/tar.gz/refs/he
ads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":226845016,"owners_count":17691142,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d","alignment","depth","face","flame","model","reconstruction"],"created_at":"2024-08-08T13:01:31.437Z","updated_at":"2024-11-28T01:32:00.081Z","avatar_url":"https://github.com/yfeng95.png","language":"Python","readme":"# DECA: Detailed Expression Capture and Animation (SIGGRAPH 2021)\n\n\u003cp align=\"center\"\u003e \n\u003cimg src=\"TestSamples/teaser/results/teaser.gif\"\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003einput image, aligned reconstruction, animation with various poses \u0026 expressions\u003c/p\u003e\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/YadiraF/DECA/blob/master/Detailed_Expression_Capture_and_Animation.ipynb?authuser=1)\n\nThis is the official PyTorch implementation of DECA. \n\nDECA reconstructs a 3D head model with detailed facial geometry from a single input image. The resulting 3D head model can be easily animated. Please refer to the [arXiv paper](https://arxiv.org/abs/2012.04012) for more details.\n\nThe main features:\n\n* **Reconstruction:** produces head pose, shape, detailed face geometry, and lighting information from a single image.\n* **Animation:** animates the face with realistic wrinkle deformations.\n* **Robustness:** tested on facial images in unconstrained conditions. Our method is robust to various poses, illuminations, and occlusions. 
\n* **Accuracy:** state-of-the-art 3D face shape reconstruction on the [NoW Challenge](https://ringnet.is.tue.mpg.de/challenge) benchmark dataset.\n  \n## Getting Started\nClone the repo:\n  ```bash\n  git clone https://github.com/YadiraF/DECA\n  cd DECA\n  ```  \n\n### Requirements\n* Python 3.7 (numpy, skimage, scipy, opencv)  \n* PyTorch \u003e= 1.6 (pytorch3d)  \n* face-alignment (optional, for face detection)  \n  You can run \n  ```bash\n  pip install -r requirements.txt\n  ```\n  Or set up a virtual environment by running \n  ```bash\n  bash install_conda.sh\n  ```\n  For visualization, we use our own rasterizer, which relies on PyTorch JIT-compiled extensions. If a compilation error occurs, you can install [pytorch3d](https://github.com/facebookresearch/pytorch3d/blob/master/INSTALL.md) instead and set `--rasterizer_type=pytorch3d` when running the demos.\n\n### Usage\n1. Prepare data   \n    Run the script: \n    ```bash\n    bash fetch_data.sh\n    ```\n    \u003c!-- or manually download data from [FLAME 2020 model](https://flame.is.tue.mpg.de/download.php) and [DECA trained model](https://drive.google.com/file/d/1rp8kdyLPvErw2dTmqtjISRVvQLj6Yzje/view?usp=sharing), and put them in ./data  --\u003e  \n    (Optional, for albedo)   \n    Follow the instructions for the [Albedo model](https://github.com/TimoBolkart/BFM_to_FLAME) to get 'FLAME_albedo_from_BFM.npz' and put it into ./data\n\n2. Run demos  \n    a. **reconstruction**  \n    ```bash\n    python demos/demo_reconstruct.py -i TestSamples/examples --saveDepth True --saveObj True\n    ```   \n    to visualize the predicted 2D landmarks, 3D landmarks (red means non-visible points), coarse geometry, detailed geometry, and depth.   
\n    \u003cp align=\"center\"\u003e   \n    \u003cimg src=\"Doc/images/id04657-PPHljWCZ53c-000565_inputs_inputs_vis.jpg\"\u003e\n    \u003c/p\u003e  \n    \u003cp align=\"center\"\u003e   \n    \u003cimg src=\"Doc/images/IMG_0392_inputs_vis.jpg\"\u003e\n    \u003c/p\u003e  \n    You can also generate an obj file (which can be opened with Meshlab) that includes extracted texture from the input image.  \n\n    Please run `python demos/demo_reconstruct.py --help` for more details. \n\n    b. **expression transfer**   \n    ```bash\n    python demos/demo_transfer.py\n    ```   \n    Given an image, you can reconstruct its 3D face, then animate it by tranfering expressions from other images. \n    Using Meshlab to open the detailed mesh obj file, you can see something like that:\n    \u003cp align=\"center\"\u003e \n    \u003cimg src=\"Doc/images/soubhik.gif\"\u003e\n    \u003c/p\u003e  \n    (Thank Soubhik for allowing me to use his face ^_^)   \n    \n    Note that, you need to set '--useTex True' to get full texture.   \n\n    c. for the [teaser gif](https://github.com/YadiraF/DECA/results/teaser.gif) (**reposing** and **animation**)\n    ```bash\n    python demos/demo_teaser.py \n    ``` \n    \n    More demos and training code coming soon.\n\n## Evaluation\nDECA (ours) achieves 9% lower mean shape reconstruction error on the [NoW Challenge](https://ringnet.is.tue.mpg.de/challenge) dataset compared to the previous state-of-the-art method.  \nThe left figure compares the cumulative error of our approach and other recent methods (RingNet and Deng et al. have nearly identitical performance, so their curves overlap each other). Here we use point-to-surface distance as the error metric, following the NoW Challenge.  \n\u003cp align=\"left\"\u003e \n\u003cimg src=\"Doc/images/DECA_evaluation_github.png\"\u003e\n\u003c/p\u003e\n\nFor more details of the evaluation, please check our [arXiv paper](https://arxiv.org/abs/2012.04012). \n\n## Training\n1. 
Prepare Training Data\n\n    a. Download image data  \n    In DECA, we use [VGGFace2](https://arxiv.org/pdf/1710.08092.pdf), [BUPT-Balancedface](http://www.whdeng.cn/RFW/Trainingdataste.html) and [VoxCeleb2](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html)  \n\n    b. Prepare labels  \n    Use [FAN](https://github.com/1adrianb/2D-and-3D-face-alignment) to predict 68 2D landmarks  \n    Use [face_segmentation](https://github.com/YuvalNirkin/face_segmentation) to get the skin mask  \n\n    c. Modify the dataloader   \n    Dataloaders for different datasets are in decalib/datasets; set the correct paths for the prepared images and labels. \n\n2. Download the pretrained face recognition model  \n    We use the model from [VGGFace2-pytorch](https://github.com/cydonia999/VGGFace2-pytorch) to compute the identity loss;\n    download [resnet50_ft](https://drive.google.com/file/d/1A94PAAnwk6L7hXdBXLFosB_s0SzEhAFU/view)\n    and put it into ./data  \n\n3. Start training\n\n    Train from scratch: \n    ```bash\n    python main_train.py --cfg configs/release_version/deca_pretrain.yml \n    python main_train.py --cfg configs/release_version/deca_coarse.yml \n    python main_train.py --cfg configs/release_version/deca_detail.yml \n    ```\n    In the yml files, set the correct paths for 'output_dir' and 'pretrained_modelpath'.  \n    You can also use the [released model](https://drive.google.com/file/d/1rp8kdyLPvErw2dTmqtjISRVvQLj6Yzje/view) as the pretrained model and skip the pretraining step.\n\n## Related works  \n* for better emotion prediction: [EMOCA](https://github.com/radekd91/emoca)  \n* for better skin estimation: [TRUST](https://github.com/HavenFeng/TRUST)\n\n## Citation\nIf you find our work useful to your research, please consider citing:\n```\n@article{DECA:Siggraph2021,\n  title={Learning an Animatable Detailed {3D} Face Model from In-The-Wild Images},\n  author={Feng, Yao and Feng, Haiwen and Black, Michael J. and Bolkart, Timo},\n  journal = {ACM Transactions on Graphics (Proc. 
SIGGRAPH)}, \n  volume = {40}, \n  number = {8}, \n  year = {2021}, \n  url = {https://doi.org/10.1145/3450626.3459936} \n}\n```\n\n\u003c!-- ## Notes\n1. Training code will also be released in the future. --\u003e\n\n## License\nThis code and model are available for non-commercial scientific research purposes as defined in the [LICENSE](https://github.com/YadiraF/DECA/blob/master/LICENSE) file.\nBy downloading and using the code and model you agree to the terms in the [LICENSE](https://github.com/YadiraF/DECA/blob/master/LICENSE). \n\n## Acknowledgements\nFor functions or scripts that are based on external sources, we acknowledge the origin individually in each file.  \nHere are some great resources we benefited from:  \n- [FLAME_PyTorch](https://github.com/soubhiksanyal/FLAME_PyTorch) and [TF_FLAME](https://github.com/TimoBolkart/TF_FLAME) for the FLAME model  \n- [Pytorch3D](https://pytorch3d.org/), [neural_renderer](https://github.com/daniilidis-group/neural_renderer), [SoftRas](https://github.com/ShichenLiu/SoftRas) for rendering  \n- [kornia](https://github.com/kornia/kornia) for image/rotation processing  \n- [face-alignment](https://github.com/1adrianb/face-alignment) for cropping   \n- [FAN](https://github.com/1adrianb/2D-and-3D-face-alignment) for landmark detection\n- [face_segmentation](https://github.com/YuvalNirkin/face_segmentation) for the skin mask\n- [VGGFace2-pytorch](https://github.com/cydonia999/VGGFace2-pytorch) for the identity loss  \n\nWe would also like to thank other recent public 3D face reconstruction works that allow us to easily perform quantitative and qualitative comparisons :)  \n[RingNet](https://github.com/soubhiksanyal/RingNet), \n[Deep3DFaceReconstruction](https://github.com/microsoft/Deep3DFaceReconstruction/blob/master/renderer/rasterize_triangles.py), 
\n[Nonlinear_Face_3DMM](https://github.com/tranluan/Nonlinear_Face_3DMM),\n[3DDFA-v2](https://github.com/cleardusk/3DDFA_V2),\n[extreme_3d_faces](https://github.com/anhttran/extreme_3d_faces),\n[facescape](https://github.com/zhuhao-nju/facescape)\n\u003c!-- 3DMMasSTN, DenseReg, 3dmm_cnn, vrn, pix2vertex --\u003e\n","funding_links":[],"categories":["Portrait\\Pose\\3D Face","Python"],"sub_categories":["Web Services_Other"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fyfeng95%2FDECA","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fyfeng95%2FDECA","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fyfeng95%2FDECA/lists"}