# FLAME: Articulated Expressive 3D Head Model

This is an official [FLAME](http://flame.is.tue.mpg.de/) repository.

We also provide [Tensorflow FLAME](https://github.com/TimoBolkart/TF_FLAME) and [PyTorch FLAME](https://github.com/HavenFeng/photometric_optimization) frameworks, and code to [convert from Basel Face Model to FLAME](https://github.com/TimoBolkart/BFM_to_FLAME).

<p align="center">
<img src="gifs/model_variations.gif">
</p>

FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the [scientific publication](https://ps.is.tuebingen.mpg.de/uploads_file/attachment/attachment/400/paper.pdf)

```
Learning a model of facial shape and expression from 4D scans
Tianye Li*, Timo Bolkart*, Michael J. Black, Hao Li, and Javier Romero
ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 2017
```
and the [supplementary video](https://youtu.be/36rPTkhiJTM).

This codebase demonstrates
 * **Sampling:** Load and evaluate the FLAME model for random parameters
 * **Landmark fitting:** Fit FLAME to 3D landmarks
 * **Scan fitting:** Fit FLAME to a 3D scan

<p align="center">
<img src="gifs/fitting_scan.gif" width="60%">
</p>

### Set-up

The code has been tested with Python 3.6.9.

Clone the git project:
```
git clone https://github.com/Rubikplayer/flame-fitting.git
```

Install pip and virtualenv:

```
sudo apt-get install python3-pip python3-venv
```

Set up a virtual environment:
```
mkdir <your_home_dir>/.virtualenvs
python3 -m venv <your_home_dir>/.virtualenvs/flame-fitting
```

Activate the virtual environment:
```
cd flame-fitting
source <your_home_dir>/.virtualenvs/flame-fitting/bin/activate
```

Make sure your pip version is up to date:
```
pip install -U pip
```

Some requirements can be installed with:
```
pip install -r requirements.txt
```

Install the mesh processing libraries from [MPI-IS/mesh](https://github.com/MPI-IS/mesh) within the virtual environment.

The scan-to-mesh distance used for fitting a scan depends on Eigen. Either download Eigen from [here](http://eigen.tuxfamily.org/index.php?title=Main_Page) OR clone the repository:
```
git clone https://gitlab.com/libeigen/eigen.git
```
After downloading Eigen, you need to compile the code in the directory `sbody/alignment/mesh_distance`. To do this, go to the directory:
```
cd sbody/alignment/mesh_distance
```
Edit the file `setup.py` to set `EIGEN_DIR` to the location of Eigen, then run:
```
make
```

### Data

To download the FLAME model, sign up and agree to the model license under [MPI-IS/FLAME](https://flame.is.tue.mpg.de/downloads). Then run the following script:
```
./fetch_FLAME.sh
```

### Demo

 * Load and evaluate the FLAME model: `hello_world.py`
 * Fit FLAME to 3D landmarks: `fit_lmk3d.py`
 * Fit FLAME to a 3D scan: `fit_scan.py`

Fitting a scan requires the scan and the FLAME model to be in the same local coordinate system. The `fit_scan.py` script provides different options via the variable `scale_unit`, which converts from meters [m] (default), centimeters [cm], or millimeters [mm]. Please specify the correct unit when running `fit_scan.py`. If the measurement unit is unknown, choose `scale_unit = 'NA'`.

### Landmarks

<p align="center">
<img src="data/landmarks_51_annotated.png" width="50%">
</p>

The provided demos fit FLAME to 3D landmarks or to a scan, using 3D landmarks for initialization and during fitting. Both demos use the 51 landmarks shown above. Providing the landmarks in exactly this order is essential. The landmarks can, for instance, be obtained with [MeshLab](https://www.meshlab.net/) using the PickPoints module. PickPoints outputs a `.pp` file containing the selected points. The `.pp` file can be loaded with the provided `load_picked_points(fname)` function in `fitting/landmarks.py`.

### Citing

When using this code in a scientific publication, please cite FLAME:
```
@article{FLAME:SiggraphAsia2017,
  title = {Learning a model of facial shape and expression from {4D} scans},
  author = {Li, Tianye and Bolkart, Timo and Black, Michael J. and Li, Hao and Romero, Javier},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
  volume = {36},
  number = {6},
  year = {2017},
  url = {https://doi.org/10.1145/3130800.3130813}
}
```

### License

The FLAME model is under a Creative Commons Attribution license. By using this code, you acknowledge that you have read the terms and conditions (https://flame.is.tue.mpg.de/modellicense.html), understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code. You further agree to cite the FLAME paper when reporting results with this model.

### Supported projects

Visit the [FLAME-Universe](https://github.com/TimoBolkart/FLAME-Universe) for an overview of FLAME-based projects.

FLAME supports several projects such as
* [CoMA: Convolutional Mesh Autoencoders](https://github.com/anuragranj/coma)
* [RingNet: 3D Face Shape and Expression Reconstruction from an Image without 3D Supervision](https://github.com/soubhiksanyal/RingNet)
* [VOCA: Voice Operated Character Animation](https://github.com/TimoBolkart/voca)
* [Expressive Body Capture: 3D Hands, Face, and Body from a Single Image](https://github.com/vchoutas/smplify-x)
* [ExPose: Monocular Expressive Body Regression through Body-Driven Attention](https://github.com/vchoutas/expose)
* [GIF: Generative Interpretable Faces](https://github.com/ParthaEth/GIF)
* [DECA: Detailed Expression Capture and Animation](https://github.com/YadiraF/DECA)

FLAME is part of [SMPL-X: A new joint 3D model of the human body, face and hands together](https://github.com/vchoutas/smplx).

### Acknowledgement

Code in `smpl_webuser` originates from the [SMPL Python code](http://smpl.is.tue.mpg.de/), and code in `sbody` originates from [SMALR](https://github.com/silviazuffi/smalr_online). We thank the authors for publishing these code packages.
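### Appendix: illustrative snippets

The `scale_unit` option in the Demo section amounts to rescaling the scan vertices into the model's meter scale before fitting. The sketch below illustrates that conversion only; `scan_to_meters` is a hypothetical helper, not the repo's actual implementation, and the assumption that `'NA'` leaves vertices untouched (deferring scale handling to the fitter) is ours:

```python
# Illustrative sketch of the unit conversion selected by `scale_unit`.
# Hypothetical helper -- fit_scan.py's real handling may differ.
UNIT_TO_METER = {"m": 1.0, "cm": 0.01, "mm": 0.001}

def scan_to_meters(vertices, scale_unit="m"):
    """Rescale an iterable of (x, y, z) scan vertices into meters.

    With scale_unit='NA' (unknown unit) the vertices are returned
    unchanged; the fitter would then have to estimate scale itself.
    """
    if scale_unit == "NA":
        return [tuple(v) for v in vertices]
    factor = UNIT_TO_METER[scale_unit]  # KeyError for unsupported units
    return [(x * factor, y * factor, z * factor) for (x, y, z) in vertices]
```

For example, a scan exported in millimeters with a vertex at `(1000, 0, 0)` maps to roughly `(1.0, 0.0, 0.0)` in the model's coordinate scale.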
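The Landmarks section notes that MeshLab's PickPoints module writes the selected points to a `.pp` file, loadable with the repo's `load_picked_points(fname)`. As an illustration of what that file contains, here is a minimal standalone parser; it assumes the common PickedPoints XML schema (`<point x=... y=... z=.../>` elements), and `load_pp_points` is a hypothetical name, not the repo's function:

```python
# Minimal sketch of a MeshLab PickPoints (.pp) reader, assuming the
# usual XML schema; use the repo's load_picked_points in practice.
import xml.etree.ElementTree as ET

def load_pp_points(fname):
    """Return the picked points as a list of (x, y, z) float tuples.

    Points are returned in file order; the FLAME fitting demos expect
    the 51 landmarks in the exact annotated order.
    """
    root = ET.parse(fname).getroot()
    return [tuple(float(pt.attrib[axis]) for axis in ("x", "y", "z"))
            for pt in root.iter("point")]
```

The order of `<point>` elements in the file is the order in which the points were picked, which is why picking the landmarks in the annotated sequence matters.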