{"id":13405659,"url":"https://github.com/TencentARC/GFPGAN","last_synced_at":"2025-03-14T10:31:11.867Z","repository":{"id":37503105,"uuid":"349321229","full_name":"TencentARC/GFPGAN","owner":"TencentARC","description":"GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.","archived":false,"fork":false,"pushed_at":"2024-04-02T16:39:30.000Z","size":5467,"stargazers_count":34910,"open_issues_count":354,"forks_count":5781,"subscribers_count":500,"default_branch":"master","last_synced_at":"2024-06-12T15:53:11.030Z","etag":null,"topics":["deep-learning","face-restoration","gan","gfpgan","image-restoration","pytorch","super-resolution"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/TencentARC.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-03-19T06:18:20.000Z","updated_at":"2024-06-18T10:49:25.135Z","dependencies_parsed_at":"2023-02-16T01:01:25.368Z","dependency_job_id":"b21e1bc0-0096-4297-b79b-b4193a194acc","html_url":"https://github.com/TencentARC/GFPGAN","commit_stats":{"total_commits":104,"total_committers":12,"mean_commits":8.666666666666666,"dds":"0.10576923076923073","last_synced_commit":"2eac2033893ca7f427f4035d80fe95b92649ac56"},"previous_names":[],"tags_count":15,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TencentARC%2FGFPGAN","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TencentARC%2FGFPGAN/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TencentARC%2FGFPGAN/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TencentARC%2FGFPGAN/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/TencentARC","download_url":"https://codeload.github.com/TencentARC/GFPGAN/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243561952,"owners_count":20311204,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","face-restoration","gan","gfpgan","image-restoration","pytorch","super-resolution"],"created_at":"2024-07-30T19:02:07.620Z","updated_at":"2025-03-14T10:31:11.860Z","avatar_url":"https://github.com/TencentARC.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/gfpgan_logo.png\" height=130\u003e\n\u003c/p\u003e\n\n## \u003cdiv align=\"center\"\u003e\u003cb\u003e\u003ca href=\"README.md\"\u003eEnglish\u003c/a\u003e | \u003ca href=\"README_CN.md\"\u003e简体中文\u003c/a\u003e\u003c/b\u003e\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\u003c!-- \u003ca 
href=\"https://twitter.com/_Xintao_\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/17445847/187162058-c764ced6-952f-404b-ac85-ba95cce18e7b.png\" width=\"4%\" alt=\"\" /\u003e\n\u003c/a\u003e --\u003e\n\n[![download](https://img.shields.io/github/downloads/TencentARC/GFPGAN/total.svg)](https://github.com/TencentARC/GFPGAN/releases)\n[![PyPI](https://img.shields.io/pypi/v/gfpgan)](https://pypi.org/project/gfpgan/)\n[![Open issue](https://img.shields.io/github/issues/TencentARC/GFPGAN)](https://github.com/TencentARC/GFPGAN/issues)\n[![Closed issue](https://img.shields.io/github/issues-closed/TencentARC/GFPGAN)](https://github.com/TencentARC/GFPGAN/issues)\n[![LICENSE](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/TencentARC/GFPGAN/blob/master/LICENSE)\n[![python lint](https://github.com/TencentARC/GFPGAN/actions/workflows/pylint.yml/badge.svg)](https://github.com/TencentARC/GFPGAN/blob/master/.github/workflows/pylint.yml)\n[![Publish-pip](https://github.com/TencentARC/GFPGAN/actions/workflows/publish-pip.yml/badge.svg)](https://github.com/TencentARC/GFPGAN/blob/master/.github/workflows/publish-pip.yml)\n\u003c/div\u003e\n\n1. :boom: **Updated** online demo: [![Replicate](https://img.shields.io/static/v1?label=Demo\u0026message=Replicate\u0026color=blue)](https://replicate.com/tencentarc/gfpgan). Here is the [backup](https://replicate.com/xinntao/gfpgan).\n1. :boom: **Updated** online demo: [![Huggingface Gradio](https://img.shields.io/static/v1?label=Demo\u0026message=Huggingface%20Gradio\u0026color=orange)](https://huggingface.co/spaces/Xintao/GFPGAN)\n1. [Colab Demo](https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo) for GFPGAN \u003ca href=\"https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"google colab logo\"\u003e\u003c/a\u003e; (Another [Colab Demo](https://colab.research.google.com/drive/1Oa1WwKB4M4l1GmR7CtswDVgOCOeSLChA?usp=sharing) for the original paper model)\n\n\u003c!-- 3. Online demo: [Replicate.ai](https://replicate.com/xinntao/gfpgan) (may need to sign in, return the whole image)\n4. Online demo: [Baseten.co](https://app.baseten.co/applications/Q04Lz0d/operator_views/8qZG6Bg) (backed by GPU, returns the whole image)\n5. We provide a *clean* version of GFPGAN, which can run without CUDA extensions. So that it can run in **Windows** or on **CPU mode**. --\u003e\n\n\u003e :rocket: **Thanks for your interest in our work. 
> :rocket: **Thanks for your interest in our work. You may also want to check our new updates on the *tiny models* for *anime images and videos* in [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN/blob/master/docs/anime_video_model.md)** :blush:

GFPGAN aims at developing a **Practical Algorithm for Real-world Face Restoration**.<br>
It leverages rich and diverse priors encapsulated in a pretrained face GAN (*e.g.*, StyleGAN2) for blind face restoration.

:question: Frequently Asked Questions can be found in [FAQ.md](FAQ.md).

:triangular_flag_on_post: **Updates**

- :white_check_mark: Add [RestoreFormer](https://github.com/wzhouxiff/RestoreFormer) inference codes.
- :white_check_mark: Add the [V1.4 model](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth), which produces slightly more details and better identity than V1.3.
- :white_check_mark: Add the **[V1.3 model](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth)**, which produces **more natural** restoration results and better results on *very low-quality* / *high-quality* inputs. See more in the [Model zoo](#european_castle-model-zoo) and [Comparisons.md](Comparisons.md).
- :white_check_mark: Integrated to [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the [Gradio Web Demo](https://huggingface.co/spaces/akhaliq/GFPGAN).
- :white_check_mark: Support enhancing non-face regions (background) with [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN).
- :white_check_mark: We provide a *clean* version of GFPGAN, which does not require CUDA extensions.
- :white_check_mark: We provide an updated model without colorizing faces.

---

If GFPGAN is helpful in your photos/projects, please help to :star: this repo or recommend it to your friends. Thanks :blush:
Other recommended projects:<br>
:arrow_forward: [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN): A practical algorithm for general image restoration<br>
:arrow_forward: [BasicSR](https://github.com/xinntao/BasicSR): An open-source image and video restoration toolbox<br>
:arrow_forward: [facexlib](https://github.com/xinntao/facexlib): A collection of useful face-related functions<br>
:arrow_forward: [HandyView](https://github.com/xinntao/HandyView): A PyQt5-based image viewer that is handy for viewing and comparison<br>

---

### :book: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior

> [[Paper](https://arxiv.org/abs/2101.04061)] &emsp; [[Project Page](https://xinntao.github.io/projects/gfpgan)] &emsp; [Demo] <br>
> [Xintao Wang](https://xinntao.github.io/), [Yu Li](https://yu-li.github.io/), [Honglun Zhang](https://scholar.google.com/citations?hl=en&user=KjQLROoAAAAJ), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en) <br>
> Applied Research Center (ARC), Tencent PCG

<p align="center">
  <img src="https://xinntao.github.io/projects/GFPGAN_src/gfpgan_teaser.jpg">
</p>

---

## :wrench: Dependencies and Installation

- Python >= 3.7 (recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.7](https://pytorch.org/)
- Optional: NVIDIA GPU + [CUDA](https://developer.nvidia.com/cuda-downloads)
- Optional: Linux

### Installation

We now provide a *clean* version of GFPGAN, which does not require customized CUDA extensions. <br>
If you want to use the original model in our paper, please see [PaperModel.md](PaperModel.md) for installation.

1. Clone repo

    ```bash
    git clone https://github.com/TencentARC/GFPGAN.git
    cd GFPGAN
    ```

1. Install dependent packages

    ```bash
    # Install basicsr - https://github.com/xinntao/BasicSR
    # We use BasicSR for both training and inference
    pip install basicsr

    # Install facexlib - https://github.com/xinntao/facexlib
    # We use the face detection and face restoration helpers in the facexlib package
    pip install facexlib

    pip install -r requirements.txt
    python setup.py develop

    # If you want to enhance the background (non-face) regions with Real-ESRGAN,
    # you also need to install the realesrgan package
    pip install realesrgan
    ```
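After installation, a quick sanity check is to import the core packages. This is a minimal sketch; it assumes the installs above succeeded and that the packages expose `__version__` (these repos generate a version module at install time):

```python
# Minimal environment check: these imports should succeed after the steps above.
import basicsr   # training / inference framework
import facexlib  # face detection and restoration helpers
import gfpgan    # this repo (registered by `python setup.py develop`)

print('basicsr', basicsr.__version__)
print('gfpgan', gfpgan.__version__)
```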
## :zap: Quick Inference

We take the V1.3 model as an example. More models can be found [here](#european_castle-model-zoo).

Download pre-trained models: [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth)

```bash
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P experiments/pretrained_models
```

**Inference!**

```bash
python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2
```

```console
Usage: python inference_gfpgan.py -i inputs/whole_imgs -o results -v 1.3 -s 2 [options]...

  -h                   show this help
  -i input             Input image or folder. Default: inputs/whole_imgs
  -o output            Output folder. Default: results
  -v version           GFPGAN model version. Options: 1 | 1.2 | 1.3. Default: 1.3
  -s upscale           The final upsampling scale of the image. Default: 2
  -bg_upsampler        Background upsampler. Default: realesrgan
  -bg_tile             Tile size for the background upsampler, 0 for no tiling during testing. Default: 400
  -suffix              Suffix of the restored faces
  -only_center_face    Only restore the center face
  -aligned             Inputs are aligned faces
  -ext                 Image extension. Options: auto | jpg | png. auto means using the same extension as the input. Default: auto
```

If you want to use the original model in our paper, please see [PaperModel.md](PaperModel.md) for installation and inference.
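Beyond the CLI, GFPGAN can also be called from Python through the `GFPGANer` class that `inference_gfpgan.py` uses internally. Below is a minimal sketch for the V1.3 model (the clean architecture with `channel_multiplier=2`); the input and output paths are assumptions for illustration:

```python
import os
import cv2
from gfpgan import GFPGANer

# Restorer for the V1.3 model (clean architecture, no CUDA extensions needed).
restorer = GFPGANer(
    model_path='experiments/pretrained_models/GFPGANv1.3.pth',
    upscale=2,             # final upsampling scale, like `-s 2`
    arch='clean',          # V1.2 / V1.3 use the clean architecture
    channel_multiplier=2,
    bg_upsampler=None)     # or a RealESRGANer instance to enhance backgrounds

img = cv2.imread('inputs/whole_imgs/my_photo.jpg', cv2.IMREAD_COLOR)  # BGR image
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True)

os.makedirs('results', exist_ok=True)
cv2.imwrite('results/restored.jpg', restored_img)  # restored faces pasted back
```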
## :european_castle: Model Zoo

| Version | Model Name | Description |
| :---: | :---: | :---: |
| V1.3 | [GFPGANv1.3.pth](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth) | Based on V1.2; **more natural** restoration results; better results on very low-quality / high-quality inputs. |
| V1.2 | [GFPGANCleanv1-NoCE-C2.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth) | No colorization; no CUDA extensions required. Trained with more data and pre-processing. |
| V1 | [GFPGANv1.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth) | The paper model, with colorization. |

The comparisons are in [Comparisons.md](Comparisons.md).

Note that V1.3 is not always better than V1.2. You may need to select different models based on your purpose and inputs.

| Version | Strengths | Weaknesses |
| :---: | :---: | :---: |
| V1.3 | ✓ natural outputs<br>✓ better results on very low-quality inputs<br>✓ works on relatively high-quality inputs<br>✓ allows repeated (twice) restorations | ✗ not very sharp<br>✗ slight changes in identity |
| V1.2 | ✓ sharper outputs<br>✓ with beauty makeup | ✗ some outputs are unnatural |

You can find **more models (such as the discriminators)** here: [[Google Drive](https://drive.google.com/drive/folders/17rLiFzcUMoQuhLnptDsKolegHWwJOnHu?usp=sharing)] or [[Tencent Cloud 腾讯微云](https://share.weiyun.com/ShYoCCoc)].
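If you prefer to fetch weights programmatically rather than with `wget`, a small helper can map the versions above to their release URLs. This is a minimal sketch, not part of the repo; `fetch_model` is a hypothetical name:

```python
import os
import urllib.request

# Release URLs taken from the Model Zoo table above.
MODEL_URLS = {
    '1.3': 'https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth',
    '1.2': 'https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth',
    '1': 'https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth',
}

def fetch_model(version: str, dest_dir: str = 'experiments/pretrained_models') -> str:
    """Download the requested GFPGAN weights if not already present; return the local path."""
    url = MODEL_URLS[version]
    os.makedirs(dest_dir, exist_ok=True)
    path = os.path.join(dest_dir, os.path.basename(url))
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    return path

# e.g. fetch_model('1.3') -> 'experiments/pretrained_models/GFPGANv1.3.pth'
```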
## :computer: Training

We provide the training codes for GFPGAN (used in our paper). <br>
You can adapt them to your own needs.

**Tips**

1. More high-quality faces can improve the restoration quality.
2. You may need to perform some pre-processing, such as beauty makeup.

**Procedures**

(You can try a simple version (`options/train_gfpgan_v1_simple.yml`) that does not require face component landmarks.)

1. Dataset preparation: [FFHQ](https://github.com/NVlabs/ffhq-dataset)

1. Download pre-trained models and other data. Put them in the `experiments/pretrained_models` folder.
    1. [Pre-trained StyleGAN2 model: StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/StyleGAN2_512_Cmul1_FFHQ_B12G4_scratch_800k.pth)
    1. [Component locations of FFHQ: FFHQ_eye_mouth_landmarks_512.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/FFHQ_eye_mouth_landmarks_512.pth)
    1. [A simple ArcFace model: arcface_resnet18.pth](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/arcface_resnet18.pth)

1. Modify the configuration file `options/train_gfpgan_v1.yml` accordingly.

1. Training

    ```bash
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan/train.py -opt options/train_gfpgan_v1.yml --launcher pytorch
    ```

## :scroll: License and Acknowledgement

GFPGAN is released under the Apache License Version 2.0.

## BibTeX

    @InProceedings{wang2021gfpgan,
        author = {Xintao Wang and Yu Li and Honglun Zhang and Ying Shan},
        title = {Towards Real-World Blind Face Restoration with Generative Facial Prior},
        booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
        year = {2021}
    }

## :e-mail: Contact

If you have any questions, please email `xintao.wang@outlook.com` or `xintaowang@tencent.com`.