{"id":13586201,"url":"https://github.com/PaddlePaddle/PaddleGAN","last_synced_at":"2025-04-07T14:34:13.086Z","repository":{"id":37701471,"uuid":"273214029","full_name":"PaddlePaddle/PaddleGAN","owner":"PaddlePaddle","description":"PaddlePaddle GAN library, including lots of interesting applications like First-Order motion transfer,  Wav2Lip, picture repair, image editing, photo2cartoon, image style transfer, GPEN, and so on.","archived":false,"fork":false,"pushed_at":"2024-07-03T15:05:24.000Z","size":167503,"stargazers_count":7882,"open_issues_count":128,"forks_count":1246,"subscribers_count":108,"default_branch":"develop","last_synced_at":"2024-10-28T18:35:36.928Z","etag":null,"topics":["animeganv2","basicvsrplusplus","cyclegan","edvr","first-order-motion-model","gan","gpen","image-editing","image-generation","motion-transfer","photo2cartoon","pix2pix","psgan","realsr","resolution","stylegan2","super-resolution","wav2lip"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/PaddlePaddle.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-06-18T10:56:01.000Z","updated_at":"2024-10-27T23:43:19.000Z","dependencies_parsed_at":"2024-01-07T16:31:42.094Z","dependency_job_id":"e9595444-a706-4bed-b2e3-fc2bd1ecdb5d","html_url":"https://github.com/PaddlePaddle/PaddleGAN","commit_stats":{"total_commits":442,"total_committers":54,"mean_commits":8.185185185185185,"dds":0.8122171945701357,"last_synced_commit":"0927444b54427f29f94bcd85c42423368089d45c"},"previous_names":[],"tags_count
":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleGAN","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleGAN/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleGAN/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleGAN/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/PaddlePaddle","download_url":"https://codeload.github.com/PaddlePaddle/PaddleGAN/tar.gz/refs/heads/develop","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223285198,"owners_count":17119858,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["animeganv2","basicvsrplusplus","cyclegan","edvr","first-order-motion-model","gan","gpen","image-editing","image-generation","motion-transfer","photo2cartoon","pix2pix","psgan","realsr","resolution","stylegan2","super-resolution","wav2lip"],"created_at":"2024-08-01T15:05:23.590Z","updated_at":"2025-04-07T14:34:13.080Z","avatar_url":"https://github.com/PaddlePaddle.png","language":"Python","readme":"\nEnglish | [简体中文](./README_cn.md)\n\n# PaddleGAN\n\nPaddleGAN provides developers with high-performance implementations of classic and SOTA Generative Adversarial Networks, and helps developers quickly build, train, and deploy GANs for academic, entertainment, and industrial use.\n\nThe Generative Adversarial Network (GAN) was praised by \"the Father of Convolutional 
Networks\" **Yann LeCun** as **\"one of the most interesting ideas in the field of computer science in the past decade\"**. It is one of the research areas in deep learning that AI researchers focus on most.\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='./docs/imgs/ppgan.jpg'\u003e\n\u003c/div\u003e\n\n[![License](https://img.shields.io/badge/license-Apache%202-red.svg)](LICENSE)![python version](https://img.shields.io/badge/python-3.6+-orange.svg)\n\n## 🎪 Hot Activities\n\n- 2021.4.15~4.22\n\n  GAN 7-Day Course Camp: senior Baidu researchers help you learn basic and advanced GAN knowledge in 7 days!\n\n  **Course videos and related materials: https://aistudio.baidu.com/aistudio/course/introduce/16651**\n\n## 🚀 Recent Updates\n\n- 👶 **Young or Old？：[StyleGAN V2 Face Editing](./docs/en_US/tutorials/styleganv2editing.md)-Time Machine！** 👨‍🦳\n  - **[Online Tutorial](https://aistudio.baidu.com/aistudio/projectdetail/3251280?channelType=0\u0026channel=0)**\n  \u003cdiv align='center'\u003e\n    \u003cimg src='https://user-images.githubusercontent.com/48054808/146649047-765ec085-0a2c-4c88-9527-744836448651.gif' width='200'/\u003e\n  \u003c/div\u003e\n\n- 🔥 **Latest Release: [PP-MSVSR](./docs/en_US/tutorials/video_super_resolution.md)** 🔥\n    - **SOTA video super-resolution models**\n  \u003cdiv align='center'\u003e\n    \u003cimg src='https://user-images.githubusercontent.com/48054808/144848981-00c6ad21-0702-4381-9544-becb227ed9f0.gif' width='300'/\u003e\n  \u003c/div\u003e\n\n- 😍 **Boy or Girl？：[StyleGAN V2 Face Editing](./docs/en_US/tutorials/styleganv2editing.md)-Changing genders！** 😍\n  - **[Online Tutorial](https://aistudio.baidu.com/aistudio/projectdetail/2565277?contributionType=1)**\n  \u003cdiv align='center'\u003e\n    \u003cimg src='https://user-images.githubusercontent.com/48054808/141226707-58bd661e-2102-4fb7-8e18-c794a6b59ee8.gif' width='300'/\u003e\n  \u003c/div\u003e\n\n- 👩‍🚀 **A Space Odyssey 
：[LapStyle](./docs/zh_CN/tutorials/lap_style.md) image translation takes you on a trip around the universe**👨‍🚀\n\n  - **[Online Tutorial](https://aistudio.baidu.com/aistudio/projectdetail/2343740?contributionType=1)**\n\n    \u003cdiv align='center'\u003e\n      \u003cimg src='https://user-images.githubusercontent.com/48054808/133392621-9a552c46-841b-4fe4-bb24-7b0cbf86616c.gif' width='250'/\u003e\n      \u003cimg src='https://user-images.githubusercontent.com/48054808/133392630-c5329c4c-bc10-406e-a853-812a2b1f0fa6.gif' width='250'/\u003e\n      \u003cimg src='https://user-images.githubusercontent.com/48054808/133392652-f4811b1e-0676-4402-808b-a4c96c611368.gif' width='250'/\u003e\n    \u003c/div\u003e\n\n- 🧙‍♂️ **Latest Creative Project：create a magic/dynamic profile picture for your Hogwarts student ID** 🧙‍♀️\n\n  - **[Online Tutorial](https://aistudio.baidu.com/aistudio/projectdetail/2288888?channelType=0\u0026channel=0)**\n\n    \u003cdiv align='center'\u003e\n      \u003cimg src='https://ai-studio-static-online.cdn.bcebos.com/da1c51844ac048aa8d4fa3151be95215eee75d8bb488409d92ec17285b227c2c' width='200'/\u003e\n    \u003c/div\u003e\n\n- **💞 New Face Morphing function 💞: you can perfectly merge any two faces and make the new face show any facial expression!**\n\n  - Tutorial: https://aistudio.baidu.com/aistudio/projectdetail/2254031\n\n    \u003cdiv align='center'\u003e\n      \u003cimg src='https://user-images.githubusercontent.com/48054808/128299870-66a73bb3-57a4-4985-aadc-8ddeab048145.gif' width='200'/\u003e\n    \u003c/div\u003e\n\n- **Published a new version of the First Order Motion model with two impressive features:**\n  - High resolution: 512x512\n  - Face enhancement\n  - Tutorial: https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/tutorials/motion_driving.md\n\n- **New image translation ability: transfer photos into oil painting style**\n\n  - Complete deployment tutorial: https://github.com/wzmsltw/PaintTransformer\n\n    \u003cdiv 
align='center'\u003e\n      \u003cimg src='https://user-images.githubusercontent.com/48054808/129904830-8b87e310-ea51-4aff-b29b-88920ee82447.png' width='500'/\u003e\n    \u003c/div\u003e\n\n## Document Tutorial\n\n#### **Installation**\n\n* Environment dependencies:\n   - PaddlePaddle \u003e= 2.1.0\n   - Python \u003e= 3.6\n   - CUDA \u003e= 10.1\n* [Full installation tutorial](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/zh_CN/install.md)\n\n#### **Starter Tutorial**\n\n- [Quick start](./docs/en_US/get_started.md)\n- [Data Preparation](./docs/en_US/data_prepare.md)\n- [Instruction of APIs](./docs/en_US/apis/apps.md)\n- [Instruction of Config Files](./docs/en_US/config_doc.md)\n\n## Model Tutorial\n\n* [Pixel2Pixel](./docs/en_US/tutorials/pix2pix_cyclegan.md)\n* [CycleGAN](./docs/en_US/tutorials/pix2pix_cyclegan.md)\n* [LapStyle](./docs/en_US/tutorials/lap_style.md)\n* [PSGAN](./docs/en_US/tutorials/psgan.md)\n* [First Order Motion Model](./docs/en_US/tutorials/motion_driving.md)\n* [FaceParsing](./docs/en_US/tutorials/face_parse.md)\n* [AnimeGANv2](./docs/en_US/tutorials/animegan.md)\n* [U-GAT-IT](./docs/en_US/tutorials/ugatit.md)\n* [Photo2Cartoon](./docs/en_US/tutorials/photo2cartoon.md)\n* [Wav2Lip](./docs/en_US/tutorials/wav2lip.md)\n* [Single Image Super Resolution (SISR)](./docs/en_US/tutorials/single_image_super_resolution.md)\n  * Including: RealSR, ESRGAN, LESRCNN, PAN, DRN\n* [Video Super Resolution (VSR)](./docs/en_US/tutorials/video_super_resolution.md)\n  * Including: ⭐ PP-MSVSR ⭐, EDVR, BasicVSR, BasicVSR++\n* [StyleGAN2](./docs/en_US/tutorials/styleganv2.md)\n* [Pixel2Style2Pixel](./docs/en_US/tutorials/pixel2style2pixel.md)\n* [StarGANv2](docs/en_US/tutorials/starganv2.md)\n* [MPR Net](./docs/en_US/tutorials/mpr_net.md)\n* [FaceEnhancement](./docs/en_US/tutorials/face_enhancement.md)\n* [PReNet](./docs/en_US/tutorials/prenet.md)\n* [SwinIR](./docs/en_US/tutorials/swinir.md)\n* [InvDN](./docs/en_US/tutorials/invdn.md)\n* 
[AOT-GAN](./docs/en_US/tutorials/aotgan.md)\n* [NAFNet](./docs/en_US/tutorials/nafnet.md)\n* [GFPGan](./docs/en_US/tutorials/gfpgan.md)\n* [GPEN](./docs/en_US/tutorials/gpen.md)\n\n\n## Composite Application\n\n* [Video restoration](./docs/en_US/tutorials/video_restore.md)\n\n## Online Tutorial\n\nYou can run these projects in [AI Studio](https://aistudio.baidu.com/aistudio/projectoverview/public/1?kw=paddlegan) to learn how to use the models above:\n\n|Online Tutorial      |    Link  |\n|--------------|-----------|\n|Motion Driving: multi-person \"Mai-ha-hi\" | [Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1603391) |\n|Restore a video of Beijing from hundreds of years ago|[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1161285)|\n|Motion Driving: when \"Su Daqiang\" sings \"Unravel\" |[Click and Try](https://aistudio.baidu.com/aistudio/projectdetail/1048840)|\n\n## Examples\n\n### Face Morphing\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/129020371-75de20d1-705b-44b1-8254-e09710124244.gif' width='700' /\u003e\n\u003c/div\u003e\n\n### Image Translation\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119464966-d5c1c000-bd75-11eb-9696-9bb75357229f.gif' width='700' height='200'/\u003e\n\u003c/div\u003e\n\n\n### Old video restoration\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119469496-fc81f580-bd79-11eb-865a-5e38482b1ae8.gif' width='700'/\u003e\n\u003c/div\u003e\n\n\n\n### Motion driving\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119469551-0a377b00-bd7a-11eb-9117-e4871c8fb9c0.gif' width='700'\u003e\n\u003c/div\u003e\n\n\n### Super resolution\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119469753-3e12a080-bd7a-11eb-9cde-4fa01b3201ab.png' width='700' 
height='250'/\u003e\n\u003c/div\u003e\n\n\n\n### Makeup transfer\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119469834-4ff44380-bd7a-11eb-93b6-05b705dcfbf2.png' width='700' height='250'/\u003e\n\u003c/div\u003e\n\n\n\n### Face cartoonization\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119469952-6bf7e500-bd7a-11eb-89ad-9a78b10bd4ab.png' width='700' height='250'/\u003e\n\u003c/div\u003e\n\n\n\n### Realistic face cartoonization\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119470028-7f0ab500-bd7a-11eb-88e9-78a6b9e2e319.png' width='700' height='250'/\u003e\n\u003c/div\u003e\n\n\n\n### Photo animation\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119470099-9184ee80-bd7a-11eb-8b12-c9400fe01266.png' width='700' height='250'/\u003e\n\u003c/div\u003e\n\n\n\n### Lip-syncing\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='https://user-images.githubusercontent.com/48054808/119470166-a6618200-bd7a-11eb-9f98-58052ce21b14.gif' width='700'\u003e\n\u003c/div\u003e\n\n**NEW**: Try out the Lip-Syncing web demo on [Huggingface Spaces](https://huggingface.co/spaces) using [Gradio](https://github.com/gradio-app/gradio): [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/steerable-nafx)\n\n## Changelog\n- v2.1.0 (2021.12.8)\n   - Release the video super-resolution model PP-MSVSR and multiple pre-trained weights\n   - Release several SOTA video super-resolution models and their pre-trained weights, such as BasicVSR, IconVSR and BasicVSR++\n   - Release a lightweight motion-driving model (model size compressed from 229M to 10.1M) and optimize the fusion effect\n   - Release high-resolution FOMM and Wav2Lip pre-trained models\n   - Release several interesting applications based on 
StyleGANv2, such as face inversion, face fusion and face editing\n   - Release Baidu’s self-developed and effective style transfer model LapStyle and its interesting applications, and launch the official website [experience page](https://www.paddlepaddle.org.cn/paddlegan)\n   - Release the lightweight image super-resolution model PAN\n\n- v2.0.0 (2021.6.2)\n  - Release the [First Order Motion](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/motion_driving.md) model and multiple pre-trained weights\n  - Release applications that support [multi-face motion driving](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/motion_driving.md#1-test-for-face)\n  - Release the video super-resolution model [EDVR](https://github.com/PaddlePaddle/PaddleGAN/blob/develop/docs/en_US/tutorials/video_super_resolution.md) and multiple pre-trained weights\n  - Release the contents of the [7-day punch-in training camp](https://github.com/PaddlePaddle/PaddleGAN/tree/develop/education) for PaddleGAN\n  - Enhance the robustness of PaddleGAN on the Windows platform\n\n- v2.0.0-beta (2021.3.1)\n  - Completely switch to the Paddle 2.0.0 API.\n  - Release super-resolution models: ESRGAN, RealSR, LESRCNN, DRN, etc.\n  - Release the lip-syncing model Wav2Lip\n  - Release the street-view animation model AnimeGANv2\n  - Release the face cartoonization models U-GAT-IT and Photo2Cartoon\n  - Release the SOTA generative model StyleGAN2\n\n- v0.1.0 (2020.11.02)\n  - Release the first version; supported models include Pixel2Pixel, CycleGAN, and PSGAN, and supported applications include video frame interpolation, super resolution, image and video colorization, and image animation.\n  - Modular design and friendly interface.\n\n## Community\n\nScan the QR code below to join the [PaddleGAN QQ Group：1058398620], where you can get official technical support and communicate with other developers/friends. 
Look forward to your participation!\n\n\u003cdiv align='center'\u003e\n  \u003cimg src='./docs/imgs/qq.png' width='250' height='300'/\u003e\n\u003c/div\u003e\n\n### PaddleGAN Special Interest Group（SIG）\n\nThe SIG concept was first proposed and used by the [ACM (Association for Computing Machinery)](https://en.wikipedia.org/wiki/Association_for_Computing_Machinery) in 1961. Top international open-source organizations, including [Kubernetes](https://kubernetes.io/), adopt the SIG form so that members with the same specific interests can share and learn knowledge and develop projects together. Members do not need to be in the same country/region or organization; as long as they are like-minded, they can study, work, and play together toward the same goals.\n\nPaddleGAN SIG is such a developer organization, bringing together people who are interested in GANs: frontline PaddlePaddle developers, senior engineers from the world's top-500 companies, and students from top universities at home and abroad.\n\nWe continue to recruit interested and capable developers to join us in building this project and exploring more useful and interesting applications together.\n\nSIG contributions:\n\n- [zhen8838](https://github.com/zhen8838): contributed to AnimeGANv2.\n- [Jay9z](https://github.com/Jay9z): contributed to DCGAN and updated the install docs, etc.\n- [HighCWu](https://github.com/HighCWu): contributed to c-DCGAN and WGAN, and added support for `paddle.vision.datasets`.\n- [hao-qiang](https://github.com/hao-qiang) \u0026 [minivision-ai](https://github.com/minivision-ai): contributed to the photo2cartoon project.\n\n\n## Contributing\n\nContributions and suggestions are highly welcome. Most contributions require you to agree to a [Contributor License Agreement (CLA)](https://cla-assistant.io/PaddlePaddle/PaddleGAN).\nWhen you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA. Simply follow the instructions provided by the bot. 
You will only need to do this once across all repos using our CLA.\nFor more details, please refer to the [contribution guidelines](docs/en_US/contribute.md).\n\n## License\nPaddleGAN is released under the [Apache 2.0 license](LICENSE).\n","funding_links":[],"categories":["Python","Deep Learning Framework","📚 学习资料","Tools"],"sub_categories":["High-Level DL APIs","🌐 开源框架","Generative Modeling"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPaddlePaddle%2FPaddleGAN","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FPaddlePaddle%2FPaddleGAN","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPaddlePaddle%2FPaddleGAN/lists"}