{"id":13710824,"url":"https://github.com/open-mmlab/mmgeneration","last_synced_at":"2025-05-15T11:06:42.684Z","repository":{"id":37395918,"uuid":"357891182","full_name":"open-mmlab/mmgeneration","owner":"open-mmlab","description":"MMGeneration is a powerful toolkit for generative models, based on PyTorch and MMCV. ","archived":false,"fork":false,"pushed_at":"2023-09-05T13:15:50.000Z","size":27862,"stargazers_count":1970,"open_issues_count":53,"forks_count":232,"subscribers_count":25,"default_branch":"master","last_synced_at":"2025-04-14T19:57:16.348Z","etag":null,"topics":["diffusion-models","gan","generative","generative-adversarial-network","mmcv","openmmlab","pytorch"],"latest_commit_sha":null,"homepage":"https://mmgeneration.readthedocs.io/en/latest/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/open-mmlab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":".github/CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2021-04-14T12:06:16.000Z","updated_at":"2025-04-14T03:44:21.000Z","dependencies_parsed_at":"2024-04-16T22:03:35.967Z","dependency_job_id":"12187434-a98c-44a5-884a-18cd68cf6ccd","html_url":"https://github.com/open-mmlab/mmgeneration","commit_stats":null,"previous_names":[],"tags_count":11,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmgeneration","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmgeneration/tags","releases_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/open-mmlab%2Fmmgeneration/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmgeneration/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/open-mmlab","download_url":"https://codeload.github.com/open-mmlab/mmgeneration/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254328385,"owners_count":22052632,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["diffusion-models","gan","generative","generative-adversarial-network","mmcv","openmmlab","pytorch"],"created_at":"2024-08-02T23:01:01.178Z","updated_at":"2025-05-15T11:06:42.658Z","avatar_url":"https://github.com/open-mmlab.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/12726765/114528756-de55af80-9c7b-11eb-94d7-d3224ada1585.png\" width=\"400\"/\u003e\n      \u003cdiv\u003e\u0026nbsp;\u003c/div\u003e\n   \u003cdiv align=\"center\"\u003e\n     \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab website\u003c/font\u003e\u003c/b\u003e\n     \u003csup\u003e\n       \u003ca href=\"https://openmmlab.com\"\u003e\n         \u003ci\u003e\u003cfont size=\"4\"\u003eHOT\u003c/font\u003e\u003c/i\u003e\n       \u003c/a\u003e\n     \u003c/sup\u003e\n     \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\n     \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab platform\u003c/font\u003e\u003c/b\u003e\n     \u003csup\u003e\n       \u003ca href=\"https://platform.openmmlab.com\"\u003e\n         
\u003ci\u003e\u003cfont size=\"4\"\u003eTRY IT OUT\u003c/font\u003e\u003c/i\u003e\n       \u003c/a\u003e\n     \u003c/sup\u003e\n   \u003c/div\u003e\n   \u003cdiv\u003e\u0026nbsp;\u003c/div\u003e\n\u003c/div\u003e\n\n[![PyPI](https://img.shields.io/pypi/v/mmgen)](https://pypi.org/project/mmgen)\n[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmgeneration.readthedocs.io/en/latest/)\n[![badge](https://github.com/open-mmlab/mmgeneration/workflows/build/badge.svg)](https://github.com/open-mmlab/mmgeneration/actions)\n[![codecov](https://codecov.io/gh/open-mmlab/mmgeneration/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmgeneration)\n[![license](https://img.shields.io/github/license/open-mmlab/mmgeneration.svg)](https://github.com/open-mmlab/mmgeneration/blob/master/LICENSE)\n[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmgeneration.svg)](https://github.com/open-mmlab/mmgeneration/issues)\n[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmgeneration.svg)](https://github.com/open-mmlab/mmgeneration/issues)\n\n[📘Documentation](https://mmgeneration.readthedocs.io/en/latest/) |\n[🛠️Installation](https://mmgeneration.readthedocs.io/en/latest/get_started.html#installation) |\n[👀Model Zoo](https://mmgeneration.readthedocs.io/en/latest/modelzoo_statistics.html) |\n[🆕Update News](https://github.com/open-mmlab/mmgeneration/blob/master/docs/en/changelog.md) |\n[🚀Ongoing Projects](https://github.com/open-mmlab/mmgeneration/projects) |\n[🤔Reporting Issues](https://github.com/open-mmlab/mmgeneration/issues)\n\nEnglish | [简体中文](README_zh-CN.md)\n\n## What's New\n\nMMGeneration has been merged into [MMEditing](https://github.com/open-mmlab/mmediting/tree/1.x), where new generation tasks and models are now supported. 
We highlight the following new features:\n\n- 🌟 Text2Image\n\n  - ✅ [GLIDE](https://github.com/open-mmlab/mmediting/tree/1.x/projects/glide/configs/README.md)\n  - ✅ [Disco-Diffusion](https://github.com/open-mmlab/mmediting/tree/1.x/configs/disco_diffusion/README.md)\n  - ✅ [Stable-Diffusion](https://github.com/open-mmlab/mmediting/tree/1.x/configs/stable_diffusion/README.md)\n\n- 🌟 3D-aware Generation\n\n  - ✅ [EG3D](https://github.com/open-mmlab/mmediting/tree/1.x/configs/eg3d/README.md)\n\n## Introduction\n\nMMGeneration is a powerful toolkit for generative models, with a current focus on GANs. It is based on PyTorch and [MMCV](https://github.com/open-mmlab/mmcv). The master branch works with **PyTorch 1.5+**.\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/12726765/114534478-9a65a900-9c81-11eb-8087-de8b6816eed8.png\" width=\"800\"/\u003e\n\u003c/div\u003e\n\n## Major Features\n\n- **High-quality Training Performance:** We currently support training of unconditional GANs, internal GANs, and image translation models. Support for conditional models will come soon.\n- **Powerful Application Toolkit:** A rich toolkit of GAN applications is provided. GAN interpolation, GAN projection, and GAN manipulation are integrated into our framework. It's time to play with your GANs! ([Tutorial for applications](docs/en/tutorials/applications.md))\n- **Efficient Distributed Training for Generative Models:** For the highly dynamic training in generative models, we adopt a new way to train dynamic models with `MMDDP`. ([Tutorial for DDP](docs/en/tutorials/ddp_train_gans.md))\n- **New Modular Design for Flexible Combination:** A new design for complex loss modules is proposed for customizing the links between modules, enabling flexible combinations of different modules. 
([Tutorial for new modular design](docs/en/tutorials/customize_losses.md))\n\n\u003ctable\u003e\n\u003cthead\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\n\u003cdiv align=\"center\"\u003e\n  \u003cb\u003e Training Visualization\u003c/b\u003e\n  \u003cbr/\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/12726765/114509105-b6f4e780-9c67-11eb-8644-110b3cb01314.gif\" width=\"200\"/\u003e\n\u003c/div\u003e\u003c/td\u003e\n    \u003ctd\u003e\n\u003cdiv align=\"center\"\u003e\n  \u003cb\u003e GAN Interpolation\u003c/b\u003e\n  \u003cbr/\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/12726765/114679300-9fd4f900-9d3e-11eb-8f37-c36a018c02f7.gif\" width=\"200\"/\u003e\n\u003c/div\u003e\u003c/td\u003e\n    \u003ctd\u003e\n\u003cdiv align=\"center\"\u003e\n  \u003cb\u003e GAN Projector\u003c/b\u003e\n  \u003cbr/\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/12726765/114524392-c11ee200-9c77-11eb-8b6d-37bc637f5626.gif\" width=\"200\"/\u003e\n\u003c/div\u003e\u003c/td\u003e\n    \u003ctd\u003e\n\u003cdiv align=\"center\"\u003e\n  \u003cb\u003e GAN Manipulation\u003c/b\u003e\n  \u003cbr/\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/12726765/114523716-20302700-9c77-11eb-804e-327ae1ca0c5b.gif\" width=\"200\"/\u003e\n\u003c/div\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/thead\u003e\n\u003c/table\u003e\n\n## Highlights\n\n- **Positional Encoding as Spatial Inductive Bias in GANs (CVPR 2021)** has been released in `MMGeneration`. [\\[Config\\]](configs/positional_encoding_in_gans/README.md), [\\[Project Page\\]](https://nbei.github.io/gan-pos-encoding.html)\n- Conditional GANs are now supported in our toolkit. More methods and pre-trained weights will come soon.\n- Mixed-precision training (FP16) for StyleGAN2 is now supported. Please check [the comparison](configs/styleganv2/README.md) between different implementations.\n\n## Changelog\n\nv0.7.3 was released on 14/04/2023. 
Please refer to [changelog.md](docs/en/changelog.md) for details and release history.\n\n## Installation\n\nMMGeneration depends on [PyTorch](https://pytorch.org/) and [MMCV](https://github.com/open-mmlab/mmcv).\nBelow are quick steps for installation.\n\n**Step 1.**\nInstall PyTorch following the [official instructions](https://pytorch.org/get-started/locally/), e.g.\n\n```shell\npip3 install torch torchvision\n```\n\n**Step 2.**\nInstall MMCV with [MIM](https://github.com/open-mmlab/mim).\n\n```shell\npip3 install openmim\nmim install mmcv-full\n```\n\n**Step 3.**\nInstall MMGeneration from source.\n\n```shell\ngit clone https://github.com/open-mmlab/mmgeneration.git\ncd mmgeneration\npip3 install -e .\n```\n\nPlease refer to [get_started.md](docs/en/get_started.md) for more detailed instructions.\n\n## Getting Started\n\nPlease see [get_started.md](docs/en/get_started.md) for the basic usage of MMGeneration. [docs/en/quick_run.md](docs/en/quick_run.md) offers full guidance for a quick run. For other details and tutorials, please go to our [documentation](https://mmgeneration.readthedocs.io/).\n\n## Model Zoo\n\nThe following methods have been carefully studied and are supported in our framework:\n\n\u003cdetails open\u003e\n\u003csummary\u003eUnconditional GANs (click to collapse)\u003c/summary\u003e\n\n- ✅ [DCGAN](configs/dcgan/README.md) (ICLR'2016)\n- ✅ [WGAN-GP](configs/wgan-gp/README.md) (NIPS'2017)\n- ✅ [LSGAN](configs/lsgan/README.md) (ICCV'2017)\n- ✅ [GGAN](configs/ggan/README.md) (arXiv'2017)\n- ✅ [PGGAN](configs/pggan/README.md) (ICLR'2018)\n- ✅ [StyleGANV1](configs/styleganv1/README.md) (CVPR'2019)\n- ✅ [StyleGANV2](configs/styleganv2/README.md) (CVPR'2020)\n- ✅ [StyleGANV3](configs/styleganv3/README.md) (NeurIPS'2021)\n- ✅ [Positional Encoding in GANs](configs/positional_encoding_in_gans/README.md) (CVPR'2021)\n\n\u003c/details\u003e\n\n\u003cdetails open\u003e\n\u003csummary\u003eConditional GANs (click to collapse)\u003c/summary\u003e\n\n- ✅ 
[SNGAN](configs/sngan_proj/README.md) (ICLR'2018)\n- ✅ [Projection GAN](configs/sngan_proj/README.md) (ICLR'2018)\n- ✅ [SAGAN](configs/sagan/README.md) (ICML'2019)\n- ✅ [BIGGAN/BIGGAN-DEEP](configs/biggan/README.md) (ICLR'2019)\n\n\u003c/details\u003e\n\n\u003cdetails open\u003e\n\u003csummary\u003eTricks for GANs (click to collapse)\u003c/summary\u003e\n\n- ✅ [ADA](configs/ada/README.md) (NeurIPS'2020)\n\n\u003c/details\u003e\n\n\u003cdetails open\u003e\n\u003csummary\u003eImage2Image Translation (click to collapse)\u003c/summary\u003e\n\n- ✅ [Pix2Pix](configs/pix2pix/README.md) (CVPR'2017)\n- ✅ [CycleGAN](configs/cyclegan/README.md) (ICCV'2017)\n\n\u003c/details\u003e\n\n\u003cdetails open\u003e\n\u003csummary\u003eInternal Learning (click to collapse)\u003c/summary\u003e\n\n- ✅ [SinGAN](configs/singan/README.md) (ICCV'2019)\n\n\u003c/details\u003e\n\n\u003cdetails open\u003e\n\u003csummary\u003eDenoising Diffusion Probabilistic Models (click to collapse)\u003c/summary\u003e\n\n- ✅ [Improved DDPM](configs/improved_ddpm/README.md) (arXiv'2021)\n\n\u003c/details\u003e\n\n## Related Applications\n\n- ✅ [MMGEN-FaceStylor](https://github.com/open-mmlab/MMGEN-FaceStylor)\n\n## Contributing\n\nWe appreciate all contributions to improve MMGeneration. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/master/CONTRIBUTING.md) in MMCV for more details about the contributing guidelines.\n\n## Citation\n\nIf you find this project useful in your research, please consider citing:\n\n```BibTeX\n@misc{2021mmgeneration,\n    title={{MMGeneration}: OpenMMLab Generative Model Toolbox and Benchmark},\n    author={MMGeneration Contributors},\n    howpublished = {\\url{https://github.com/open-mmlab/mmgeneration}},\n    year={2021}\n}\n```\n\n## License\n\nThis project is released under the [Apache 2.0 license](LICENSE). Some operations in `MMGeneration` are covered by licenses other than Apache 2.0. 
Please check [LICENSES.md](LICENSES.md) carefully if you are using our code for commercial purposes.\n\n## Projects in OpenMMLab\n\n- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.\n- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.\n- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.\n- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.\n- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.\n- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.\n- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.\n- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.\n- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.\n- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.\n- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.\n- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.\n- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark.\n- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.\n- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.\n- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.\n- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video 
editing toolbox.\n- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.\n- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.\n","funding_links":[],"categories":["Computer Vision","Python","其他_机器视觉","Tools"],"sub_categories":["General Purpose CV","网络服务_其他","Generative Modeling"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmgeneration","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopen-mmlab%2Fmmgeneration","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmgeneration/lists"}