{"id":15035388,"url":"https://github.com/open-mmlab/mmengine","last_synced_at":"2025-05-13T16:06:45.239Z","repository":{"id":58688519,"uuid":"456857425","full_name":"open-mmlab/mmengine","owner":"open-mmlab","description":"OpenMMLab Foundational Library for Training Deep Learning Models","archived":false,"fork":false,"pushed_at":"2025-03-04T12:22:42.000Z","size":4184,"stargazers_count":1264,"open_issues_count":243,"forks_count":386,"subscribers_count":24,"default_branch":"main","last_synced_at":"2025-04-12T07:16:40.376Z","etag":null,"topics":["ai","computer-vision","deep-learning","machine-learning","python","pytorch"],"latest_commit_sha":null,"homepage":"https://mmengine.readthedocs.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/open-mmlab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":".github/CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2022-02-08T09:05:09.000Z","updated_at":"2025-04-11T16:44:28.000Z","dependencies_parsed_at":"2023-11-14T10:27:42.292Z","dependency_job_id":"b2443268-1f32-4f5d-a0b6-321ba86104c9","html_url":"https://github.com/open-mmlab/mmengine","commit_stats":{"total_commits":867,"total_committers":144,"mean_commits":6.020833333333333,"dds":0.6793540945790081,"last_synced_commit":"f79111ecc0eea9fbb1b7d1361a79f7062ca1ac10"},"previous_names":[],"tags_count":29,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmengine","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposito
ries/open-mmlab%2Fmmengine/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmengine/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmengine/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/open-mmlab","download_url":"https://codeload.github.com/open-mmlab/mmengine/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250513382,"owners_count":21443200,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","computer-vision","deep-learning","machine-learning","python","pytorch"],"created_at":"2024-09-24T20:28:30.664Z","updated_at":"2025-04-23T20:46:18.053Z","avatar_url":"https://github.com/open-mmlab.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/58739961/187154444-fce76639-ac8d-429b-9354-c6fac64b7ef8.jpg\" width=\"600\"/\u003e\n  \u003cdiv\u003e\u0026nbsp;\u003c/div\u003e\n  \u003cdiv align=\"center\"\u003e\n    \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab website\u003c/font\u003e\u003c/b\u003e\n    \u003csup\u003e\n      \u003ca href=\"https://openmmlab.com\"\u003e\n        \u003ci\u003e\u003cfont size=\"4\"\u003eHOT\u003c/font\u003e\u003c/i\u003e\n      \u003c/a\u003e\n    \u003c/sup\u003e\n    \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\n    \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab platform\u003c/font\u003e\u003c/b\u003e\n    \u003csup\u003e\n      \u003ca 
href=\"https://platform.openmmlab.com\"\u003e\n        \u003ci\u003e\u003cfont size=\"4\"\u003eTRY IT OUT\u003c/font\u003e\u003c/i\u003e\n      \u003c/a\u003e\n    \u003c/sup\u003e\n  \u003c/div\u003e\n  \u003cdiv\u003e\u0026nbsp;\u003c/div\u003e\n\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/mmengine)](https://pypi.org/project/mmengine/)\n[![pytorch](https://img.shields.io/badge/pytorch-1.6~2.1-yellow)](#installation)\n[![PyPI](https://img.shields.io/pypi/v/mmengine)](https://pypi.org/project/mmengine)\n[![license](https://img.shields.io/github/license/open-mmlab/mmengine.svg)](https://github.com/open-mmlab/mmengine/blob/main/LICENSE)\n\n[Introduction](#introduction) |\n[Installation](#installation) |\n[Get Started](#get-started) |\n[📘Documentation](https://mmengine.readthedocs.io/en/latest/) |\n[🤔Reporting Issues](https://github.com/open-mmlab/mmengine/issues/new/choose)\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\nEnglish | [简体中文](README_zh-CN.md)\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://openmmlab.medium.com/\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://discord.com/channels/1037617289144569886/1073056342287323168\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://twitter.com/OpenMMLab\" 
style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://www.youtube.com/openmmlab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://space.bilibili.com/1293512903\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://www.zhihu.com/people/openmmlab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n\u003c/div\u003e\n\n## What's New\n\nv0.10.6 was released on 2025-01-13.\n\nHighlights:\n\n- Support custom `artifact_location` in MLflowVisBackend [#1505](#1505)\n- Enable `exclude_frozen_parameters` for `DeepSpeedEngine._zero3_consolidated_16bit_state_dict` [#1517](#1517)\n\nRead [Changelog](./docs/en/notes/changelog.md#v0104-2342024) for more details.\n\n## Introduction\n\nMMEngine is a foundational library for training deep learning models based on PyTorch. 
It serves as the training engine of all OpenMMLab codebases, which support hundreds of algorithms in various research areas. Moreover, MMEngine is generic enough to be applied to non-OpenMMLab projects. Its highlights are as follows:\n\n**Integrates mainstream large-scale model training frameworks**\n\n- [ColossalAI](https://mmengine.readthedocs.io/en/latest/common_usage/large_model_training.html#colossalai)\n- [DeepSpeed](https://mmengine.readthedocs.io/en/latest/common_usage/large_model_training.html#deepspeed)\n- [FSDP](https://mmengine.readthedocs.io/en/latest/common_usage/large_model_training.html#fullyshardeddataparallel-fsdp)\n\n**Supports a variety of training strategies**\n\n- [Mixed Precision Training](https://mmengine.readthedocs.io/en/latest/common_usage/speed_up_training.html#mixed-precision-training)\n- [Gradient Accumulation](https://mmengine.readthedocs.io/en/latest/common_usage/save_gpu_memory.html#gradient-accumulation)\n- [Gradient Checkpointing](https://mmengine.readthedocs.io/en/latest/common_usage/save_gpu_memory.html#gradient-checkpointing)\n\n**Provides a user-friendly configuration system**\n\n- [Pure Python-style configuration files, easy to navigate](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta)\n- [Plain-text-style configuration files, supporting JSON and YAML](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html)\n\n**Covers mainstream training monitoring platforms**\n\n- [TensorBoard](https://mmengine.readthedocs.io/en/latest/common_usage/visualize_training_log.html#tensorboard) | [WandB](https://mmengine.readthedocs.io/en/latest/common_usage/visualize_training_log.html#wandb) | [MLflow](https://mmengine.readthedocs.io/en/latest/common_usage/visualize_training_log.html#mlflow-wip)\n- [ClearML](https://mmengine.readthedocs.io/en/latest/common_usage/visualize_training_log.html#clearml) | 
[Neptune](https://mmengine.readthedocs.io/en/latest/common_usage/visualize_training_log.html#neptune) | [DVCLive](https://mmengine.readthedocs.io/en/latest/common_usage/visualize_training_log.html#dvclive) | [Aim](https://mmengine.readthedocs.io/en/latest/common_usage/visualize_training_log.html#aim)\n\n## Installation\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported PyTorch Versions\u003c/summary\u003e\n\n| MMEngine           | PyTorch      | Python         |\n| ------------------ | ------------ | -------------- |\n| main               | \u003e=1.6 \\\u003c=2.1 | \u003e=3.8, \\\u003c=3.11 |\n| \u003e=0.9.0, \\\u003c=0.10.4 | \u003e=1.6 \\\u003c=2.1 | \u003e=3.8, \\\u003c=3.11 |\n\n\u003c/details\u003e\n\nBefore installing MMEngine, please ensure that PyTorch has been successfully installed following the [official guide](https://pytorch.org/get-started/locally/).\n\nInstall MMEngine\n\n```bash\npip install -U openmim\nmim install mmengine\n```\n\nVerify the installation\n\n```bash\npython -c 'from mmengine.utils.dl_utils import collect_env;print(collect_env())'\n```\n\n## Get Started\n\nTaking the training of a ResNet-50 model on the CIFAR-10 dataset as an example, we will use MMEngine to build a complete, configurable training and validation process in less than 80 lines of code.\n\n\u003cdetails\u003e\n\u003csummary\u003eBuild Models\u003c/summary\u003e\n\nFirst, we need to define a **model** which 1) inherits from `BaseModel` and 2) accepts an additional argument `mode` in the `forward` method, in addition to those arguments related to the dataset.\n\n- During training, the value of `mode` is \"loss\", and the `forward` method should return a `dict` containing the key \"loss\".\n- During validation, the value of `mode` is \"predict\", and the forward method should return results containing both predictions and labels.\n\n```python\nimport torch.nn.functional as F\nimport torchvision\nfrom mmengine.model import BaseModel\n\nclass MMResNet50(BaseModel):\n    
def __init__(self):\n        super().__init__()\n        self.resnet = torchvision.models.resnet50()\n\n    def forward(self, imgs, labels, mode):\n        x = self.resnet(imgs)\n        if mode == 'loss':\n            return {'loss': F.cross_entropy(x, labels)}\n        elif mode == 'predict':\n            return x, labels\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eBuild Datasets\u003c/summary\u003e\n\nNext, we need to create **Dataset**s and **DataLoader**s for training and validation.\nIn this case, we simply use built-in datasets supported in TorchVision.\n\n```python\nimport torchvision.transforms as transforms\nfrom torch.utils.data import DataLoader\n\nnorm_cfg = dict(mean=[0.491, 0.482, 0.447], std=[0.202, 0.199, 0.201])\ntrain_dataloader = DataLoader(batch_size=32,\n                              shuffle=True,\n                              dataset=torchvision.datasets.CIFAR10(\n                                  'data/cifar10',\n                                  train=True,\n                                  download=True,\n                                  transform=transforms.Compose([\n                                      transforms.RandomCrop(32, padding=4),\n                                      transforms.RandomHorizontalFlip(),\n                                      transforms.ToTensor(),\n                                      transforms.Normalize(**norm_cfg)\n                                  ])))\nval_dataloader = DataLoader(batch_size=32,\n                            shuffle=False,\n                            dataset=torchvision.datasets.CIFAR10(\n                                'data/cifar10',\n                                train=False,\n                                download=True,\n                                transform=transforms.Compose([\n                                    transforms.ToTensor(),\n                                    transforms.Normalize(**norm_cfg)\n                                
])))\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eBuild Metrics\u003c/summary\u003e\n\nTo validate and test the model, we need to define an accuracy **Metric**. It must inherit from `BaseMetric` and implement the `process` and `compute_metrics` methods.\n\n```python\nfrom mmengine.evaluator import BaseMetric\n\nclass Accuracy(BaseMetric):\n    def process(self, data_batch, data_samples):\n        score, gt = data_samples\n        # Save the results of a batch to `self.results`\n        self.results.append({\n            'batch_size': len(gt),\n            'correct': (score.argmax(dim=1) == gt).sum().cpu(),\n        })\n\n    def compute_metrics(self, results):\n        total_correct = sum(item['correct'] for item in results)\n        total_size = sum(item['batch_size'] for item in results)\n        # Return a dictionary with the evaluated metrics,\n        # where the key is the name of the metric\n        return dict(accuracy=100 * total_correct / total_size)\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eBuild a Runner\u003c/summary\u003e\n\nFinally, we can construct a **Runner** from the previously defined model, dataloaders, and metric, together with some other configs, as shown below.\n\n```python\nfrom torch.optim import SGD\nfrom mmengine.runner import Runner\n\nrunner = Runner(\n    model=MMResNet50(),\n    work_dir='./work_dir',\n    train_dataloader=train_dataloader,\n    # a wrapper that executes back-propagation and gradient updates, etc.\n    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),\n    # set training configs such as the number of epochs\n    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),\n    val_dataloader=val_dataloader,\n    val_cfg=dict(),\n    val_evaluator=dict(type=Accuracy),\n)\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eLaunch 
Training\u003c/summary\u003e\n\n```python\nrunner.train()\n```\n\n\u003c/details\u003e\n\n## Learn More\n\n\u003cdetails\u003e\n\u003csummary\u003eTutorials\u003c/summary\u003e\n\n- [Runner](https://mmengine.readthedocs.io/en/latest/tutorials/runner.html)\n- [Dataset and DataLoader](https://mmengine.readthedocs.io/en/latest/tutorials/dataset.html)\n- [Model](https://mmengine.readthedocs.io/en/latest/tutorials/model.html)\n- [Evaluation](https://mmengine.readthedocs.io/en/latest/tutorials/evaluation.html)\n- [OptimWrapper](https://mmengine.readthedocs.io/en/latest/tutorials/optim_wrapper.html)\n- [Parameter Scheduler](https://mmengine.readthedocs.io/en/latest/tutorials/param_scheduler.html)\n- [Hook](https://mmengine.readthedocs.io/en/latest/tutorials/hook.html)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eAdvanced tutorials\u003c/summary\u003e\n\n- [Registry](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/registry.html)\n- [Config](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html)\n- [BaseDataset](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/basedataset.html)\n- [Data Transform](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/data_transform.html)\n- [Weight Initialization](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/initialize.html)\n- [Visualization](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/visualization.html)\n- [Abstract Data Element](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/data_element.html)\n- [Distribution Communication](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/distributed.html)\n- [Logging](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/logging.html)\n- [File IO](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/fileio.html)\n- [Global manager (ManagerMixin)](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/manager_mixin.html)\n- [Use modules from other 
libraries](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/cross_library.html)\n- [Test Time Augmentation](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/test_time_augmentation.html)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eExamples\u003c/summary\u003e\n\n- [Train a GAN](https://mmengine.readthedocs.io/en/latest/examples/train_a_gan.html)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eCommon Usage\u003c/summary\u003e\n\n- [Resume Training](https://mmengine.readthedocs.io/en/latest/common_usage/resume_training.html)\n- [Speed up Training](https://mmengine.readthedocs.io/en/latest/common_usage/speed_up_training.html)\n- [Save Memory on GPU](https://mmengine.readthedocs.io/en/latest/common_usage/save_gpu_memory.html)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eDesign\u003c/summary\u003e\n\n- [Hook](https://mmengine.readthedocs.io/en/latest/design/hook.html)\n- [Runner](https://mmengine.readthedocs.io/en/latest/design/runner.html)\n- [Evaluation](https://mmengine.readthedocs.io/en/latest/design/evaluation.html)\n- [Visualization](https://mmengine.readthedocs.io/en/latest/design/visualization.html)\n- [Logging](https://mmengine.readthedocs.io/en/latest/design/logging.html)\n- [Infer](https://mmengine.readthedocs.io/en/latest/design/infer.html)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eMigration guide\u003c/summary\u003e\n\n- [Migrate Runner from MMCV to MMEngine](https://mmengine.readthedocs.io/en/latest/migration/runner.html)\n- [Migrate Hook from MMCV to MMEngine](https://mmengine.readthedocs.io/en/latest/migration/hook.html)\n- [Migrate Model from MMCV to MMEngine](https://mmengine.readthedocs.io/en/latest/migration/model.html)\n- [Migrate Parameter Scheduler from MMCV to MMEngine](https://mmengine.readthedocs.io/en/latest/migration/param_scheduler.html)\n- [Migrate Data Transform to OpenMMLab 
2.0](https://mmengine.readthedocs.io/en/latest/migration/transform.html)\n\n\u003c/details\u003e\n\n## Contributing\n\nWe appreciate all contributions to improve MMEngine. Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for the contributing guidelines.\n\n## Citation\n\nIf you find this project useful in your research, please consider citing:\n\n```\n@article{mmengine2022,\n  title   = {{MMEngine}: OpenMMLab Foundational Library for Training Deep Learning Models},\n  author  = {MMEngine Contributors},\n  howpublished = {\\url{https://github.com/open-mmlab/mmengine}},\n  year={2022}\n}\n```\n\n## License\n\nThis project is released under the [Apache 2.0 license](LICENSE).\n\n## Ecosystem\n\n- [APES: Attention-based Point Cloud Edge Sampling](https://github.com/JunweiZheng93/APES)\n- [DiffEngine: diffusers training toolbox with mmengine](https://github.com/okotaku/diffengine)\n\n## Projects in OpenMMLab\n\n- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.\n- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.\n- [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries.\n- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox and benchmark.\n- [MMagic](https://github.com/open-mmlab/mmagic): Open**MM**Lab **A**dvanced, **G**enerative and **I**ntelligent **C**reation toolbox.\n- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.\n- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.\n- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.\n- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.\n- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox 
and benchmark.\n- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.\n- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.\n- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.\n- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.\n- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.\n- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.\n- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.\n- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.\n- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.\n- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.\n- [Playground](https://github.com/open-mmlab/playground): A central hub for gathering and showcasing amazing projects built upon OpenMMLab.\n","funding_links":[],"categories":["Computer Vision"],"sub_categories":["Others"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmengine","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopen-mmlab%2Fmmengine","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmengine/lists"}