{"id":13443834,"url":"https://github.com/open-mmlab/mmdetection3d","last_synced_at":"2025-04-23T20:48:00.568Z","repository":{"id":37341871,"uuid":"277982133","full_name":"open-mmlab/mmdetection3d","owner":"open-mmlab","description":"OpenMMLab's next-generation platform for general 3D object detection.","archived":false,"fork":false,"pushed_at":"2024-07-10T15:19:34.000Z","size":21095,"stargazers_count":5672,"open_issues_count":624,"forks_count":1603,"subscribers_count":59,"default_branch":"main","last_synced_at":"2025-04-12T19:18:58.894Z","etag":null,"topics":["3d-object-detection","object-detection","point-cloud","pytorch"],"latest_commit_sha":null,"homepage":"https://mmdetection3d.readthedocs.io/en/latest/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/open-mmlab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":".github/CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-07-08T03:39:45.000Z","updated_at":"2025-04-12T12:28:15.000Z","dependencies_parsed_at":"2022-07-13T15:59:47.230Z","dependency_job_id":"10cc9d4b-e385-42b6-8d16-780f07f8c521","html_url":"https://github.com/open-mmlab/mmdetection3d","commit_stats":{"total_commits":1270,"total_committers":113,"mean_commits":"11.238938053097344","dds":0.9125984251968504,"last_synced_commit":"fe25f7a51d36e3702f961e198894580d83c4387b"},"previous_names":[],"tags_count":35,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmdetection3d","tags_url":"https://repos.ecosys
te.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmdetection3d/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmdetection3d/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmdetection3d/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/open-mmlab","download_url":"https://codeload.github.com/open-mmlab/mmdetection3d/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250513665,"owners_count":21443203,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d-object-detection","object-detection","point-cloud","pytorch"],"created_at":"2024-07-31T03:02:11.503Z","updated_at":"2025-04-23T20:48:00.537Z","avatar_url":"https://github.com/open-mmlab.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"resources/mmdet3d-logo.png\" width=\"600\"/\u003e\n  \u003cdiv\u003e\u0026nbsp;\u003c/div\u003e\n  \u003cdiv align=\"center\"\u003e\n    \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab website\u003c/font\u003e\u003c/b\u003e\n    \u003csup\u003e\n      \u003ca href=\"https://openmmlab.com\"\u003e\n        \u003ci\u003e\u003cfont size=\"4\"\u003eHOT\u003c/font\u003e\u003c/i\u003e\n      \u003c/a\u003e\n    \u003c/sup\u003e\n    \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\n    \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab platform\u003c/font\u003e\u003c/b\u003e\n    \u003csup\u003e\n      \u003ca href=\"https://platform.openmmlab.com\"\u003e\n        
\u003ci\u003e\u003cfont size=\"4\"\u003eTRY IT OUT\u003c/font\u003e\u003c/i\u003e\n      \u003c/a\u003e\n    \u003c/sup\u003e\n  \u003c/div\u003e\n  \u003cdiv\u003e\u0026nbsp;\u003c/div\u003e\n\n[![PyPI](https://img.shields.io/pypi/v/mmdet3d)](https://pypi.org/project/mmdet3d)\n[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection3d.readthedocs.io/en/latest/)\n[![badge](https://github.com/open-mmlab/mmdetection3d/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdetection3d/actions)\n[![codecov](https://codecov.io/gh/open-mmlab/mmdetection3d/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdetection3d)\n[![license](https://img.shields.io/github/license/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/blob/main/LICENSE)\n[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/issues)\n[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmdetection3d.svg)](https://github.com/open-mmlab/mmdetection3d/issues)\n\n[📘Documentation](https://mmdetection3d.readthedocs.io/en/latest/) |\n[🛠️Installation](https://mmdetection3d.readthedocs.io/en/latest/get_started.html) |\n[👀Model Zoo](https://mmdetection3d.readthedocs.io/en/latest/model_zoo.html) |\n[🆕Update News](https://mmdetection3d.readthedocs.io/en/latest/notes/changelog.html) |\n[🚀Ongoing Projects](https://github.com/open-mmlab/mmdetection3d/projects) |\n[🤔Reporting Issues](https://github.com/open-mmlab/mmdetection3d/issues/new/choose)\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n\nEnglish | [简体中文](README_zh-CN.md)\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://openmmlab.medium.com/\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg 
src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://discord.com/channels/1037617289144569886/1046608014234370059\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://twitter.com/OpenMMLab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://www.youtube.com/openmmlab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://space.bilibili.com/1293512903\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://www.zhihu.com/people/openmmlab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg 
src=\"https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n\u003c/div\u003e\n\n## Introduction\n\nMMDetection3D is an open source object detection toolbox based on PyTorch, working toward a next-generation platform for general 3D detection. It is a part of the [OpenMMLab](https://openmmlab.com/) project.\n\nThe main branch works with **PyTorch 1.8+**.\n\n![demo image](resources/mmdet3d_outdoor_demo.gif)\n\n\u003cdetails open\u003e\n\u003csummary\u003eMajor features\u003c/summary\u003e\n\n- **Support multi-modality/single-modality detectors out of the box**\n\n  It directly supports multi-modality/single-modality detectors including MVXNet, VoteNet, PointPillars, etc.\n\n- **Support indoor/outdoor 3D detection out of the box**\n\n  It directly supports popular indoor and outdoor 3D detection datasets, including ScanNet, SUNRGB-D, Waymo, nuScenes, Lyft, and KITTI. For the nuScenes dataset, we also support the [nuImages dataset](https://github.com/open-mmlab/mmdetection3d/tree/main/configs/nuimages).\n\n- **Natural integration with 2D detection**\n\n  All **300+ models and methods from 40+ papers**, as well as the modules, supported in [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase.\n\n- **High efficiency**\n\n  It trains faster than other codebases. The main results are shown below. Details can be found in [benchmark.md](./docs/en/notes/benchmarks.md). We compare the number of samples trained per second (the higher, the better). 
The models that are not supported by other codebases are marked by `✗`.\n\n  |       Methods       | MMDetection3D | [OpenPCDet](https://github.com/open-mmlab/OpenPCDet) | [votenet](https://github.com/facebookresearch/votenet) | [Det3D](https://github.com/poodarchu/Det3D) |\n  | :-----------------: | :-----------: | :--------------------------------------------------: | :----------------------------------------------------: | :-----------------------------------------: |\n  |       VoteNet       |      358      |                          ✗                           |                           77                           |                      ✗                      |\n  |  PointPillars-car   |      141      |                          ✗                           |                           ✗                            |                     140                     |\n  | PointPillars-3class |      107      |                          44                          |                           ✗                            |                      ✗                      |\n  |       SECOND        |      40       |                          30                          |                           ✗                            |                      ✗                      |\n  |       Part-A2       |      17       |                          14                          |                           ✗                            |                      ✗                      |\n\n\u003c/details\u003e\n\nLike [MMDetection](https://github.com/open-mmlab/mmdetection) and [MMCV](https://github.com/open-mmlab/mmcv), MMDetection3D can also be used as a library to support different projects on top of it.\n\n## What's New\n\n### Highlight\n\nIn version 1.4, MMDetection3D refactors the Waymo dataset and accelerates the preprocessing, training/testing setup, and evaluation of the Waymo dataset. 
We also extend the support for camera-based 3D object detection models, such as monocular and BEV methods, on Waymo. A detailed description of the Waymo data is provided [here](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/waymo.html).\n\nIn addition, version 1.4 provides [Waymo-mini](https://download.openmmlab.com/mmdetection3d/data/waymo_mmdet3d_after_1x4/waymo_mini.tar.gz) to help community users get started with Waymo and use it for quick iterative development.\n\n**v1.4.0** was released on 8/1/2024:\n\n- Support the training of [DSVT](https://arxiv.org/abs/2301.06051) in `projects`\n- Support [Nerf-Det](https://arxiv.org/abs/2307.14620) in `projects`\n- Refactor the Waymo dataset\n\n**v1.3.0** was released on 18/10/2023:\n\n- Support [CENet](https://arxiv.org/abs/2207.12691) in `projects`\n- Enhance demos with new 3D inferencers\n\n**v1.2.0** was released on 4/7/2023:\n\n- Support [New Config Type](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta) in `mmdet3d/configs`\n- Support the inference of [DSVT](https://arxiv.org/abs/2301.06051) in `projects`\n- Support downloading datasets from [OpenDataLab](https://opendatalab.com/) using `mim`\n\n**v1.1.1** was released on 30/5/2023:\n\n- Support [TPVFormer](https://arxiv.org/pdf/2302.07817.pdf) in `projects`\n- Support the training of BEVFusion in `projects`\n- Support a LiDAR-based 3D semantic segmentation benchmark\n\n## Installation\n\nPlease refer to [Installation](https://mmdetection3d.readthedocs.io/en/latest/get_started.html) for installation instructions.\n\n## Getting Started\n\nFor detailed user guides and advanced guides, please refer to our [documentation](https://mmdetection3d.readthedocs.io/en/latest/):\n\n\u003cdetails\u003e\n\u003csummary\u003eUser Guides\u003c/summary\u003e\n\n- [Train \u0026 
Test](https://mmdetection3d.readthedocs.io/en/latest/user_guides/index.html#train-test)\n  - [Learn about Configs](https://mmdetection3d.readthedocs.io/en/latest/user_guides/config.html)\n  - [Coordinate System](https://mmdetection3d.readthedocs.io/en/latest/user_guides/coord_sys_tutorial.html)\n  - [Dataset Preparation](https://mmdetection3d.readthedocs.io/en/latest/user_guides/dataset_prepare.html)\n  - [Customize Data Pipelines](https://mmdetection3d.readthedocs.io/en/latest/user_guides/data_pipeline.html)\n  - [Test and Train on Standard Datasets](https://mmdetection3d.readthedocs.io/en/latest/user_guides/train_test.html)\n  - [Inference](https://mmdetection3d.readthedocs.io/en/latest/user_guides/inference.html)\n  - [Train with Customized Datasets](https://mmdetection3d.readthedocs.io/en/latest/user_guides/new_data_model.html)\n- [Useful Tools](https://mmdetection3d.readthedocs.io/en/latest/user_guides/index.html#useful-tools)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eAdvanced Guides\u003c/summary\u003e\n\n- [Datasets](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/index.html#datasets)\n  - [KITTI Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/kitti.html)\n  - [NuScenes Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/nuscenes.html)\n  - [Lyft Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/lyft.html)\n  - [Waymo Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/waymo.html)\n  - [SUN RGB-D Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/sunrgbd.html)\n  - [ScanNet Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/scannet.html)\n  - [S3DIS Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/s3dis.html)\n  - [SemanticKITTI 
Dataset](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/datasets/semantickitti.html)\n- [Supported Tasks](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/index.html#supported-tasks)\n  - [LiDAR-Based 3D Detection](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/supported_tasks/lidar_det3d.html)\n  - [Vision-Based 3D Detection](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/supported_tasks/vision_det3d.html)\n  - [LiDAR-Based 3D Semantic Segmentation](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/supported_tasks/lidar_sem_seg3d.html)\n- [Customization](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/index.html#customization)\n  - [Customize Datasets](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/customize_dataset.html)\n  - [Customize Models](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/customize_models.html)\n  - [Customize Runtime Settings](https://mmdetection3d.readthedocs.io/en/latest/advanced_guides/customize_runtime.html)\n\n\u003c/details\u003e\n\n## Overview of Benchmark and Model Zoo\n\nResults and models are available in the [model zoo](docs/en/model_zoo.md).\n\n\u003cdiv align=\"center\"\u003e\n  \u003cb\u003eComponents\u003c/b\u003e\n\u003c/div\u003e\n\u003ctable align=\"center\"\u003e\n  \u003ctbody\u003e\n    \u003ctr align=\"center\" valign=\"bottom\"\u003e\n      \u003ctd\u003e\n        \u003cb\u003eBackbones\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003eHeads\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003eFeatures\u003c/b\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr valign=\"top\"\u003e\n      \u003ctd\u003e\n      \u003cul\u003e\n        \u003cli\u003e\u003ca href=\"configs/pointnet2\"\u003ePointNet (CVPR'2017)\u003c/a\u003e\u003c/li\u003e\n        \u003cli\u003e\u003ca href=\"configs/pointnet2\"\u003ePointNet++ 
(NeurIPS'2017)\u003c/a\u003e\u003c/li\u003e\n        \u003cli\u003e\u003ca href=\"configs/regnet\"\u003eRegNet (CVPR'2020)\u003c/a\u003e\u003c/li\u003e\n        \u003cli\u003e\u003ca href=\"configs/dgcnn\"\u003eDGCNN (TOG'2019)\u003c/a\u003e\u003c/li\u003e\n        \u003cli\u003eDLA (CVPR'2018)\u003c/li\u003e\n        \u003cli\u003eMinkResNet (CVPR'2019)\u003c/li\u003e\n        \u003cli\u003e\u003ca href=\"configs/minkunet\"\u003eMinkUNet (CVPR'2019)\u003c/a\u003e\u003c/li\u003e\n        \u003cli\u003e\u003ca href=\"configs/cylinder3d\"\u003eCylinder3D (CVPR'2021)\u003c/a\u003e\u003c/li\u003e\n      \u003c/ul\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n      \u003cul\u003e\n        \u003cli\u003e\u003ca href=\"configs/free_anchor\"\u003eFreeAnchor (NeurIPS'2019)\u003c/a\u003e\u003c/li\u003e\n      \u003c/ul\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n      \u003cul\u003e\n        \u003cli\u003e\u003ca href=\"configs/dynamic_voxelization\"\u003eDynamic Voxelization (CoRL'2019)\u003c/a\u003e\u003c/li\u003e\n      \u003c/ul\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003cb\u003eArchitectures\u003c/b\u003e\n\u003c/div\u003e\n\u003ctable align=\"center\"\u003e\n  \u003ctbody\u003e\n    \u003ctr align=\"center\" valign=\"middle\"\u003e\n      \u003ctd\u003e\n        \u003cb\u003eLiDAR-based 3D Object Detection\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003eCamera-based 3D Object Detection\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003eMulti-modal 3D Object Detection\u003c/b\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cb\u003e3D Semantic Segmentation\u003c/b\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr valign=\"top\"\u003e\n      \u003ctd\u003e\n        \u003cli\u003e\u003cb\u003eOutdoor\u003c/b\u003e\u003c/li\u003e\n        
\u003cul\u003e\n            \u003cli\u003e\u003ca href=\"configs/second\"\u003eSECOND (Sensor'2018)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/pointpillars\"\u003ePointPillars (CVPR'2019)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/ssn\"\u003eSSN (ECCV'2020)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/3dssd\"\u003e3DSSD (CVPR'2020)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/sassd\"\u003eSA-SSD (CVPR'2020)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/point_rcnn\"\u003ePointRCNN (CVPR'2019)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/parta2\"\u003ePart-A2 (TPAMI'2020)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/centerpoint\"\u003eCenterPoint (CVPR'2021)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/pv_rcnn\"\u003ePV-RCNN (CVPR'2020)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"projects/CenterFormer\"\u003eCenterFormer (ECCV'2022)\u003c/a\u003e\u003c/li\u003e\n        \u003c/ul\u003e\n        \u003cli\u003e\u003cb\u003eIndoor\u003c/b\u003e\u003c/li\u003e\n        \u003cul\u003e\n            \u003cli\u003e\u003ca href=\"configs/votenet\"\u003eVoteNet (ICCV'2019)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/h3dnet\"\u003eH3DNet (ECCV'2020)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/groupfree3d\"\u003eGroup-Free-3D (ICCV'2021)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"configs/fcaf3d\"\u003eFCAF3D (ECCV'2022)\u003c/a\u003e\u003c/li\u003e\n            \u003cli\u003e\u003ca href=\"projects/TR3D\"\u003eTR3D (ArXiv'2023)\u003c/a\u003e\u003c/li\u003e\n      \u003c/ul\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        
\u003cli\u003e\u003cb\u003eOutdoor\u003c/b\u003e\u003c/li\u003e\n        \u003cul\u003e\n          \u003cli\u003e\u003ca href=\"configs/imvoxelnet\"\u003eImVoxelNet (WACV'2022)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/smoke\"\u003eSMOKE (CVPRW'2020)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/fcos3d\"\u003eFCOS3D (ICCVW'2021)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/pgd\"\u003ePGD (CoRL'2021)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/monoflex\"\u003eMonoFlex (CVPR'2021)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"projects/DETR3D\"\u003eDETR3D (CoRL'2021)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"projects/PETR\"\u003ePETR (ECCV'2022)\u003c/a\u003e\u003c/li\u003e\n        \u003c/ul\u003e\n        \u003cli\u003e\u003cb\u003eIndoor\u003c/b\u003e\u003c/li\u003e\n        \u003cul\u003e\n          \u003cli\u003e\u003ca href=\"configs/imvoxelnet\"\u003eImVoxelNet (WACV'2022)\u003c/a\u003e\u003c/li\u003e\n        \u003c/ul\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cli\u003e\u003cb\u003eOutdoor\u003c/b\u003e\u003c/li\u003e\n        \u003cul\u003e\n          \u003cli\u003e\u003ca href=\"configs/mvxnet\"\u003eMVXNet (ICRA'2019)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"projects/BEVFusion\"\u003eBEVFusion (ICRA'2023)\u003c/a\u003e\u003c/li\u003e\n        \u003c/ul\u003e\n        \u003cli\u003e\u003cb\u003eIndoor\u003c/b\u003e\u003c/li\u003e\n        \u003cul\u003e\n          \u003cli\u003e\u003ca href=\"configs/imvotenet\"\u003eImVoteNet (CVPR'2020)\u003c/a\u003e\u003c/li\u003e\n        \u003c/ul\u003e\n      \u003c/td\u003e\n      \u003ctd\u003e\n        \u003cli\u003e\u003cb\u003eOutdoor\u003c/b\u003e\u003c/li\u003e\n        \u003cul\u003e\n          \u003cli\u003e\u003ca href=\"configs/minkunet\"\u003eMinkUNet 
(CVPR'2019)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/spvcnn\"\u003eSPVCNN (ECCV'2020)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/cylinder3d\"\u003eCylinder3D (CVPR'2021)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"projects/TPVFormer\"\u003eTPVFormer (CVPR'2023)\u003c/a\u003e\u003c/li\u003e\n        \u003c/ul\u003e\n        \u003cli\u003e\u003cb\u003eIndoor\u003c/b\u003e\u003c/li\u003e\n        \u003cul\u003e\n          \u003cli\u003e\u003ca href=\"configs/pointnet2\"\u003ePointNet++ (NeurIPS'2017)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/paconv\"\u003ePAConv (CVPR'2021)\u003c/a\u003e\u003c/li\u003e\n          \u003cli\u003e\u003ca href=\"configs/dgcnn\"\u003eDGCNN (TOG'2019)\u003c/a\u003e\u003c/li\u003e\n        \u003c/ul\u003e\n      \u003c/ul\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\n|               | ResNet | VoVNet | Swin-T | PointNet++ | SECOND | DGCNN | RegNetX | DLA | MinkResNet | Cylinder3D | MinkUNet |\n| :-----------: | :----: | :----: | :----: | :--------: | :----: | :---: | :-----: | :-: | :--------: | :--------: | :------: |\n|    SECOND     |   ✗    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n| PointPillars  |   ✗    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✓    |  ✗  |     ✗      |     ✗      |    ✗     |\n|  FreeAnchor   |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✓    |  ✗  |     ✗      |     ✗      |    ✗     |\n|    VoteNet    |   ✗    |   ✗    |   ✗    |     ✓      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|    H3DNet     |   ✗    |   ✗    |   ✗    |     ✓      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|     3DSSD     |   ✗    |   ✗    |   ✗    |     ✓      |   ✗    |   ✗   |    
✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|    Part-A2    |   ✗    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|    MVXNet     |   ✓    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|  CenterPoint  |   ✗    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|      SSN      |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✓    |  ✗  |     ✗      |     ✗      |    ✗     |\n|   ImVoteNet   |   ✓    |   ✗    |   ✗    |     ✓      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|    FCOS3D     |   ✓    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|  PointNet++   |   ✗    |   ✗    |   ✗    |     ✓      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n| Group-Free-3D |   ✗    |   ✗    |   ✗    |     ✓      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|  ImVoxelNet   |   ✓    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|    PAConv     |   ✗    |   ✗    |   ✗    |     ✓      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|     DGCNN     |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✓   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|     SMOKE     |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✓  |     ✗      |     ✗      |    ✗     |\n|      PGD      |   ✓    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|   MonoFlex    |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✓  |     ✗      |     ✗      |    ✗     |\n|    SA-SSD     |   ✗    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|    FCAF3D   
  |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✓      |     ✗      |    ✗     |\n|    PV-RCNN    |   ✗    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|  Cylinder3D   |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✓      |    ✗     |\n|   MinkUNet    |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✓     |\n|    SPVCNN     |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✓     |\n|   BEVFusion   |   ✗    |   ✗    |   ✓    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n| CenterFormer  |   ✗    |   ✗    |   ✗    |     ✗      |   ✓    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|     TR3D      |   ✗    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✓      |     ✗      |    ✗     |\n|    DETR3D     |   ✓    |   ✓    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|     PETR      |   ✗    |   ✓    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n|   TPVFormer   |   ✓    |   ✗    |   ✗    |     ✗      |   ✗    |   ✗   |    ✗    |  ✗  |     ✗      |     ✗      |    ✗     |\n\n**Note:** All **500+ models and methods from 90+ papers** in 2D detection supported by [MMDetection](https://github.com/open-mmlab/mmdetection/blob/3.x/docs/en/model_zoo.md) can be trained or used in this codebase.\n\n## FAQ\n\nPlease refer to [FAQ](docs/en/notes/faq.md) for frequently asked questions.\n\n## Contributing\n\nWe appreciate all contributions to improve MMDetection3D. 
Please refer to [CONTRIBUTING.md](docs/en/notes/contribution_guides.md) for the contributing guidelines.\n\n## Acknowledgement\n\nMMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as users who give valuable feedback. We hope that the toolbox and benchmark serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new 3D detectors.\n\n## Citation\n\nIf you find this project useful in your research, please consider citing:\n\n```latex\n@misc{mmdet3d2020,\n    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},\n    author={MMDetection3D Contributors},\n    howpublished = {\\url{https://github.com/open-mmlab/mmdetection3d}},\n    year={2020}\n}\n```\n\n## License\n\nThis project is released under the [Apache 2.0 license](LICENSE).\n\n## Projects in OpenMMLab\n\n- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.\n- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.\n- [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries.\n- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.\n- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox and benchmark.\n- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.\n- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.\n- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.\n- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.\n- 
[MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.\n- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.\n- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.\n- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.\n- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.\n- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.\n- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.\n- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.\n- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.\n- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.\n- [MMagic](https://github.com/open-mmlab/mmagic): Open**MM**Lab **A**dvanced, **G**enerative and **I**ntelligent **C**reation toolbox.\n- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.\n- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.\n","funding_links":[],"categories":["Python","Topics","Sensor Processing","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Computer Vision","code base","Pytorch \u0026 related libraries","Famous CodeBase","对象检测_分割","💻 Open-Source Projects","Code list"],"sub_categories":["Perception","Lidar and Point Cloud Processing","CV｜计算机视觉:","General Purpose 
CV","keywords","CV:","Workshop","资源传输下载","Papers"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmdetection3d","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopen-mmlab%2Fmmdetection3d","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmdetection3d/lists"}