{"id":13443518,"url":"https://github.com/maudzung/CenterNet3D-PyTorch","last_synced_at":"2025-03-20T16:31:39.299Z","repository":{"id":108977741,"uuid":"288123334","full_name":"maudzung/CenterNet3D-PyTorch","owner":"maudzung","description":"Unofficial PyTorch implementation of the paper: \"CenterNet3D: An Anchor free Object Detector for Autonomous Driving\"","archived":false,"fork":false,"pushed_at":"2020-08-20T09:15:41.000Z","size":1164,"stargazers_count":71,"open_issues_count":1,"forks_count":13,"subscribers_count":5,"default_branch":"master","last_synced_at":"2024-08-01T03:43:52.892Z","etag":null,"topics":["3d-object-detection","centernet","centernet3d","lidar","object-detection","sparse-convolution","spconv","voxel","voxelnet"],"latest_commit_sha":null,"homepage":"https://arxiv.org/pdf/2007.07214.pdf","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/maudzung.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-08-17T08:19:47.000Z","updated_at":"2024-07-22T22:18:45.000Z","dependencies_parsed_at":"2023-03-21T22:47:19.197Z","dependency_job_id":null,"html_url":"https://github.com/maudzung/CenterNet3D-PyTorch","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maudzung%2FCenterNet3D-PyTorch","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maudzung%2FCenterNet3D-PyTorch/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maudzung%2FCenterNet3D-PyTorch/releases","manifests_url":"https
://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maudzung%2FCenterNet3D-PyTorch/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/maudzung","download_url":"https://codeload.github.com/maudzung/CenterNet3D-PyTorch/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221780030,"owners_count":16879040,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d-object-detection","centernet","centernet3d","lidar","object-detection","sparse-convolution","spconv","voxel","voxelnet"],"created_at":"2024-07-31T03:02:02.753Z","updated_at":"2024-10-28T04:31:13.186Z","avatar_url":"https://github.com/maudzung.png","language":"Python","readme":"# CenterNet3D-PyTorch\n\n[![python-image]][python-url]\n[![pytorch-image]][pytorch-url]\n\nThe PyTorch Implementation of the paper: \n[CenterNet3D: An Anchor free Object Detector for Autonomous Driving](https://arxiv.org/pdf/2007.07214.pdf)\n\n---\n\n## 1. Features\n- [x] LiDAR-based real-time 3D object detection\n- [x] Support [distributed data parallel training](https://github.com/pytorch/examples/tree/master/distributed/ddp)\n- [ ] Release pre-trained models \n\n## 2. Getting Started\n### 2.1. Requirements\n\n```shell script\npip install -U -r requirements.txt\n```\n\n- For the [`mayavi`](https://docs.enthought.com/mayavi/mayavi/installation.html) library, please refer to the installation instructions from its official website.\n\n- To build the `CenterNet3D` model, I have used the [spconv](https://github.com/traveller59/spconv) library. 
Please follow the \ninstructions from the repo to install the library. I also wrote notes for the installation [here](./INSTALL_spconv.md).\n\n### 2.2. Data Preparation\nDownload the 3D KITTI detection dataset from [here](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d).\n\nThe downloaded data includes:\n\n- Velodyne point clouds _**(29 GB)**_\n- Training labels of object data set _**(5 MB)**_\n- Camera calibration matrices of object data set _**(16 MB)**_\n- **Left color images** of object data set _**(12 GB)**_\n\n\nPlease make sure that you organize the source code \u0026 dataset directories as shown in the _Folder structure_ section below.\n\n### 2.3. CenterNet3D architecture\n\n\n![architecture](./docs/centernet3d_architecture.png)\n\n\n\n### 2.4. How to run\n\n#### 2.4.1. Visualize the dataset \n\n```shell script\ncd src/data_process\n```\n\n- To visualize 3D point clouds with 3D boxes, execute:\n\n```shell script\npython kitti_dataset.py\n```\n\n\n*An example of the KITTI dataset:*\n\n![example](./docs/grid_example5.png)\n\n#### 2.4.2. Inference\n\n```shell script\npython test.py --gpu_idx 0 --peak_thresh 0.2\n```\n\n\n#### 2.4.3. Training\n\n##### 2.4.3.1. Single machine, single GPU\n\n```shell script\npython train.py --gpu_idx 0 --batch_size \u003cN\u003e --num_workers \u003cN\u003e...\n```\n\n##### 2.4.3.2. 
Multi-processing Distributed Data Parallel Training\nWe should always use the `nccl` backend for multi-processing distributed training since it currently provides the best \ndistributed training performance.\n\n- **Single machine (node), multiple GPUs**\n\n```shell script\npython train.py --dist-url 'tcp://127.0.0.1:29500' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0\n```\n\n- **Two machines (two nodes), multiple GPUs**\n\n_**First machine**_\n\n```shell script\npython train.py --dist-url 'tcp://IP_OF_NODE1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 0\n```\n\n_**Second machine**_\n\n```shell script\npython train.py --dist-url 'tcp://IP_OF_NODE2:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 2 --rank 1\n```\n\nTo reproduce the results, you can run the bash shell script:\n\n```bash\n./train.sh\n```\n\n\n#### Tensorboard\n\n- To track the training progress, go to the `logs/` folder and run:\n\n```shell script\ncd logs/\u003csaved_fn\u003e/tensorboard/\ntensorboard --logdir=./\n```\n\n- Then go to [http://localhost:6006/](http://localhost:6006/).\n\n\n## Contact\n\nIf you think this work is useful, please give me a star! \u003cbr\u003e\nIf you find any errors or have any suggestions, please contact me (**Email:** `nguyenmaudung93.kstn@gmail.com`). 
\u003cbr\u003e\nThank you!\n\n\n## Citation\n\n```bibtex\n@article{CenterNet3D,\n  author = {Guojun Wang and Bin Tian and Yunfeng Ai and Tong Xu and Long Chen and Dongpu Cao},\n  title = {CenterNet3D: An Anchor free Object Detector for Autonomous Driving},\n  year = {2020},\n  journal = {arXiv},\n}\n@misc{CenterNet3D-PyTorch,\n  author =       {Nguyen Mau Dung},\n  title =        {{CenterNet3D-PyTorch: PyTorch Implementation of the CenterNet3D paper}},\n  howpublished = {\\url{https://github.com/maudzung/CenterNet3D-PyTorch}},\n  year =         {2020}\n}\n```\n\n## References\n\n[1] CenterNet: [Objects as Points paper](https://arxiv.org/abs/1904.07850), [PyTorch Implementation](https://github.com/xingyizhou/CenterNet)\n\n[2] VoxelNet: [PyTorch Implementation](https://github.com/skyhehe123/VoxelNet-pytorch)\n\n## Folder structure\n\n```\n${ROOT}\n├── checkpoints/\n│   └── centernet3d.pth\n├── dataset/\n│   └── kitti/\n│       ├── ImageSets/\n│       │   ├── test.txt\n│       │   ├── train.txt\n│       │   └── val.txt\n│       ├── training/\n│       │   ├── image_2/ (left color camera)\n│       │   ├── calib/\n│       │   ├── label_2/\n│       │   └── velodyne/\n│       ├── testing/\n│       │   ├── image_2/ (left color camera)\n│       │   ├── calib/\n│       │   └── velodyne/\n│       └── classes_names.txt\n├── src/\n│   ├── config/\n│   │   ├── train_config.py\n│   │   └── kitti_config.py\n│   ├── data_process/\n│   │   ├── kitti_dataloader.py\n│   │   ├── kitti_dataset.py\n│   │   └── kitti_data_utils.py\n│   ├── models/\n│   │   ├── centernet3d.py\n│   │   ├── deform_conv_v2.py\n│   │   └── model_utils.py\n│   ├── utils/\n│   │   ├── evaluation_utils.py\n│   │   ├── logger.py\n│   │   ├── misc.py\n│   │   ├── torch_utils.py\n│   │   └── train_utils.py\n│   ├── evaluate.py\n│   ├── test.py\n│   ├── train.py\n│   └── train.sh\n├── README.md\n└── requirements.txt\n```\n\n\n\n[python-image]: https://img.shields.io/badge/Python-3.6-ff69b4.svg\n[python-url]: 
https://www.python.org/\n[pytorch-image]: https://img.shields.io/badge/PyTorch-1.5-2BAF2B.svg\n[pytorch-url]: https://pytorch.org/","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmaudzung%2FCenterNet3D-PyTorch","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmaudzung%2FCenterNet3D-PyTorch","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmaudzung%2FCenterNet3D-PyTorch/lists"}