{"id":13444005,"url":"https://github.com/traveller59/second.pytorch","last_synced_at":"2025-05-15T16:04:56.680Z","repository":{"id":37979364,"uuid":"151491863","full_name":"traveller59/second.pytorch","owner":"traveller59","description":"SECOND for KITTI/NuScenes object detection","archived":false,"fork":false,"pushed_at":"2022-10-14T08:05:18.000Z","size":4622,"stargazers_count":1747,"open_issues_count":299,"forks_count":719,"subscribers_count":46,"default_branch":"master","last_synced_at":"2025-04-07T21:13:26.938Z","etag":null,"topics":["kitti","nuscenes","object-detection","voxelnet"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/traveller59.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-10-03T23:05:52.000Z","updated_at":"2025-03-31T07:53:03.000Z","dependencies_parsed_at":"2022-08-08T22:45:26.102Z","dependency_job_id":null,"html_url":"https://github.com/traveller59/second.pytorch","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/traveller59%2Fsecond.pytorch","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/traveller59%2Fsecond.pytorch/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/traveller59%2Fsecond.pytorch/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/traveller59%2Fsecond.pytorch/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/traveller59","download_url":"https://codeload.github.com/traveller59/second.pyto
rch/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254374410,"owners_count":22060610,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["kitti","nuscenes","object-detection","voxelnet"],"created_at":"2024-07-31T03:02:16.432Z","updated_at":"2025-05-15T16:04:56.655Z","avatar_url":"https://github.com/traveller59.png","language":"Python","readme":"# This project is DEPRECATED; please use [OpenPCDet](https://github.com/open-mmlab/OpenPCDet) or [mmdetection3d](https://github.com/open-mmlab/mmdetection3d) instead. Both implement SECOND and support spconv 2.x.\n\n# SECOND for KITTI/NuScenes object detection (1.6.0 Alpha)\nSECOND detector.\n\n\n\"Alpha\" means there may be many bugs, and the config format and spconv API may change.\n\nONLY Python 3.6+ and PyTorch 1.0.0+ are supported. Tested on Ubuntu 16.04/18.04 and Windows 10.\n\nIf you want to train on the NuScenes dataset, see [this guide](NUSCENES-GUIDE.md).\n\n## News\n\n2019-4-1: SECOND V1.6.0alpha released: new data API, [NuScenes](https://www.nuscenes.org) support, [PointPillars](https://github.com/nutonomy/second.pytorch) support, fp16 and multi-GPU support.\n\n2019-3-21: SECOND V1.5.1 (minor improvements and bug fixes) released!\n\n2019-1-20: SECOND V1.5 released! 
Sparse convolution-based network.\n\nSee [release notes](RELEASE.md) for more details.\n\n_WARNING_: You should rerun info generation after every code update.\n\n### Performance on the KITTI validation set (50/50 split)\n\n```car.fhd.config``` + 160 epochs (25 fps on a 1080Ti):\n\n```\nCar AP@0.70, 0.70, 0.70:\nbbox AP:90.77, 89.50, 80.80\nbev  AP:90.28, 87.73, 79.67\n3d   AP:88.84, 78.43, 76.88\n```\n\n```car.fhd.config``` + 50 epochs + super convergence (6.5 hours, 25 fps on a 1080Ti):\n\n```\nCar AP@0.70, 0.70, 0.70:\nbbox AP:90.78, 89.59, 88.42\nbev  AP:90.12, 87.87, 86.77\n3d   AP:88.62, 78.31, 76.62\n```\n\n```car.fhd.onestage.config``` + 50 epochs + super convergence (6.5 hours, 25 fps on a 1080Ti):\n\n```\nCar AP@0.70, 0.70, 0.70:\nbbox AP:97.65, 89.59, 88.72\nbev  AP:90.38, 88.20, 86.98\n3d   AP:89.16, 78.78, 77.41\n```\n\n### Performance on the NuScenes validation set (all.pp.config, NuScenes mini train set, 3517 samples, not v1.0-mini)\n\n```\ncar Nusc dist AP@0.5, 1.0, 2.0, 4.0\n62.90, 73.07, 76.77, 78.79\nbicycle Nusc dist AP@0.5, 1.0, 2.0, 4.0\n0.00, 0.00, 0.00, 0.00\nbus Nusc dist AP@0.5, 1.0, 2.0, 4.0\n9.53, 26.17, 38.01, 40.60\nconstruction_vehicle Nusc dist AP@0.5, 1.0, 2.0, 4.0\n0.00, 0.00, 0.44, 1.43\nmotorcycle Nusc dist AP@0.5, 1.0, 2.0, 4.0\n9.25, 12.90, 13.69, 14.11\npedestrian Nusc dist AP@0.5, 1.0, 2.0, 4.0\n61.44, 62.61, 64.09, 66.35\ntraffic_cone Nusc dist AP@0.5, 1.0, 2.0, 4.0\n11.63, 13.14, 15.81, 21.22\ntrailer Nusc dist AP@0.5, 1.0, 2.0, 4.0\n0.80, 9.90, 17.61, 23.26\ntruck Nusc dist AP@0.5, 1.0, 2.0, 4.0\n9.81, 21.40, 27.55, 30.34\n```\n\n## Install\n\n### 1. Clone code\n\n```bash\ngit clone https://github.com/traveller59/second.pytorch.git\ncd ./second.pytorch/second\n```\n\n### 2. 
Install dependency Python packages\n\nIt is recommended to use the Anaconda package manager.\n\n```bash\nconda install scikit-image scipy numba pillow matplotlib\n```\n\n```bash\npip install fire tensorboardX protobuf opencv-python\n```\n\nIf you don't have Anaconda:\n\n```bash\npip install numba scikit-image scipy pillow\n```\n\nFollow the instructions in [spconv](https://github.com/traveller59/spconv) to install spconv.\n\nIf you want to train with fp16 mixed precision (training is faster on RTX-series, Titan V/RTX, and Tesla V100 GPUs, but I only have a 1080Ti), you need to install [apex](https://github.com/NVIDIA/apex).\n\nIf you want to use the NuScenes dataset, you need to install the [nuscenes-devkit](https://github.com/nutonomy/nuscenes-devkit).\n\n### 3. Set up CUDA for numba (will be removed in the 1.6.0 release)\n\nYou need to add the following environment variables for numba.cuda; you can add them to ~/.bashrc:\n\n```bash\nexport NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so\nexport NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so\nexport NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice\n```\n\n### 4. 
Add second.pytorch/ to PYTHONPATH\n\n## Prepare dataset\n\n* KITTI Dataset preparation\n\nDownload the KITTI dataset and create the following directories first:\n\n```plain\n└── KITTI_DATASET_ROOT\n       ├── training    \u003c-- 7481 train data\n       |   ├── image_2 \u003c-- for visualization\n       |   ├── calib\n       |   ├── label_2\n       |   ├── velodyne\n       |   └── velodyne_reduced \u003c-- empty directory\n       └── testing     \u003c-- 7580 test data\n           ├── image_2 \u003c-- for visualization\n           ├── calib\n           ├── velodyne\n           └── velodyne_reduced \u003c-- empty directory\n```\n\nThen run:\n```bash\npython create_data.py kitti_data_prep --data_path=KITTI_DATASET_ROOT\n```\n\n* [NuScenes](https://www.nuscenes.org) Dataset preparation\n\nDownload the NuScenes dataset:\n```plain\n└── NUSCENES_TRAINVAL_DATASET_ROOT\n       ├── samples       \u003c-- key frames\n       ├── sweeps        \u003c-- frames without annotation\n       ├── maps          \u003c-- unused\n       └── v1.0-trainval \u003c-- metadata and annotations\n└── NUSCENES_TEST_DATASET_ROOT\n       ├── samples       \u003c-- key frames\n       ├── sweeps        \u003c-- frames without annotation\n       ├── maps          \u003c-- unused\n       └── v1.0-test     \u003c-- metadata\n```\n\nThen run:\n```bash\npython create_data.py nuscenes_data_prep --data_path=NUSCENES_TRAINVAL_DATASET_ROOT --version=\"v1.0-trainval\" --max_sweeps=10\npython create_data.py nuscenes_data_prep --data_path=NUSCENES_TEST_DATASET_ROOT --version=\"v1.0-test\" --max_sweeps=10 --dataset_name=\"NuscenesDataset\"\n```\nThis will create a ground-truth database **without velocity**. 
To add velocity, use the dataset name ```NuscenesDatasetVelo```.\n\n* Modify config file\n\nSome paths need to be configured in the config file:\n\n```plain\ntrain_input_reader: {\n  ...\n  database_sampler {\n    database_info_path: \"/path/to/dataset_dbinfos_train.pkl\"\n    ...\n  }\n  dataset: {\n    dataset_class_name: \"DATASET_NAME\"\n    kitti_info_path: \"/path/to/dataset_infos_train.pkl\"\n    kitti_root_path: \"DATASET_ROOT\"\n  }\n}\n...\neval_input_reader: {\n  ...\n  dataset: {\n    dataset_class_name: \"DATASET_NAME\"\n    kitti_info_path: \"/path/to/dataset_infos_val.pkl\"\n    kitti_root_path: \"DATASET_ROOT\"\n  }\n}\n```\n\n## Usage\n\n### train\n\nI recommend using script.py to train and evaluate; see script.py for more details.\n\n#### train with single GPU\n\n```bash\npython ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir\n```\n\n#### train with multiple GPUs (needs testing; I only have one GPU)\n\nAssume you have 4 GPUs and want to train with 3 of them:\n\n```bash\nCUDA_VISIBLE_DEVICES=0,1,3 python ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir --multi_gpu=True\n```\n\nNote: The batch_size and num_workers in the config file are per-GPU; if you use multi-GPU, they are multiplied by the number of GPUs. Don't modify them manually.\n\nYou need to modify the total steps in the config file. For example, 50 epochs = 15500 steps for car.lite.config on a single GPU; if you use 4 GPUs, divide ```steps``` and ```steps_per_eval``` by 4.\n\n#### train with fp16 (mixed precision)\n\nModify the config file and set enable_mixed_precision to true.\n\n* Make sure \"/path/to/model_dir\" doesn't exist if you want to train a new model. 
A new directory will be created if the model_dir doesn't exist; otherwise, checkpoints in it will be read.\n\n* The training process uses a batch size of 6 by default for a 1080Ti; you need to reduce the batch size if your GPU has less memory.\n\n* Currently only single-GPU training is supported, but training a model needs only 20 hours (165 epochs) on a single 1080Ti, and only 50 epochs with super convergence are needed to reach 78.3 AP on car moderate 3D on the KITTI validation set.\n\n### evaluate\n\n```bash\npython ./pytorch/train.py evaluate --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir --measure_time=True --batch_size=1\n```\n\n* Detection results will be saved as a result.pkl file in model_dir/eval_results/step_xxx, or in the official KITTI label format if you use --pickle_result=False.\n\n### pretrained model\n\nYou can download pretrained models from [google drive](https://drive.google.com/open?id=1YOpgRkBgmSAJwMknoXmitEArNitZz63C). The ```car_fhd``` model corresponds to car.fhd.config.\n\nNote that this pretrained model was trained before a sparse convolution bug was fixed, so the eval results may be slightly worse.\n\n## Docker (Deprecated; I can't push the Docker image due to network problems.)\n\nYou can use a prebuilt Docker image for testing:\n```\ndocker pull scrin/second-pytorch\n```\nThen run:\n```\nnvidia-docker run -it --rm -v /media/yy/960evo/datasets/:/root/data -v $HOME/pretrained_models:/root/model --ipc=host second-pytorch:latest\npython ./pytorch/train.py evaluate --config_path=./configs/car.config --model_dir=/root/model/car\n```\n\n## Try Kitti Viewer Web\n\n### Major steps\n\n1. Run ```python ./kittiviewer/backend/main.py main --port=xxxx``` on your server or locally.\n\n2. Run ```cd ./kittiviewer/frontend \u0026\u0026 python -m http.server``` to launch a local web server.\n\n3. Open your browser and enter your frontend URL (e.g. http://127.0.0.1:8000 by default).\n\n4. Input the backend URL (e.g. http://127.0.0.1:16666).\n\n5. Input the root path, info path, and det path (optional).\n\n6. 
Click load and loadDet (optional), input the image index at the bottom center of the screen, and press Enter.\n\n### Inference steps\n\nFirst, the load button must be clicked and the data must load successfully.\n\n1. Input checkpointPath and configPath.\n\n2. Click buildNet.\n\n3. Click inference.\n\n![GuidePic](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/viewerweb.png)\n\n\n\n## Try Kitti Viewer (Deprecated)\n\nYou should use the KITTI viewer, based on PyQt and pyqtgraph, to check data before training.\n\nRun ```python ./kittiviewer/viewer.py``` and check the following picture to use the KITTI viewer:\n![GuidePic](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/simpleguide.png)\n\n## Concepts\n\n\n* Kitti lidar box\n\nA KITTI lidar box consists of 7 elements: [x, y, z, w, l, h, rz]; see the figure.\n\n![Kitti Box Image](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/kittibox.png)\n\nAll training and inference code uses the KITTI box format, so we need to convert other formats to the KITTI format before training.\n\n* Kitti camera box\n\nA KITTI camera box consists of 7 elements: [x, y, z, l, h, w, ry].\n","funding_links":[],"categories":["Python","code base","三、LiDAR-based BEV","SOTA代码","Code list"],"sub_categories":["keywords","1. List of LiDAR-based BEV sensing methods","物体检测|Object Detection"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftraveller59%2Fsecond.pytorch","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftraveller59%2Fsecond.pytorch","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftraveller59%2Fsecond.pytorch/lists"}