{"id":13443537,"url":"https://github.com/JenningsL/PointRCNN","last_synced_at":"2025-03-20T16:31:49.303Z","repository":{"id":91574237,"uuid":"164992051","full_name":"JenningsL/PointRCNN","owner":"JenningsL","description":"PointRCNN+Frustum Pointnet","archived":false,"fork":false,"pushed_at":"2019-05-13T07:12:18.000Z","size":2123,"stargazers_count":136,"open_issues_count":8,"forks_count":25,"subscribers_count":7,"default_branch":"master","last_synced_at":"2024-08-01T03:43:54.338Z","etag":null,"topics":["3d-object-detection","pointcloud"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/JenningsL.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2019-01-10T04:51:30.000Z","updated_at":"2024-06-04T07:55:25.000Z","dependencies_parsed_at":"2024-01-16T12:46:18.753Z","dependency_job_id":"8c268178-69b4-44ba-9fb8-cc24a7ee547e","html_url":"https://github.com/JenningsL/PointRCNN","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JenningsL%2FPointRCNN","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JenningsL%2FPointRCNN/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JenningsL%2FPointRCNN/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/JenningsL%2FPointRCNN/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/JenningsL","download_url":"https://codeload.github.com/JenningsL/PointRCNN/tar.gz/refs/heads
/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221780037,"owners_count":16879040,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d-object-detection","pointcloud"],"created_at":"2024-07-31T03:02:03.201Z","updated_at":"2024-10-28T04:31:16.746Z","avatar_url":"https://github.com/JenningsL.png","language":"Python","readme":"# PointRCNN\nThis is **not** the official implementation of PointRCNN. We add an image segmentation network to improve recall of point cloud segmentation. The 2-stage network is frustum pointNet. Any pull request is appreciated.\n## Introduction\nA 3D object detector that takes point cloud and RGB image(optional) as input.  \n\n## Results\n[![video1](https://i.ytimg.com/vi/T-LzoQpt2N4/sddefault.jpg?sqp=-oaymwEjCNACELwBSFryq4qpAxUIARUAAAAAGAElAADIQj0AgKJDeAE=\u0026rs=AOn4CLA5aI2BbvOQ5gVRctaG5pO9azh0Eg)](https://youtu.be/T-LzoQpt2N4)\n[![video2](https://i.ytimg.com/vi/CVSs2cEkKgk/sddefault.jpg?sqp=-oaymwEjCNACELwBSFryq4qpAxUIARUAAAAAGAElAADIQj0AgKJDeAE=\u0026rs=AOn4CLCD9mHA906oK0XJFPlIubSJNuWzMQ)](https://youtu.be/CVSs2cEkKgk)\n\n## Architecture\n1. Perform foreground point segmentation on the whole point cloud\n2. Output a 3D proposal box for every foreground point\n3. 
Crop the point cloud with the proposal boxes and feed the crops into the second-stage classification and box refinement network\n![](images/architecture2.png)\n\n## Usage\n### Dependencies\n+ python2.7\n+ tensorflow (1.10.0)\n+ shapely\n+ mayavi\n+ opencv-python\n+ Compile the TensorFlow operators for PointNet following https://github.com/charlesq34/frustum-pointnets\n\n### Data Preparation\nFor training and validation, download the KITTI 3D object detection dataset and arrange the folders as\n\n```\ndataset/KITTI/object/\n  training/\n    calib/\n    image_2/\n    label_2/\n    velodyne/\n\n  testing/\n    calib/\n    image_2/\n    velodyne/\n```\n\nFor testing, download a KITTI video sequence and its calibration files, and arrange the folders as\n\n```\n2011_10_03/\n  calib_cam_to_cam.txt\n  calib_imu_to_velo.txt\n  calib_velo_to_cam.txt\n  image_02/\n  velodyne_points/\n```\n\n**[Optional] Scene Augmentation**\n\nOur implementation also supports using augmented scene point clouds for training the RPN; please refer to the official implementation of [PointRCNN](https://github.com/sshaoshuai/PointRCNN). After generating the data, just put the `aug_scene/` folder under `dataset/KITTI/object`. If you don't want to use it, set `use_aug_scene=False` when using `rpn_dataset`.\n\n**Image segmentation annotation**\n\nOur image segmentation network is the official DeepLabV3+ implementation. The semantic segmentation annotation is obtained by the following steps:\n+ Point cloud completion using [ip_basic](https://github.com/kujason/ip_basic)\n+ Project 3D points onto the image plane to get the segmentation annotation\n\nCode for fine-tuning can be found at [deeplab_kitti_object](https://github.com/JenningsL/deeplab_kitti_object). 
Or you can just use your own image segmentation network.\n\n### Train\nThere are three sub-models to be trained.\n\n**Region Proposal Network**\n\n```\nsh train_rpn.sh\n```\n\n**Frustum PointNet**\n\nBefore training the second-stage network, we need to save the outputs of the RPN and the image segmentation network to disk first.\n\n**Image Segmentation Network**\n\nFor now, DeepLabV3+ is used, fine-tuned on the KITTI 3D object detection dataset.\n\n### Evaluate\n\n**Region Proposal Network**\n\n```\nsh test_rpn.sh\n```\n\nThis will save the outputs of the RPN and the image segmentation network to `./rcnn_data_train` for training the RCNN network.\n\n**Frustum PointNet**\n\n```\nsh test_frustum.sh\n```\n\n### Test\n\n**End to end**\n\n```\nsh test.sh\n```\n\n## Evaluation\n### Point cloud segmentation\n|    Method  | Coverage | Recall | Precision |\n| ---------- | -------- | ------ | --------- |\n| Point Only | 89.7%    | 93.4%  | 82.2%     |\n| Point+Image| 93.5%    | 97.0%  | 76.6%     |\n\nCoverage is the percentage of objects that have at least one point detected.\n\n### Recall of RPN\nSetting: IoU \u003e= 0.5, 100 proposals\n\n|    Method  | 3-Class Recall      | Car Moderate | Pedestrian Moderate | Cyclist Moderate |\n| ---------- | ------------------- | ------------ | ------------------- | ---------------- |\n| Point+Image|                 89% | 96%          | 77%                 | 52%              |\n\n### AP on Val Set\n\n|    Class   | 3D mAP (Easy, Moderate, Hard) | BEV mAP (Easy, Moderate, Hard) |\n| ---------- | ----------------------------- |--------------------------------|\n| Car        | 76.56 70.20 64.00 | 86.32 78.42 78.07 |\n| Pedestrian | 70.23 63.09 55.77 | 73.34 65.86 57.94 |\n| Cyclist    | 76.89 50.91 50.28 | 78.27 59.00 51.63 |\n\n## Pretrained Models\n\n| Model | Link |\n| ----- | ---- |\n| RPN |[log_rpn.zip](https://drive.google.com/open?id=1xeBRkwGeF55O41_aht_ROB3wcwnCThHU)| \n| Image SegNet 
|[log_rpn.zip](https://drive.google.com/open?id=1LhR5p1klFX36IV0hAb54q66pOIsWfTNw)| \n| Frustum PointNet |[log_frustum.zip](https://drive.google.com/open?id=1K5cUgxwLvEDOKDkuYMbYPLa3FGbxGKr3)| \n\n## Reference\n- [Frustum PointNets for 3D Object Detection from RGB-D Data](https://arxiv.org/abs/1711.08488)\n- [PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud](https://arxiv.org/abs/1812.04244)\n","funding_links":[],"categories":["Python","Repositories"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJenningsL%2FPointRCNN","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FJenningsL%2FPointRCNN","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJenningsL%2FPointRCNN/lists"}