{"id":13443873,"url":"https://github.com/QingyongHu/SpinNet","last_synced_at":"2025-03-20T17:32:03.477Z","repository":{"id":37673000,"uuid":"313881808","full_name":"QingyongHu/SpinNet","owner":"QingyongHu","description":"[CVPR 2021] SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration","archived":false,"fork":false,"pushed_at":"2021-08-04T08:10:30.000Z","size":10487,"stargazers_count":282,"open_issues_count":21,"forks_count":35,"subscribers_count":14,"default_branch":"main","last_synced_at":"2025-03-10T07:05:01.416Z","etag":null,"topics":["3dmatch","descriptor","generalization","kitti","large-scale","pointcloud","pytorch-implementation","registration"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/QingyongHu.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-11-18T09:18:56.000Z","updated_at":"2025-02-14T03:23:34.000Z","dependencies_parsed_at":"2022-07-12T16:42:52.388Z","dependency_job_id":null,"html_url":"https://github.com/QingyongHu/SpinNet","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/QingyongHu%2FSpinNet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/QingyongHu%2FSpinNet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/QingyongHu%2FSpinNet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/QingyongHu%2FSpinNet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/QingyongHu","download_url":"
https://codeload.github.com/QingyongHu/SpinNet/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244660626,"owners_count":20489367,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3dmatch","descriptor","generalization","kitti","large-scale","pointcloud","pytorch-implementation","registration"],"created_at":"2024-07-31T03:02:12.610Z","updated_at":"2025-03-20T17:32:03.471Z","avatar_url":"https://github.com/QingyongHu.png","language":"Python","readme":"[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/spinnet-learning-a-general-surface-descriptor/point-cloud-registration-on-3dmatch-benchmark)](https://paperswithcode.com/sota/point-cloud-registration-on-3dmatch-benchmark?p=spinnet-learning-a-general-surface-descriptor)\n[![License CC BY-NC-SA 4.0](https://img.shields.io/badge/license-CC4.0-blue.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)\n[![arXiv](https://img.shields.io/badge/arXiv-2011.12149-b31b1b.svg)](https://arxiv.org/abs/2011.12149)\n# SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration (CVPR 2021)\n\nThis is the official repository of **SpinNet**, a conceptually simple neural architecture to extract local \nfeatures which are rotationally invariant whilst sufficiently informative to enable accurate registration. 
For technical details, please refer to:\n\n**[SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration](https://arxiv.org/abs/2011.12149)**  \u003cbr /\u003e\n[Sheng Ao*](http://scholar.google.com/citations?user=cvS1yuMAAAAJ\u0026hl=zh-CN), [Qingyong Hu*](https://www.cs.ox.ac.uk/people/qingyong.hu/), [Bo Yang](https://yang7879.github.io/), [Andrew Markham](https://www.cs.ox.ac.uk/people/andrew.markham/), [Yulan Guo](http://yulanguo.me/). \u003cbr /\u003e\n(* *indicates equal contribution*)\n\n**[[Paper](https://arxiv.org/abs/2011.12149)] [Video] [Project page]** \u003cbr /\u003e\n\n\n### (1) Overview\n\n\u003cp align=\"center\"\u003e \u003cimg src=\"figs/Fig2.png\" width=\"100%\"\u003e \u003c/p\u003e\n\n\u003cp align=\"center\"\u003e \u003cimg src=\"figs/Fig3.png\" width=\"100%\"\u003e \u003c/p\u003e\n\n\n\n### (2) Setup\nThis code has been tested with Python 3.6, PyTorch 1.6.0, and CUDA 10.2 on Ubuntu 18.04.\n \n- Clone the repository\n```\ngit clone https://github.com/QingyongHu/SpinNet \u0026\u0026 cd SpinNet\n```\n- Set up the conda virtual environment\n```\nconda create -n spinnet python=3.6\nsource activate spinnet\nconda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch\nconda install -c open3d-admin open3d==0.11.1\npip install \"git+git://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops\u0026subdirectory=pointnet2_ops_lib\"\n```\n\n### (3) 3DMatch\nDownload the processed dataset from [Google Drive](https://drive.google.com/file/d/1PrkSE0nY79gOF_VJcKv2VpxQ8s7DOITg/view?usp=sharing) or [Baidu Yun](https://pan.baidu.com/s/1FB7IUbKAAlk7RVnB_AgwcQ) (Verification code: d1vn) and put the folder into `data`. 
\nThen the structure should be as follows:\n```\n--data--3DMatch--fragments\n              |--intermediate-files-real\n              |--patches\n\n```\n\n**Training**\n\nTrain SpinNet on the 3DMatch dataset:\n```\ncd ./ThreeDMatch/Train\npython train.py\n```\n**Testing**\n\nEvaluate the performance of the trained models on the 3DMatch dataset:\n\n```\ncd ./ThreeDMatch/Test\npython preparation.py\n```\nThe learned descriptors for each point will be saved in the `ThreeDMatch/Test/SpinNet_{timestr}/` folder. \nThen the `Feature Matching Recall (FMR)` and `Inlier Ratio (IR)` can be calculated by running:\n```\npython evaluate.py [timestr]\n```\nThe ground truth poses have been put in the `ThreeDMatch/Test/gt_result` folder. \nThe `Registration Recall` can be calculated by running `evaluate.m` in `ThreeDMatch/Test/3dmatch`, which is provided by [3DMatch](https://github.com/andyzeng/3dmatch-toolbox/tree/master/evaluation/geometric-registration).\nNote that you need to change `descriptorName` to `SpinNet_{timestr}` in the `ThreeDMatch/Test/3dmatch/evaluate.m` file.\n\n\n### (4) KITTI\nDownload the processed dataset from [Google Drive](https://drive.google.com/file/d/1fuJiQwAay23BUKtxBG3__MwStyMuvrMQ/view?usp=sharing) or [Baidu Yun](https://pan.baidu.com/s/1FB7IUbKAAlk7RVnB_AgwcQ) (Verification code: d1vn), and put the folder into `data`. 
\nThen the structure is as follows:\n```\n--data--KITTI--dataset\n            |--icp\n            |--patches\n\n```\n\n**Training**\n\nTrain SpinNet on the KITTI dataset:\n\n```\ncd ./KITTI/Train/\npython train.py\n```\n\n**Testing**\n\nEvaluate the performance of the trained models on the KITTI dataset:\n\n```\ncd ./KITTI/Test/\npython test_kitti.py\n```\n\n\n### (5) ETH\n\nThe test set can be downloaded from [here](https://share.phys.ethz.ch/~gsg/3DSmoothNet/data/ETH.rar). Put the folder into `data`; the structure is then as follows:\n```\n--data--ETH--gazebo_summer\n          |--gazebo_winter\n          |--wood_autmn\n          |--wood_summer\n```\n\n### (6) Generalization across Unseen Datasets \n\n**3DMatch to ETH**\n\nGeneralization from the 3DMatch dataset to the ETH dataset:\n```\ncd ./generalization/ThreeDMatch-to-ETH\npython preparation.py\n```\nThe descriptors for each point will be generated and saved in the `generalization/ThreeDMatch-to-ETH/SpinNet_{timestr}/` folder. \nThen the `Feature Matching Recall` and `Inlier Ratio` can be calculated by running:\n```\npython evaluate.py [timestr]\n```\n\n**3DMatch to KITTI**\n\nGeneralization from the 3DMatch dataset to the KITTI dataset:\n\n```\ncd ./generalization/ThreeDMatch-to-KITTI\npython test.py\n```\n\n**KITTI to 3DMatch**\n\nGeneralization from the KITTI dataset to the 3DMatch dataset:\n```\ncd ./generalization/KITTI-to-ThreeDMatch\npython preparation.py\n```\nThe descriptors for each point will be generated and saved in the `generalization/KITTI-to-3DMatch/SpinNet_{timestr}/` folder. 
\nThen the `Feature Matching Recall` and `Inlier Ratio` can be calculated by running:\n```\npython evaluate.py [timestr]\n```\n\n## Acknowledgement\n\nIn this project, we use (parts of) the implementations of the following works:\n\n* [Pointnet2_PyTorch](https://github.com/erikwijmans/Pointnet2_PyTorch)\n* [PPF-FoldNet](https://github.com/XuyangBai/PPF-FoldNet)\n* [Spherical CNNs](https://github.com/jonas-koehler/s2cnn)\n* [FCGF](https://github.com/chrischoy/FCGF)\n* [r2d2](https://github.com/naver/r2d2)\n* [D3Feat](https://github.com/XuyangBai/D3Feat)\n* [D3Feat.pytorch](https://github.com/XuyangBai/D3Feat.pytorch)\n\n\n### Citation\nIf you find our work useful in your research, please consider citing:\n\n    @inproceedings{ao2020SpinNet,\n      title={SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration},\n      author={Ao, Sheng and Hu, Qingyong and Yang, Bo and Markham, Andrew and Guo, Yulan},\n      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},\n      year={2021}\n    }\n\n### References\n\u003ca name=\"refs\"\u003e\u003c/a\u003e\n\n[1] 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions, Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, and Thomas Funkhouser, CVPR 2017.\n\n\n\n### Updates\n* 03/04/2021: The code is released!\n* 01/03/2021: This paper has been accepted by CVPR 2021!\n* 25/11/2020: Initial release!\n\n## Related Repos\n1. [RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds](https://github.com/QingyongHu/RandLA-Net) ![GitHub stars](https://img.shields.io/github/stars/QingyongHu/RandLA-Net.svg?style=flat\u0026label=Star)\n2. [SoTA-Point-Cloud: Deep Learning for 3D Point Clouds: A Survey](https://github.com/QingyongHu/SoTA-Point-Cloud) ![GitHub stars](https://img.shields.io/github/stars/QingyongHu/SoTA-Point-Cloud.svg?style=flat\u0026label=Star)\n3. 
[3D-BoNet: Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds](https://github.com/Yang7879/3D-BoNet) ![GitHub stars](https://img.shields.io/github/stars/Yang7879/3D-BoNet.svg?style=flat\u0026label=Star)\n4. [SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds](https://github.com/QingyongHu/SensatUrban) ![GitHub stars](https://img.shields.io/github/stars/QingyongHu/SensatUrban.svg?style=flat\u0026label=Star)\n5. [SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds with 1000x Fewer Labels](https://github.com/QingyongHu/SQN) ![GitHub stars](https://img.shields.io/github/stars/QingyongHu/SQN.svg?style=flat\u0026label=Star)\n\n","funding_links":[],"categories":["Python","2021"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FQingyongHu%2FSpinNet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FQingyongHu%2FSpinNet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FQingyongHu%2FSpinNet/lists"}