{"id":13441232,"url":"https://github.com/chrischoy/DeepGlobalRegistration","last_synced_at":"2025-03-20T11:37:42.106Z","repository":{"id":41379420,"uuid":"249498908","full_name":"chrischoy/DeepGlobalRegistration","owner":"chrischoy","description":"[CVPR 2020 Oral] A differentiable framework for 3D registration","archived":false,"fork":false,"pushed_at":"2023-10-25T19:33:32.000Z","size":1541,"stargazers_count":463,"open_issues_count":29,"forks_count":85,"subscribers_count":19,"default_branch":"master","last_synced_at":"2024-08-01T03:33:44.615Z","etag":null,"topics":["3d","correspondences","partial","pointcloud","registration","sparse-tensor-network"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/chrischoy.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2020-03-23T17:29:21.000Z","updated_at":"2024-08-01T03:30:26.000Z","dependencies_parsed_at":"2022-09-19T23:52:10.004Z","dependency_job_id":"5c977c68-642b-4f6a-9b2d-4e1966042c9f","html_url":"https://github.com/chrischoy/DeepGlobalRegistration","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrischoy%2FDeepGlobalRegistration","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrischoy%2FDeepGlobalRegistration/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrischoy%2FDeepGlobalRegistration/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrischoy%2FDeepGlobalRegistration/manifests",
"owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/chrischoy","download_url":"https://codeload.github.com/chrischoy/DeepGlobalRegistration/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221759944,"owners_count":16876323,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d","correspondences","partial","pointcloud","registration","sparse-tensor-network"],"created_at":"2024-07-31T03:01:31.402Z","updated_at":"2024-10-28T01:30:20.208Z","avatar_url":"https://github.com/chrischoy.png","language":"Python","readme":"# Deep Global Registration\n\n## Introduction\nThis repository contains python scripts for training and testing [Deep Global Registration, CVPR 2020 Oral](https://node1.chrischoy.org/data/publications/dgr/DGR.pdf).\nDeep Global Registration (DGR) proposes a differentiable framework for pairwise registration of real-world 3D scans. DGR consists of the following three modules:\n\n- a 6-dimensional convolutional network for correspondence confidence prediction\n- a differentiable Weighted Procrustes algorithm for closed-form pose estimation\n- a robust gradient-based SE(3) optimizer for pose refinement. 
\n\nFor more details, please check out\n\n- [CVPR 2020 oral paper](https://node1.chrischoy.org/data/publications/dgr/DGR.pdf)\n- [1min oral video](https://youtu.be/stzgn6DkozA)\n- [Full CVPR oral presentation](https://youtu.be/Iy17wvo07BU)\n\n[![1min oral](assets/1min-oral.png)](https://youtu.be/stzgn6DkozA)\n\n\n## Quick Pipeline Visualization\n| Indoor 3DMatch Registration | Outdoor KITTI Lidar Registration |\n|:---------------------------:|:---------------------------:|\n| ![](https://chrischoy.github.io/images/publication/dgr/text_100.gif) | ![](https://chrischoy.github.io/images/publication/dgr/kitti1_optimized.gif) |\n\n## Related Works\nRecent end-to-end frameworks combine feature learning and pose optimization. PointNetLK combines PointNet global features with an iterative pose optimization method. Wang et al. in Deep Closest Point train graph neural network features by backpropagating through pose optimization.\nWe further advance this line of work. In particular, our Weighted Procrustes method reduces the complexity of optimization from quadratic to linear and enables the use of dense correspondences for highly accurate registration of real-world scans.\n\n## Deep Global Registration\nThe first component is a 6-dimensional convolutional network that analyzes the geometry of 3D correspondences and estimates their accuracy. Please refer to [High-dim ConvNets, CVPR'20](https://github.com/chrischoy/HighDimConvNets) for more details.\n\nThe second component we develop is a differentiable Weighted Procrustes solver. The Procrustes method provides a closed-form solution for rigid registration in SE(3). A differentiable version of the Procrustes method used for end-to-end registration passes gradients through coordinates, which requires O(N^2) time and memory for N keypoints. 
Instead, the Weighted Procrustes method passes gradients through the weights associated with correspondences rather than correspondence coordinates.\nThe computational complexity of the Weighted Procrustes method is linear in the number of correspondences, allowing the registration pipeline to use dense correspondence sets rather than sparse keypoints. This substantially increases registration accuracy.\n\nOur third component is a robust optimization module that fine-tunes the alignment produced by the Weighted Procrustes solver and the failure detection module.\nThis optimization module minimizes a differentiable loss via gradient descent on the continuous SE(3) representation space. The optimization is fast since, unlike ICP, it does not require a neighbor search in the inner loop.\n\n## Configuration\nOur network is built on the [MinkowskiEngine](https://github.com/StanfordVL/MinkowskiEngine) and the system requirements are:\n\n- Ubuntu 14.04 or higher\n- \u003cb\u003eCUDA 10.1.243 or higher\u003c/b\u003e\n- pytorch 1.5 or higher\n- python 3.6 or higher\n- GCC 7\n\nYou can install the MinkowskiEngine and the python requirements on your system with:\n\n```shell\n# Install MinkowskiEngine\nsudo apt install libopenblas-dev g++-7\npip install torch\nexport CXX=g++-7; pip install -U MinkowskiEngine --install-option=\"--blas=openblas\" -v\n\n# Download and setup DeepGlobalRegistration\ngit clone https://github.com/chrischoy/DeepGlobalRegistration.git\ncd DeepGlobalRegistration\npip install -r requirements.txt\n```\n\n## Demo\nYou may register your own data with relevant pretrained DGR models. 
3DMatch is suitable for indoor RGB-D scans; KITTI is for outdoor LiDAR scans.\n\n| Inlier Model | FCGF model  | Dataset | Voxel Size    | Feature Dimension | Performance                | Link   |\n|:------------:|:-----------:|:-------:|:-------------:|:-----------------:|:--------------------------:|:------:|\n| ResUNetBN2C  | ResUNetBN2C | 3DMatch | 5cm   (0.05)  | 32                | TE: 7.34cm, RE: 2.43deg    | [weights](http://node2.chrischoy.org/data/projects/DGR/ResUNetBN2C-feat32-3dmatch-v0.05.pth) |\n| ResUNetBN2C  | ResUNetBN2C | KITTI   | 30cm  (0.3)   | 32                | TE: 3.14cm, RE: 0.14deg    | [weights](http://node2.chrischoy.org/data/projects/DGR/ResUNetBN2C-feat32-kitti-v0.3.pth) |\n\n\n```shell\npython demo.py\n```\n\n| Input PointClouds           | Output Prediction           |\n|:---------------------------:|:---------------------------:|\n| ![](assets/demo_inputs.png) | ![](assets/demo_outputs.png) |\n\n\n## Experiments\n| Comparison | Speed vs. Recall Pareto Frontier |\n| -------  | --------------- |\n| ![Comparison](assets/comparison-3dmatch.png) | ![Frontier](assets/frontier.png) |\n\n\n## Training\nThe entire network depends on pretrained [FCGF models](https://github.com/chrischoy/FCGF#model-zoo). 
Please download the corresponding models before training.\n\n| Model       | Normalized Feature  | Dataset | Voxel Size    | Feature Dimension |                  Link   |\n|:-----------:|:-------------------:|:-------:|:-------------:|:-----------------:|:------:|\n| ResUNetBN2C | True                | 3DMatch | 5cm   (0.05)  | 32                     | [download](https://node1.chrischoy.org/data/publications/fcgf/2019-08-16_19-21-47.pth) |\n| ResUNetBN2C | True                | KITTI   | 30cm  (0.3)   | 32                 | [download](https://node1.chrischoy.org/data/publications/fcgf/KITTI-v0.3-ResUNetBN2C-conv1-5-nout32.pth) |\n\n\n### 3DMatch\nYou may download preprocessed data and train via these commands:\n```shell\n./scripts/download_3dmatch.sh /path/to/3dmatch\nexport THREED_MATCH_DIR=/path/to/3dmatch; FCGF_WEIGHTS=/path/to/fcgf_3dmatch.pth ./scripts/train_3dmatch.sh\n```\n\n### KITTI\nFollow the instructions on the [KITTI Odometry website](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) to download the KITTI odometry train set. Then train with\n```shell\nexport KITTI_PATH=/path/to/kitti; FCGF_WEIGHTS=/path/to/fcgf_kitti.pth ./scripts/train_kitti.sh\n```\n\n## Testing\nThe 3DMatch test set is different from the train set and is available at the [download section](http://3dmatch.cs.princeton.edu/) of the official website. You may download and decompress these scenes to a new folder.\n\nTo evaluate a trained model on 3DMatch or KITTI, you may use\n```shell\npython -m scripts.test_3dmatch --threed_match_dir /path/to/3dmatch_test/ --weights /path/to/dgr_3dmatch.pth\n```\nand\n```shell\npython -m scripts.test_kitti --kitti_dir /path/to/kitti/ --weights /path/to/dgr_kitti.pth\n```\n\n## Generate figures\nWe also provide experimental results of 3DMatch comparisons in `results.npz`. 
To reproduce the figures presented in the paper, you may use\n```shell\npython scripts/analyze_stats.py assets/results.npz\n```\n\n## Citing our work\nPlease cite the following papers if you use our code:\n\n```latex\n@inproceedings{choy2020deep,\n  title={Deep Global Registration},\n  author={Choy, Christopher and Dong, Wei and Koltun, Vladlen},\n  booktitle={CVPR},\n  year={2020}\n}\n\n@inproceedings{choy2019fully,\n  title = {Fully Convolutional Geometric Features},\n  author = {Choy, Christopher and Park, Jaesik and Koltun, Vladlen},\n  booktitle = {ICCV},\n  year = {2019}\n}\n\n@inproceedings{choy20194d,\n  title={4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks},\n  author={Choy, Christopher and Gwak, JunYoung and Savarese, Silvio},\n  booktitle={CVPR},\n  year={2019}\n}\n```\n\n## Concurrent Works\n\nThere have been a number of 3D registration works published concurrently.\n\n- Gojcic et al., [Learning Multiview 3D Point Cloud Registration, CVPR'20](https://github.com/zgojcic/3D_multiview_reg)\n- Wang et al., [PRNet: Self-Supervised Learning for Partial-to-Partial Registration, NeurIPS'19](https://github.com/WangYueFt/prnet)\n- Yang et al., [TEASER: Fast and Certifiable Point Cloud Registration, arXiv'20](https://github.com/MIT-SPARK/TEASER-plusplus)\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchrischoy%2FDeepGlobalRegistration","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchrischoy%2FDeepGlobalRegistration","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchrischoy%2FDeepGlobalRegistration/lists"}