{"id":19501957,"url":"https://github.com/hku-mars/std","last_synced_at":"2025-04-04T19:07:23.781Z","repository":{"id":64048771,"uuid":"536367849","full_name":"hku-mars/STD","owner":"hku-mars","description":"A 3D point cloud descriptor for place recognition","archived":false,"fork":false,"pushed_at":"2023-05-06T15:02:39.000Z","size":3142,"stargazers_count":634,"open_issues_count":32,"forks_count":74,"subscribers_count":26,"default_branch":"master","last_synced_at":"2025-03-28T18:08:22.297Z","etag":null,"topics":["lidar-slam","loop-closure","loop-detection","place-recognition","point-cloud"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hku-mars.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2022-09-14T01:23:13.000Z","updated_at":"2025-03-26T10:39:04.000Z","dependencies_parsed_at":"2024-01-16T01:30:18.441Z","dependency_job_id":"e1202d74-a210-454e-ab65-9706275a7565","html_url":"https://github.com/hku-mars/STD","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hku-mars%2FSTD","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hku-mars%2FSTD/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hku-mars%2FSTD/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hku-mars%2FSTD/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hku-mars","download_url":"https://codeload.github.com/hku-mars/STD/tar.g
z/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247234921,"owners_count":20905854,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["lidar-slam","loop-closure","loop-detection","place-recognition","point-cloud"],"created_at":"2024-11-10T22:14:33.287Z","updated_at":"2025-04-04T19:07:23.746Z","avatar_url":"https://github.com/hku-mars.png","language":"C++","readme":"# **STD: A Stable Triangle Descriptor for 3D place recognition**\n# **1. Introduction**\n**STD_detector** is a global descriptor for 3D place recognition. The shape of a triangle is uniquely determined by the lengths of its sides or its included angles, and this shape is completely invariant to rigid transformations. Based on this property, we first design an algorithm that efficiently extracts local key points from a 3D point cloud and encodes them into triangle descriptors. Place recognition is then achieved by matching the side lengths (and some other information) of the descriptors between point clouds. The point correspondences obtained from matched descriptor pairs can further be used for geometric verification, which greatly improves the accuracy of place recognition.\n\n\u003cdiv align=\"center\"\u003e\n    \u003cdiv align=\"center\"\u003e\n        \u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/introduction.png\" width = 75% \u003e\n    \u003c/div\u003e\n    \u003cfont color=#a0a0a0 size=2\u003eA typical place recognition case with STD. 
These two frames of point clouds are collected by a small-FOV LiDAR (Livox Avia) moving in opposite directions, resulting in low point cloud overlap and a drastic viewpoint change.\u003c/font\u003e\n\u003c/div\u003e\n  \n\n## **1.1. Developers:**\nThe code in this repo is contributed by:\n[Chongjian Yuan (袁崇健)](https://github.com/ChongjianYUAN), [Jiarong Lin (林家荣)](https://jiaronglin.com) and [dustier](https://github.com/dustier)\n\n\n## **1.2. Related paper**\nOur paper has been accepted to [**ICRA2023**](https://www.icra2023.org/), and our preprint version is now available on **arXiv**:  \n[STD: Stable Triangle Descriptor for 3D place recognition](https://arxiv.org/abs/2209.12435)\n\n\n## **1.3. Related video**\nOur accompanying video is now available on **YouTube**.\n\u003cdiv align=\"center\"\u003e\n    \u003ca href=\"https://youtu.be/O-9iXn1ME3g\" target=\"_blank\"\u003e\u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/video_cover.png\" width=60% /\u003e\u003c/a\u003e\n\u003c/div\u003e\n\n# **2. Prerequisites**\n\n## **2.1 Ubuntu and [ROS](https://www.ros.org/)**\nWe tested our code on Ubuntu 18.04 with ROS Melodic and Ubuntu 20.04 with ROS Noetic. An additional ROS package is required (replace *xxx* with your ROS distribution):\n```\nsudo apt-get install ros-xxx-pcl-conversions\n```\n\n## **2.2 Eigen**\nFollow the official [Eigen installation](https://eigen.tuxfamily.org/index.php?title=Main_Page) guide, or install Eigen directly:\n```\nsudo apt-get install libeigen3-dev\n```\n## **2.3. ceres-solver (version\u003e=2.1)**\nPlease install ceres-solver by following the guide on [ceres Installation](http://ceres-solver.org/installation.html). Note that the version of ceres-solver should be no lower than [ceres-solver 2.1.0](https://github.com/ceres-solver/ceres-solver/releases/tag/2.1.0).\n\n## **2.4. 
GTSAM**\nFollow the official [GTSAM installation](https://gtsam.org/get_started/) guide, or install the GTSAM 4.x stable release directly:\n```\n# Add PPA\nsudo add-apt-repository ppa:borglab/gtsam-release-4.0\nsudo apt update  # not necessary since Bionic\n# Install:\nsudo apt install libgtsam-dev libgtsam-unstable-dev\n```\n**!! IMPORTANT !!**: Please do not install GTSAM from the ***develop branch***, which is not compatible with our code! We are still investigating this issue.\n\n\n## **2.5 Prepare the data**\nThis repo does not implement any method (e.g., LOAM, LIO) for estimating the poses used to register the LiDAR scans. So, to reproduce our results, you have to prepare two sets of data: **1) the LiDAR point cloud data, and 2) the point cloud registration poses.**\n\n### **2.5.1. Download our Example data**\nFor convenience, we provide two sets of data for fast evaluation, which can be downloaded from [**OneDrive**](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/ycj1_connect_hku_hk/EpIDIgeOD05HpZZouhP74IsBfHD9oDibPe1M0JWUyMnfew?e=3AXA9L) and [**BaiduNetDisk(百度网盘)**](https://pan.baidu.com/s/1eYqrmaD0kskyYco2l1n8MA?pwd=xnmr).\n\n\n### **2.5.2. LiDAR Point cloud data**\n- For the ***Kitti dataset*** (i.e., our Example-1), we read the raw scan data with suffix *\".bin\"*. These raw LiDAR scans can be downloaded from the [Kitti Odometry benchmark website](https://www.cvlibs.net/datasets/kitti/eval_odometry.php).\n- For the ***solid-state LiDAR dataset*** (i.e., our Example-2), we read the undistorted scan data from the recorded *rosbag* files, which contain undistorted LiDAR scans on the *rostopic \"/cloud_undistort\"*.\n### **2.5.3. 
Point cloud registration pose**\nIn the [poses file](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/ycj1_connect_hku_hk/EgnGX4jC2zxDi-45YCfbioEBpPCfBVxa2LcrE-90oL4u_A?e=Lb4Yvv), the poses for LiDAR point cloud registration are given in the following format:\n```\nTimestamp pos_x pos_y pos_z quat_x quat_y quat_z quat_w\n```\nwhere ``Timestamp`` is the sampling timestamp of the corresponding LiDAR scan, and ``pos_{x,y,z}`` and ``quat_{x,y,z,w}`` are the translation and rotation (expressed as a quaternion) of the pose.\n# **3. Examples**\nThis repository contains an implementation of the Stable Triangle Descriptor, as well as demos for place recognition and loop closure correction. The **complete pipeline for online LiDAR SLAM** will be released along with the **extended version**.\n\n## **3.1. Example-1: place recognition with KITTI Odometry dataset**\n\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/demo1/demo1_right.gif\"  width=\"48%\" /\u003e\n\u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/demo1/demo1_left.gif\"  width=\"48%\" /\u003e\n\u003c/div\u003e\n\nTo run Example-1, first download the [poses file](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/ycj1_connect_hku_hk/EgnGX4jC2zxDi-45YCfbioEBpPCfBVxa2LcrE-90oL4u_A?e=Lb4Yvv) we provide.\n\nThen, modify the **demo_kitti.launch** file:\n- Set the **lidar_path** to your local path\n- Set the **pose_path** to your local path\n```\ncd $STD_ROS_DIR\nsource devel/setup.bash\nroslaunch std_detector demo_kitti.launch\n```\n## **3.2. 
Example-2: place recognition with Livox LiDAR dataset**\n\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/demo2/demo2_left.gif\"  width=\"48%\" /\u003e\n\u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/demo2/demo2_right.gif\"  width=\"48%\" /\u003e\n\u003c/div\u003e\n\nTo run Example-2, first download the [rosbag file and poses file](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/ycj1_connect_hku_hk/EvP0ZWZXE-pFqdBYKG_I4egBd3QXMA578r1YgdeYZNq3vw?e=9sjmoB) we provide.\nThen, modify the **demo_livox.launch** file:\n- Set the **bag_path** to your local path\n- Set the **pose_path** to your local path\n```\ncd $STD_ROS_DIR\nsource devel/setup.bash\nroslaunch std_detector demo_livox.launch\n```\n## **3.3. Example-3: loop closure correction on the KITTI Odometry dataset**\n\n\u003cdiv align=\"center\"\u003e\n    \u003cdiv align=\"center\"\u003e\n        \u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/demo3/demo3.jpg\" width = 90% \u003e\n    \u003c/div\u003e\n    \u003cfont color=#a0a0a0 size=2\u003eThe point cloud map and trajectory before and after correction by STD.\u003c/font\u003e\n\u003c/div\u003e\n\nTo run Example-3, first download the [poses file](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/ycj1_connect_hku_hk/EqG7JX15nKZNu1qK1FTUMNQB7HJ2wDz7IcBof6y9cXV4sg?e=bgGDjg) we provide, or create your own poses file on the KITTI Odometry dataset with a LiDAR odometry method, following the format: **timestamp x y z qx qy qz qw**\nThen, modify the **demo_pgo.launch** file:\n- Set the **lidar_path** to your local path\n- Set the **pose_path** to your local path\n```\ncd $STD_ROS_DIR\nsource devel/setup.bash\nroslaunch std_detector demo_pgo.launch\n```\n\n## **3.4. 
Example-4: online loop closure correction with FAST-LIO2 integrated**\n\u003cdiv align=\"center\"\u003e\n    \u003cdiv align=\"center\"\u003e\n        \u003cimg src=\"https://github.com/ChongjianYUAN/STDesc_release/raw/master/pics/demo4/demo4.gif\" width = 80% \u003e\n    \u003c/div\u003e\n\u003c/div\u003e\n\nTo run Example-4, you need to install and configure [FAST-LIO2](https://github.com/hku-mars/FAST_LIO) first. \nYou can try the data `building_slower_motion_avia.bag` [here](https://drive.google.com/drive/folders/1EqNt6Bm_6Jf3beRf_RI3yrhiUCND09se) (provided by [zlwang7](https://github.com/zlwang7/S-FAST_LIO)), which is outdoor scan data with no loop closure other than the one between the starting point and the endpoint. Relying solely on the FAST-LIO algorithm therefore results in obvious Z-axis drift; with STD loop detection and graph optimization, the drift is noticeably corrected.\n\n```\n# terminal 1: run FAST-LIO2\nroslaunch fast_lio mapping_avia.launch\n\n# terminal 2: run std online demo\nroslaunch std_detector demo_online.launch\n\n# terminal 3: play data\nrosbag play building_slower_motion_avia.bag\n```\n\n\n# **Acknowledgments**\nIn the development of **STD_detector**, we stand on the shoulders of the following repositories:\n\n- [Scan Context](https://github.com/irapkaist/scancontext): An egocentric spatial descriptor for place recognition within 3D point cloud maps.\n- [FAST-LIO](https://github.com/hku-mars/FAST_LIO): A computationally efficient and robust LiDAR-inertial odometry package.\n- [VoxelMap](https://github.com/hku-mars/VoxelMap): An efficient and probabilistic adaptive (coarse-to-fine) voxel mapping method for 3D LiDAR.\n- [R3LIVE](https://github.com/hku-mars/r2live): A robust, real-time, RGB-colored, LiDAR-inertial-visual tightly-coupled state estimation and mapping package.\n\n# **Contact Us**\nWe are still working on improving the performance and reliability of our code. 
For any technical issues, please contact us via email: Chongjian Yuan \u003c ycj1ATconnect.hku.hk \u003e, Jiarong Lin \u003c ziv.lin.ljrATgmail.com \u003e.\n\nFor commercial use, please contact Dr. Fu Zhang \u003c fuzhang@hku.hk \u003e.\n\n\n# **License**\nThe source code of this package is released under the [**GPLv2**](http://www.gnu.org/licenses/) license. It is free for personal and academic use only. For commercial use, please contact us to negotiate a different license.\n\nIf you use any code of this repo in your academic research, please cite **at least one** of our papers:\n```\n[1] Yuan, C., Lin, J., Zou, Z., Hong, X., \u0026 Zhang, F. \"STD: Stable Triangle Descriptor for 3D place recognition.\"\n[2] Xu, W., Cai, Y., He, D., Lin, J., \u0026 Zhang, F. \"Fast-lio2: Fast direct lidar-inertial odometry.\"\n[3] Yuan, C., Xu, W., Liu, X., Hong, X., \u0026 Zhang, F. \"Efficient and probabilistic adaptive voxel mapping for accurate online lidar odometry.\"\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhku-mars%2Fstd","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhku-mars%2Fstd","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhku-mars%2Fstd/lists"}