Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/cuge1995/deep-slam
a list of papers, code, and other resources focused on deep learning SLAM systems
learning-based-slam lidar-slam localization mapping-algorithms monocular-visual-odometry odometry slam slam-algorithms visual-slam
Last synced: about 1 month ago
- Host: GitHub
- URL: https://github.com/cuge1995/deep-slam
- Owner: cuge1995
- Created: 2021-09-14T23:43:36.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2023-04-11T07:31:17.000Z (over 1 year ago)
- Last Synced: 2023-10-20T19:08:29.000Z (about 1 year ago)
- Topics: learning-based-slam, lidar-slam, localization, mapping-algorithms, monocular-visual-odometry, odometry, slam, slam-algorithms, visual-slam
- Homepage:
- Size: 22.5 KB
- Stars: 126
- Watchers: 3
- Forks: 14
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Deep-SLAM
a list of papers, code, datasets and other resources focused on deep learning SLAM systems

## Camera
* DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras [[code]](https://github.com/princeton-vl/DROID-SLAM)[[paper]](https://arxiv.org/pdf/2108.10869) `NeurIPS 2021 Oral`
* DeepVO: Towards end-to-end visual odometry with deep recurrent convolutional neural networks [[no code]]()[[paper]](https://arxiv.org/pdf/1709.08429) `ICRA 2017`
* Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction [[no code]]()[[paper]](https://openaccess.thecvf.com/content_cvpr_2018/papers/Zhan_Unsupervised_Learning_of_CVPR_2018_paper.pdf) `CVPR 2018`
* UnDeepVO: Monocular visual odometry through unsupervised deep learning [[code]](http://senwang.gitlab.io/UnDeepVO)[[paper]](https://arxiv.org/pdf/1709.06841) `ICRA 2018`
* DeepTAM: Deep tracking and mapping [[no code]]()[[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Huizhong_Zhou_DeepTAM_Deep_Tracking_ECCV_2018_paper.pdf) `ECCV 2018`
* Beyond tracking: Selecting memory and refining poses for deep visual odometry [[no code]]()[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Xue_Beyond_Tracking_Selecting_Memory_and_Refining_Poses_for_Deep_Visual_CVPR_2019_paper.pdf) `CVPR 2019`
* Sequential adversarial learning for self-supervised deep visual odometry [[no code]]()[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Li_Sequential_Adversarial_Learning_for_Self-Supervised_Deep_Visual_Odometry_ICCV_2019_paper.pdf) `ICCV 2019`
* D2VO: Monocular Deep Direct Visual Odometry [[no code]]()[[paper]](http://ras.papercept.net/images/temp/IROS/files/2025.pdf) `IROS 2020`
* DeepFactors: Real-time probabilistic dense monocular SLAM [[no code]]()[[paper]](https://arxiv.org/pdf/2001.05049) `IEEE Robotics and Automation Letters, 2020`
* Self-supervised deep visual odometry with online adaptation [[no code]]()[[paper]](http://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Self-Supervised_Deep_Visual_Odometry_With_Online_Adaptation_CVPR_2020_paper.pdf) `CVPR 2020`
* VOLDOR: Visual odometry from log-logistic dense optical flow residuals [[code]](https://github.com/htkseason/VOLDOR)[[paper]](http://openaccess.thecvf.com/content_CVPR_2020/papers/Min_VOLDOR_Visual_Odometry_From_Log-Logistic_Dense_Optical_Flow_Residuals_CVPR_2020_paper.pdf) `CVPR 2020`
* TartanVO: A Generalizable Learning-based VO [[code]](https://github.com/castacks/tartanvo)[[paper]](https://arxiv.org/pdf/2011.00359) `CoRL 2020`
* gradSLAM: Automagically differentiable SLAM `CVPR 2020`
* Generalizing to the Open World: Deep Visual Odometry with Online Adaptation [[no code]]()[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Li_Generalizing_to_the_Open_World_Deep_Visual_Odometry_With_Online_CVPR_2021_paper.pdf) `CVPR 2021`
* Unsupervised monocular visual odometry based on confidence evaluation [[no code]]()[[paper]](https://ieeexplore.ieee.org/abstract/document/9345430/) `IEEE Transactions on Intelligent Transportation Systems, 2021`

## LiDAR
* Lo-net: Deep real-time lidar odometry [[no code]]()[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_LO-Net_Deep_Real-Time_Lidar_Odometry_CVPR_2019_paper.pdf) `CVPR 2019`
* Self-supervised Visual-LiDAR Odometry with Flip Consistency [[no code]]()[[paper]](https://openaccess.thecvf.com/content/WACV2021/papers/Li_Self-Supervised_Visual-LiDAR_Odometry_With_Flip_Consistency_WACV_2021_paper.pdf) `WACV 2021`
* LoGG3D-Net: Locally Guided Global Descriptor Learning for 3D Place Recognition [[code]](https://github.com/csiro-robotics/LoGG3D-Net) `ICRA 2022`
* SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations [[code]](https://github.com/PRBonn/SHINE_mapping) `ICRA 2023`

## Dataset
* [lamar-benchmark](https://github.com/microsoft/lamar-benchmark) `ECCV 2022`
* [TartanAir: A Dataset to Push the Limits of Visual SLAM](https://github.com/castacks/tartanvo) `IROS 2020`
* [KITTI Odometry](https://www.cvlibs.net/datasets/kitti/eval_odometry.php)
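Most of the odometry methods above report results on KITTI Odometry, whose ground truth stores each camera pose as a flattened 3x4 `[R|t]` matrix, one pose per line. A minimal sketch of parsing such files and computing a simple frame-to-frame relative translation error between an estimated and a ground-truth trajectory (function names are illustrative, not from the official KITTI devkit):

```python
import numpy as np

def parse_kitti_poses(lines):
    """Parse KITTI-format pose lines (12 floats: a flattened 3x4 [R|t]) into 4x4 matrices."""
    poses = []
    for line in lines:
        vals = np.array(line.split(), dtype=float)
        T = np.eye(4)
        T[:3, :4] = vals.reshape(3, 4)
        poses.append(T)
    return poses

def relative_translation_error(gt, est):
    """Mean translation error of consecutive relative motions, in the units of t (meters for KITTI)."""
    errs = []
    for i in range(1, len(gt)):
        rel_gt = np.linalg.inv(gt[i - 1]) @ gt[i]
        rel_est = np.linalg.inv(est[i - 1]) @ est[i]
        errs.append(np.linalg.norm(rel_gt[:3, 3] - rel_est[:3, 3]))
    return float(np.mean(errs))
```

Comparing a trajectory against itself yields zero error; the official KITTI benchmark additionally averages over multiple segment lengths (100 m to 800 m) and reports rotation error separately.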