{"id":13429992,"url":"https://github.com/marknabil/SFM-Visual-SLAM","last_synced_at":"2025-03-16T04:31:29.832Z","repository":{"id":49369064,"uuid":"50433823","full_name":"marknabil/SFM-Visual-SLAM","owner":"marknabil","description":null,"archived":false,"fork":false,"pushed_at":"2024-04-28T17:12:51.000Z","size":299,"stargazers_count":745,"open_issues_count":0,"forks_count":197,"subscribers_count":63,"default_branch":"master","last_synced_at":"2024-05-21T12:19:54.759Z","etag":null,"topics":["augmented-reality","awesome-list","sfm","slam","visual-inertial","visual-odometry"],"latest_commit_sha":null,"homepage":null,"language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/marknabil.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2016-01-26T14:27:10.000Z","updated_at":"2024-05-13T00:22:42.000Z","dependencies_parsed_at":"2024-05-01T10:37:49.404Z","dependency_job_id":"a5c93648-c58c-4b00-bf11-e51f1f4f9bb2","html_url":"https://github.com/marknabil/SFM-Visual-SLAM","commit_stats":{"total_commits":95,"total_committers":2,"mean_commits":47.5,"dds":"0.010526315789473717","last_synced_commit":"1d8204278ea31a11155bdbad8f560ad7d4994c1f"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marknabil%2FSFM-Visual-SLAM","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marknabil%2FSFM-Visual-SLAM/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marknabil%2FSFM-Visual-SLAM/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/marknabil%2FSFM-Visual-SLAM/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/marknabil","download_url":"https://codeload.github.com/marknabil/SFM-Visual-SLAM/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":220682787,"owners_count":16687214,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["augmented-reality","awesome-list","sfm","slam","visual-inertial","visual-odometry"],"created_at":"2024-07-31T02:00:48.920Z","updated_at":"2024-10-27T08:30:37.367Z","avatar_url":"https://github.com/marknabil.png","language":null,"readme":"# SFM-AR-Visual-SLAM\n![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)\n\n## Visual SLAM \n\n##### GSLAM\nGeneral SLAM Framework which supports feature based or direct method and different sensors including monocular camera, RGB-D sensors or any other input types can be handled.\nhttps://github.com/zdzhaoyong/GSLAM\n\n##### OKVIS: Open Keyframe-based Visual-Inertial 
##### OKVIS: Open Keyframe-based Visual-Inertial SLAM
http://ethz-asl.github.io/okvis/index.html

##### Uncertainty-aware Receding Horizon Exploration and Mapping Planner
https://github.com/unr-arl/rhem_planner

##### S-PTAM: Stereo Parallel Tracking and Mapping
https://github.com/lrse/sptam

##### MCPTAM
MCPTAM is a set of ROS nodes for running real-time 3D visual SLAM using multi-camera clusters. It includes tools for calibrating both the intrinsic and extrinsic parameters of the individual cameras within the rigid camera rig.
https://github.com/aharmat/mcptam

##### FAB-MAP
A visual place-recognition algorithm.
https://github.com/arrenglover/openfabmap

##### RatSLAM
https://github.com/davidmball/ratslam

##### maplab
An open framework for research in visual-inertial mapping and localization, from Roland Siegwart's group.
https://github.com/ethz-asl/maplab

##### OpenVSLAM: Versatile Visual SLAM Framework
https://github.com/xdspacelab/openvslam

##### SLAM with AprilTag
https://github.com/berndpfrommer/tagslam
ROS ready, bag file available.

##### SE(2) SLAM fusing odometry and vision
https://github.com/izhengfan/se2clam


### RGB-D Visual SLAM

##### Fast Odometry and Scene Flow from RGB-D Cameras
https://github.com/MarianoJT88/Joint-VO-SF
Published at ICRA 2017.

##### Real-Time Appearance-Based Mapping
http://wiki.ros.org/rtabmap_ros
Many demos are available on the website, along with several ROS bags.

##### General and scalable framework for visual SLAM
https://github.com/strasdat/ScaViSLAM/

##### RGBDSLAMv2
https://github.com/felixendres/rgbdslam_v2
ROS ready; it accompanies a PhD thesis from TUM.

##### SLAM in unstructured environments
https://github.com/tu-darmstadt-ros-pkg/hector_slam

##### Dense Visual Odometry and SLAM (dvo_slam)
https://github.com/tum-vision/dvo_slam

##### CoSLAM: Collaborative Visual SLAM in Dynamic Environments
https://github.com/danping/CoSLAM

##### Real-time dense visual SLAM: ElasticFusion
https://github.com/mp3guy/ElasticFusion
It has a nice GUI and comes with a dataset, a paper, and a video.

##### Real-time dense visual SLAM
https://github.com/mp3guy/Kintinuous

##### Deferred Triangulation SLAM
Based on PTAM; tracks both triangulated 3D features and non-triangulated 2D features.
https://github.com/plumonito/dtslam
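
Triangulation, the step dtslam defers until a feature has enough parallax, can be sketched with OpenCV as follows (the intrinsics, second-camera pose, and pixel coordinates below are made-up values for illustration):

```python
# Triangulate matched 2D features into 3D points given two known camera
# projection matrices P = K @ [R | t].
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.1928],   # hypothetical intrinsics
              [0.0, 718.856, 185.2157],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
R = np.eye(3)                                       # hypothetical second pose:
t = np.array([[0.5], [0.0], [0.0]])                 # pure 0.5 m baseline
P2 = K @ np.hstack([R, t])

pts1 = np.array([[100.0, 120.0], [340.0, 200.0]]).T  # 2xN matched pixels (made up)
pts2 = np.array([[105.0, 121.0], [350.0, 202.0]]).T

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)      # 4xN homogeneous points
X = (X_h[:3] / X_h[3]).T                             # Nx3 Euclidean points
print(X)
```
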
##### Dense RGB-D SLAM
https://github.com/dorian3d/RGBiD-SLAM

##### M2SLAM: Visual SLAM with Memory Management for Large-Scale Environments
https://github.com/lifunudt/M2SLAM

##### SceneLib2 — MonoSLAM open-source library
An open-source C++ re-implementation of Davison's MonoSLAM.
https://github.com/hanmekim/SceneLib2

##### Next-best-view planner
https://github.com/ethz-asl/nbvplanner

##### Dynamic RGB-D Encoder SLAM for a Differential-Drive Robot
https://github.com/ydsf16/dre_slam
ROS Kinetic, OpenCV 4.0, YOLOv3, Ceres.

##### DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes
https://github.com/BertaBescos/DynaSLAM


### Augmented Reality

##### PTAM (Parallel Tracking and Mapping)
http://www.robots.ox.ac.uk/~gk/PTAM/

##### PTAM Android
https://github.com/damienfir/android-ptam


### Monocular SLAM

##### ORB-SLAM: A Versatile and Accurate Monocular SLAM System
https://github.com/raulmur/ORB_SLAM

Its successor, ORB-SLAM2, is a real-time SLAM library for monocular, stereo, and RGB-D cameras:
https://github.com/raulmur/ORB_SLAM2

A port to iOS:
https://github.com/Thunderbolt-sx/ORB_SLAM_iOS

##### ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
https://github.com/UZ-SLAMLab/ORB_SLAM3

##### REMODE (REgularized MOnocular Depth Estimation)
https://github.com/uzh-rpg/rpg_open_remode
Probabilistic, monocular dense reconstruction in real time.

##### Fast Semi-Direct Monocular Visual Odometry
https://github.com/pizzoli/rpg_svo

##### Fast Semi-Direct Visual Odometry for Monocular, Wide Angle, and Multi-camera Systems
No loop closure or bundle adjustment.
http://rpg.ifi.uzh.ch/svo2.html

##### LSD-SLAM: Large-Scale Direct Monocular SLAM
https://github.com/tum-vision/lsd_slam

A modification of the original package to work with rolling-shutter cameras (cheap webcams):
https://github.com/FirefoxMetzger/lsd_slam
The change is explained in this video: https://www.youtube.com/watch?v=TZRICW6R24o

##### ROS wrapper for libviso2
https://github.com/srv/viso2
Supported up to ROS Indigo.

##### Visual-Inertial-fusion-based Monocular dEnse mAppiNg (VI-MEAN)
https://github.com/HKUST-Aerial-Robotics/VI-MEAN
With paper and video (ICRA 2017), plus a rosbag.

##### Monocular object pose SLAM (CubeSLAM)
https://github.com/shichaoy/cube_slam

##### DeepFactors: Real-Time Probabilistic Dense Monocular SLAM
https://github.com/jczarnowski/DeepFactors

##### ORB-SLAM3 RGB-D + Inertial
https://github.com/xiefei2929/ORB_SLAM3-RGBD-Inertial

## LIDAR based

##### LIMO: Lidar-Monocular Visual Odometry
https://github.com/johannes-graeter/limo
A virtual machine with all dependencies is provided.

##### LiDAR-based real-time 3D localization and mapping
https://github.com/erik-nelson/blam

##### SegMatch
https://github.com/ethz-asl/segmatch
A 3D segment-based loop-closure algorithm | ROS ready.

##### LIO-SAM
https://github.com/TixiaoShan/LIO-SAM
Real-time lidar-inertial odometry.

##### UV-SLAM: Unconstrained Line-based SLAM Using Vanishing Points for Structural Mapping (ICRA 2022)
https://github.com/url-kaist/UV-SLAM

## Visual Odometry

##### Direct Sparse Odometry (DSO)
https://github.com/JakobEngel/dso

##### Monocular odometry algorithm
https://github.com/alejocb/dpptam
Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence (IROS 2015).
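
Monocular VO pipelines of this kind estimate relative camera motion from 2D-2D correspondences via the essential matrix; for a monocular camera the translation scale is unobservable, so only a unit direction is recovered. A minimal sketch of one such step using OpenCV's five-point RANSAC (the helper name `vo_step` and its inputs are illustrative):

```python
# One monocular VO step: relative pose from matched points via the
# essential matrix. pts1/pts2 are Nx2 arrays of matched pixel
# coordinates; K is the 3x3 camera intrinsic matrix.
import cv2
import numpy as np

def vo_step(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # recoverPose cheirality-checks the four (R, t) decompositions of E
    # and returns the one that puts points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # rotation and unit-norm translation direction
```

Chaining these per-frame motions, with scale fixed externally (stereo, IMU, or a known baseline), yields a trajectory.
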
##### Stereo Visual Odometry
https://github.com/rubengooj/StVO-PL
Stereo visual odometry combining point and line-segment features.

##### Monocular Motion Estimation on Manifolds
https://github.com/johannes-graeter/momo

##### Visual Odometry Revisited: What Should Be Learnt?
Paper + PyTorch code: https://github.com/Huangying-Zhan/DF-VO

##### SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation
https://github.com/Uehwan/SimVODIS

##### Modality-invariant Visual Odometry for Embodied Vision
RGB only, or RGB + depth.
https://memmelma.github.io/vot/

### Visual-Inertial Odometry

##### Kalibr
Camera/IMU calibration toolbox and more.
https://github.com/ethz-asl/kalibr

Camera-to-IMU calibration toolbox:
https://github.com/hovren/crisp

##### ROVIO
Robust Visual-Inertial Odometry.
https://github.com/ethz-asl/rovio

##### Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
https://github.com/KumarRobotics/msckf_vio

##### A Robust and Versatile Monocular Visual-Inertial State Estimator
https://github.com/HKUST-Aerial-Robotics/VINS-Mono

##### VINS modification for omnidirectional + stereo cameras
https://github.com/gaowenliang/vins_so

##### Realtime Edge-Based Inertial Visual Odometry for a Monocular Camera
https://github.com/JuanTarrio/rebvo
Specifically targeted at embedded hardware.

##### Robocentric visual-inertial odometry
https://github.com/rpng/R-VIO
Monocular camera + 6-DOF IMU.


## SFM

##### Structure from Motion (SfM) for Unordered Image Collections
https://github.com/TheFrenchLeaf/Bundle

##### Android SFM
https://github.com/danylaksono/Android-SfM-client

##### Five-point, six-, seven-, and eight-point algorithms
OpenGV (open geometric vision).
https://github.com/marknabil/opengv
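
As a quick illustration of the eight-point algorithm named above, here is a sketch that estimates the fundamental matrix from correspondences with OpenCV and lifts it to an essential matrix under an assumed intrinsic matrix K (E = Kᵀ F K); the function name and inputs are illustrative:

```python
# Eight-point fundamental matrix estimation, then E = K^T F K.
# pts1/pts2 are Nx2 arrays of >= 8 matched pixel coordinates.
import cv2
import numpy as np

def fundamental_to_essential(pts1, pts2, K):
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    E = K.T @ F @ K
    # Project E onto the essential-matrix manifold: two equal
    # singular values and one zero.
    U, _, Vt = np.linalg.svd(E)
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt
    return F, E
```
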
##### OpenSfM
A Structure-from-Motion library written in Python on top of OpenCV. It includes a Dockerfile that installs everything on Ubuntu 14.04.
https://github.com/mapillary/OpenSfM

##### Unsupervised Learning of Depth and Ego-Motion from Video
An unsupervised learning framework for depth and ego-motion estimation from monocular videos.
https://github.com/tinghuiz/SfMLearner

##### CVPR 2015 Tutorial for open-source SFM
Source material for the CVPR 2015 tutorial "Open Source Structure-from-Motion".
https://github.com/mleotta/cvpr2015-opensfm

##### Deep Permutation Equivariant Structure from Motion
https://github.com/drormoran/Equivariant-SFM

## Concepts in MATLAB
http://vis.uky.edu/~stewe/FIVEPOINT/

SFMedu: a MATLAB-based Structure-from-Motion system for education.
https://github.com/jianxiongxiao/SFMedu

Lorenzo Torresani's Structure-from-Motion MATLAB code:
https://github.com/scivision/em-sfm

https://github.com/vrabaud/sfm_toolbox

##### OpenMVG C++ library
https://github.com/openMVG/openMVG

##### OpenGV
A collection of computer vision methods for solving geometric vision problems.
https://github.com/laurentkneip/opengv

##### Multiview Geometry Library in C++11 (Theia)
http://theia-sfm.org/

##### Quaternion-Based Camera Pose Estimation From Matched Feature Points
https://sites.google.com/view/kavehfathian/code
Paper: https://arxiv.org/pdf/1704.02672.pdf

## Mapping
##### Direct Sparse Mapping
https://github.com/jzubizarreta/dsm

##### Volumetric 3D Mapping in Real-Time on a CPU
https://github.com/tum-vision/fastfusion


## Others

##### SLAM with IMU on Android
https://github.com/knagara/SLAMwithCameraIMUforAndroid

##### iOS (iPhone 7 Plus)
https://github.com/HKUST-Aerial-Robotics/VINS-Mobile

##### MATLAB
With good documentation on how to read images and more from the Kinect.
https://github.com/AutoSLAM/SLAM


# Datasets and benchmarking
## Curated list of datasets
https://github.com/youngguncho/awesome-slam-datasets

##### iGibson
A simulation environment providing fast visual rendering and physics simulation based on Bullet.
https://svl.stanford.edu/igibson/

##### EuRoC MAV Dataset
http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets
Visual-inertial datasets collected on board a Micro Aerial Vehicle (MAV). The datasets contain stereo images, synchronized IMU measurements, and accurate motion and structure ground truth.
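
A small sketch of loading EuRoC IMU data with NumPy, assuming the standard ASL folder layout (`mav0/imu0/data.csv` with a nanosecond timestamp followed by gyroscope and accelerometer columns):

```python
# Load EuRoC-format IMU data. The CSV header line starts with '#',
# which np.loadtxt skips as a comment by default.
import numpy as np

imu = np.loadtxt("mav0/imu0/data.csv", delimiter=",")
t = imu[:, 0] * 1e-9   # timestamps [ns] -> seconds
gyro = imu[:, 1:4]     # angular velocity [rad/s]
accel = imu[:, 4:7]    # linear acceleration [m/s^2]
print(f"{len(t)} IMU samples spanning {t[-1] - t[0]:.1f} s")
```
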
##### TUM VI Benchmark for Evaluating Visual-Inertial Odometry
https://vision.in.tum.de/data/datasets/visual-inertial-dataset
Different scenes for evaluating VI odometry.

##### ADVIO: An Authentic Dataset for Visual-Inertial Odometry
https://github.com/AaltoVision/ADVIO

##### PennCOSYVIO: a challenging Visual-Inertial Odometry benchmark
https://daniilidis-group.github.io/penncosyvio/
From the University of Pennsylvania, published at ICRA 2017.

##### ICL-NUIM
https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html
Benchmarking RGB-D, visual odometry, and SLAM algorithms.

##### Benchmarking Pose Estimation Algorithms
https://sites.google.com/view/kavehfathian/code/benchmarking-pose-estimation-algorithms


![Comparison table of visual-inertial methods](https://github.com/marknabil/SFM-Visual-SLAM/blob/master/vi_table.png)

##### Toolbox for quantitative trajectory evaluation of VO/VIO
https://github.com/uzh-rpg/rpg_trajectory_evaluation

##### Photorealistic simulator for VIO testing/benchmarking
https://github.com/mit-fast/FlightGoggles


# Machine Learning / Deep Learning based

[Learning monocular visual odometry with dense 3D mapping from dense 3D flow](https://arxiv.org/abs/1803.02286)

[DeepVO: A Deep Learning approach for Monocular Visual Odometry](https://arxiv.org/abs/1611.06069)

# Survey papers and articles

[Survey with year, sensors used, and best practices](https://nbviewer.jupyter.org/github/kafendt/List-of-SLAM-VO-algorithms/blob/master/SLAM_table.pdf)

[RGB-D ROS SLAM comparison](https://www.researchgate.net/publication/321895908_Experimental_evaluation_of_ROS_compatible_SLAM_algorithms_for_RGB-D_sensors)

[SLAM: past, present and future](https://arxiv.org/pdf/1606.05830.pdf)

[Imperial College ICCV 2015 workshop](http://wp.doc.ic.ac.uk/thefutureofslam/)

[Deep Auxiliary Learning for Visual Localization and Odometry](http://ais.informatik.uni-freiburg.de/publications/papers/valada18icra.pdf)

# Follow
## Robotics and Perception Group (UZH)
https://github.com/uzh-rpg

## TUM Computer Vision Group
https://github.com/tum-vision

## Handheld AR
http://studierstube.icg.tugraz.at/handheld_ar/cityofsights.php

## Another curated list
For SFM, 3D reconstruction, and V-SLAM.
https://github.com/openMVG/awesome_3DReconstruction_list