Awesome-Diffusion-based-SLAM
Paper Survey for Diffusion-based SLAM
https://github.com/KwanWaiPang/Awesome-Diffusion-based-SLAM
Depth Estimation
- Scene Splatter: Momentum 3D Scene Generation from Single Image with Video Diffusion Model - [website](https://shengjun-zhang.github.io/SceneSplatter/)
- Free360: Layered Gaussian Splatting for Unbounded 360-Degree View Synthesis from Extremely Sparse and Unposed Views
- Bolt3D: Generating 3D Scenes in Seconds - [website](https://szymanowiczs.github.io/bolt3d)
- Stable Virtual Camera: Generative View Synthesis with Diffusion Models - [code](https://github.com/Stability-AI/stable-virtual-camera)
- Multi-view Reconstruction via SfM-guided Monocular Depth Estimation
- Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models - [website](https://research.nvidia.com/labs/toronto-ai/difix3d/)
- Align3r: Aligned monocular depth estimation for dynamic videos - [code](https://github.com/jiah-cloud/Align3R)
- Cat3d: Create anything in 3d with multi-view diffusion models - [website](https://cat3d.github.io/)
- Diffusiondepth: Diffusion denoising approach for monocular depth estimation
- The surprising effectiveness of diffusion models for optical flow and monocular depth estimation - [website](https://diffusion-vision.github.io/)
- Monocular depth estimation using diffusion models - [website](https://depth-gen.github.io/)
- Mvdream: Multi-view diffusion for 3d generation
- Geo4D: Leveraging Video Generators for Geometric 4D Scene Reconstruction
- Can Video Diffusion Model Reconstruct 4D Geometry? (Sora3R) - [website](https://wayne-mai.github.io/publication/sora3r_arxiv_2025/)
- GenFusion: Closing the Loop between Reconstruction and Generation via Videos
- Learning temporally consistent video depth from video diffusion priors - [code](https://github.com/jiahao-shao1/ChronoDepth) | [website](https://xdimlab.github.io/ChronoDepth/)
- Depthcrafter: Generating consistent long depth sequences for open-world videos
- Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation - [code](https://github.com/prs-eth/marigold) | [website](https://marigoldmonodepth.github.io/)
- World-consistent Video Diffusion with Explicit 3D Modeling - [website](https://zqh0253.github.io/wvd/)
Pose Estimation
Matching
- Sd4match: Learning to prompt stable diffusion model for semantic matching
- Emergent correspondence from image diffusion
- A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence (SD+DINO) - [code](https://github.com/Junyi42/sd-dino) | [website](https://sd-complements-dino.github.io/)
- Diffusion hyperfeatures: Searching through time and space for semantic correspondence (DHF) - [code](https://github.com/diffusion-hyperfeatures/diffusion_hyperfeatures) | [website](https://diffusion-hyperfeatures.github.io/)
- MATCHA: Towards Matching Anything (SD+DINOv2)
Other Resources
- High-resolution image synthesis with latent diffusion models - Stable Diffusion
- Denoising diffusion implicit models - DDIM (see the sampling sketch after this list)
- Denoising diffusion probabilistic models - DDPM
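
As a quick reference for these foundational papers, below is a minimal, illustrative sketch of the DDPM-style noise schedule and the DDIM reverse (denoising) step that the diffusion-based methods above build on. The linear schedule, the placeholder `predict_noise` network, and all names are assumptions for illustration, not code from any listed repository.

```python
import numpy as np

# Illustrative DDPM-style noise schedule and DDIM-style reverse step.
# predict_noise is a stand-in for a trained epsilon-prediction network.

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear beta schedule (assumed, DDPM-style)
alphas_bar = np.cumprod(1.0 - betas)      # cumulative products alpha_bar_t

def predict_noise(x_t, t):
    """Placeholder for a trained noise-prediction network epsilon_theta(x_t, t)."""
    return np.zeros_like(x_t)

def ddim_step(x_t, t, t_prev, eta=0.0):
    """One DDIM update from timestep t to t_prev (eta=0: deterministic, eta=1: DDPM-like)."""
    eps = predict_noise(x_t, t)
    a_t = alphas_bar[t]
    a_prev = alphas_bar[t_prev]
    # Predict the clean sample x_0 from the current noisy sample and the predicted noise.
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    # Step stochasticity, interpolating between DDIM (eta=0) and DDPM-like (eta=1) sampling.
    sigma = eta * np.sqrt((1.0 - a_prev) / (1.0 - a_t)) * np.sqrt(1.0 - a_t / a_prev)
    # Deterministic direction pointing back toward x_t, plus optional fresh noise.
    dir_xt = np.sqrt(max(1.0 - a_prev - sigma**2, 0.0)) * eps
    return np.sqrt(a_prev) * x0_pred + dir_xt + sigma * np.random.randn(*x_t.shape)

# Toy usage: start from pure Gaussian noise and denoise over a coarse 50-step schedule.
x = np.random.randn(64, 64)
timesteps = np.linspace(T - 1, 0, 50, dtype=int)
for t, t_prev in zip(timesteps[:-1], timesteps[1:]):
    x = ddim_step(x, t, t_prev)
```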