Awesome-Learning-MVS
📑 A list of awesome learning-based multi-view stereo papers
https://github.com/XYZ-qiyh/Awesome-Learning-MVS
Awesome-Learning-MVS (Methods and Datasets)
Learning-based MVS Methods
- [paper] [[PAMI](https://ieeexplore.ieee.org/document/9099504)]
- [paper] (Note: *voxel occupancy grids* decoded from the 3D grid, or *per-view depth maps* decoded after a projection operation)
- [paper] (Note: learns to measure multi-image patch similarity, NOT an end-to-end learned MVS pipeline)
- [paper
- [paper
- [paper
- [paper] [[Github](https://github.com/callmeray/PointMVSNet)] [[T-PAMI](https://ieeexplore.ieee.org/abstract/document/9076298)] (Point-MVSNet performs multi-view stereo reconstruction in a *coarse-to-fine* fashion, learning to predict the 3D flow of each point toward the ground-truth surface based on geometry priors and 2D image appearance cues)
- [paper
- [paper
- [paper
- [paper - stereo)]
- [paper
- [paper - MVSNet)]
- [paper] [[Github](https://github.com/svip-lab/FastMVSNet)]
- [paper
- [paper] (REDNet) [[data](http://gpcv.whu.edu.cn/data/WHU_MVS_Stereo_dataset.html)]
- [paper - yhw/PVAMVSNet)]
- [paper - yhw/D2HC-RMVSNet)]
- [paper - MVSNet)]
- [paper
- [paper
- [paper] [[Github](https://github.com/QT-Zhu/AA-RMVSNet)]
- [paper
- [paper
- [paper
- [paper
- [paper
- [paper] [[Github](https://github.com/Airobin329/RayMVSNet)]
- [paper - Parametric_Depth_Distribution_CVPR_2022_supplemental.pdf)]
- [paper] [[Github](https://github.com/MegviiRobot/TransMVSNet)]
- [paper - Net)]
- [paper] [[Github](https://github.com/bdwsq1996/Effi-MVS)]
- [paper] [[Github](https://github.com/zhenpeiyang/MVS2D)]
- [paper
- [paper - vl/CER-MVS)]
- [paper
- [paper
- [paper - MVSNet)]
- [paper
- [paper
- [paper
- [paper
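Most of the depth-map-based methods listed above (MVSNet and its many descendants, e.g. FastMVSNet, AA-RMVSNet, TransMVSNet) share the same core step: build a plane-sweep cost volume over sampled depth hypotheses, regularize it, and regress per-pixel depth with a *soft argmin*. A minimal NumPy sketch of that regression step, for illustration only (not any specific paper's implementation):

```python
import numpy as np

def soft_argmin_depth(cost_volume, depth_hypotheses):
    """Soft-argmin depth regression over a plane-sweep cost volume.

    cost_volume: (D, H, W) matching cost per depth hypothesis (lower = better).
    depth_hypotheses: (D,) depths of the fronto-parallel sweep planes.
    Returns: (H, W) per-pixel expected depth.
    """
    logits = -cost_volume                                # low cost -> high score
    logits -= logits.max(axis=0, keepdims=True)          # numerical stability
    prob = np.exp(logits)
    prob /= prob.sum(axis=0, keepdims=True)              # per-pixel softmax over the D hypotheses
    return np.tensordot(depth_hypotheses, prob, axes=1)  # expectation over hypotheses
```

Because the expectation is differentiable (unlike a hard argmin), the whole pipeline can be trained end-to-end with a depth-map loss; the coarse-to-fine methods above simply repeat this step on progressively narrower hypothesis ranges.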
~~To Be Continued~~
Multi-view Stereo Benchmark
- [paper
- [paper
- paper: [CVPR2014] [[website](http://roboimagedata.compute.dtu.dk/?page_id=36)] [[Eval code](https://github.com/Todd-Qi/MVSNet-PyTorch/tree/master/evaluations/dtu)] [[video](https://www.bilibili.com/video/BV1k5411G7NA/)] (DTU)
- [paper] [[website](https://www.tanksandtemples.org/)] [[Github](https://github.com/intel-isl/TanksAndTemples)] [[leaderboard](https://www.tanksandtemples.org/leaderboard/)] (Tanks and Temples)
- [paper] [[website](https://www.eth3d.net/)] [[Github](https://github.com/ETH3D)] (ETH3D)
- [paper] [[Github](https://github.com/YoYo000/BlendedMVS)] [[visual](https://github.com/kwea123/BlendedMVS_scenes)] (BlendedMVS)
- [paper
- [paper
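The benchmarks above score point-cloud reconstructions mainly along two axes: *accuracy* (distance from each reconstructed point to the ground-truth surface) and *completeness* (distance from each ground-truth point to the reconstruction). A brute-force NumPy sketch of these two distances; the official DTU / Tanks and Temples evaluation code additionally applies occlusion masks, distance thresholds, and KD-tree nearest-neighbor search:

```python
import numpy as np

def accuracy_completeness(pred_pts, gt_pts):
    """DTU-style point-cloud metrics (simplified sketch).

    pred_pts: (N, 3) reconstructed points; gt_pts: (M, 3) ground-truth points.
    accuracy     = mean distance from each reconstructed point to its nearest GT point
    completeness = mean distance from each GT point to its nearest reconstructed point
    Brute force is O(N*M); real evaluation code uses KD-trees over millions of points.
    """
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)  # (N, M) pairwise distances
    accuracy = d.min(axis=1).mean()      # pred -> GT
    completeness = d.min(axis=0).mean()  # GT -> pred
    return accuracy, completeness
```

The two metrics pull in opposite directions (aggressively filtered clouds score well on accuracy but poorly on completeness), which is why leaderboards usually also report their mean ("overall" on DTU, F-score on Tanks and Temples).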
Large-scale Real-world Scenes
Other similar collections
Future works (Personal Perspective)