{"id":13497341,"url":"https://github.com/mjiUST/SurfaceNet-plus","last_synced_at":"2025-03-28T21:32:23.459Z","repository":{"id":215966881,"uuid":"265996799","full_name":"mjiUST/SurfaceNet-plus","owner":"mjiUST","description":"2020 TPAMI, SurfaceNet+ is a volumetric learning framework for the very sparse MVS. The sparse-MVS benchmark is maintained here. Authors: Mengqi Ji#, Jinzhi Zhang#, Qionghai Dai, Lu Fang.","archived":false,"fork":false,"pushed_at":"2020-07-25T09:06:32.000Z","size":9539,"stargazers_count":78,"open_issues_count":3,"forks_count":8,"subscribers_count":14,"default_branch":"master","last_synced_at":"2024-11-22T09:31:40.191Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mjiUST.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-05-22T02:10:37.000Z","updated_at":"2024-01-04T16:46:13.000Z","dependencies_parsed_at":"2024-01-18T23:02:45.522Z","dependency_job_id":null,"html_url":"https://github.com/mjiUST/SurfaceNet-plus","commit_stats":null,"previous_names":["mjiust/surfacenet-plus"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mjiUST%2FSurfaceNet-plus","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mjiUST%2FSurfaceNet-plus/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mjiUST%2FSurfaceNet-plus/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mjiUST%2FSurfaceNet-plus/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mjiUST","download_url":"https://codeload.github.com/mjiUST/SurfaceNet-plus/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246105540,"owners_count":20724327,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T20:00:29.201Z","updated_at":"2025-03-28T21:32:22.410Z","avatar_url":"https://github.com/mjiUST.png","language":null,"readme":"# SurfaceNet+\n- An End-to-end 3D Neural Network for Very Sparse MVS. \n    * 2020TPAMI [early access link](https://ieeexplore.ieee.org/document/9099504).\n    * or Arxiv [preprint version](https://www.researchgate.net/publication/341647549_SurfaceNet_An_End-to-end_3D_Neural_Network_for_Very_Sparse_Multi-view_Stereopsis/figures).\n- **Key contributions**\n    1. Proposed a Sparse-MVS benchmark (under construction)\n        * Comprehensive evaluation on the datasets: [DTU](http://roboimagedata.compute.dtu.dk/?page_id=36), [Tanks and Temples](https://www.tanksandtemples.org/), etc.\n    2. Proposed a **trainable occlusion-aware** view selection scheme for the volumetric MVS method, e.g., [SurfaceNet](https://github.com/mjiUST/SurfaceNet)[5]. \n    3. Analysed the advantages of the volumetric methods, e.g., [SurfaceNet](https://github.com/mjiUST/SurfaceNet)[5] and SurfaceNet+, on the **Sparse-MVS problem** over the depth-fusion methods, e.g., [Gipuma](https://github.com/kysucix/gipuma) [6], [R-MVSNet](https://github.com/YoYo000/MVSNet)[7], [Point-MVSNet](https://github.com/callmeray/PointMVSNet)[8], and [COLMAP](https://github.com/colmap/colmap)[9].\n\n# [Sparse-MVS Benchmark](http://sparse-mvs.com) \n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"800\" src=\"figures/teaser.png\"\u003e\n\u003c/p\u003e\n\n## (1) [Sparse-MVS of the DTU dataset](http://sparse-mvs.com/leaderboard.html)\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"500\" src=\"figures/teaser.jpg\"\u003e\n  \n  **Fig.1**: Illustration of a very sparse MVS setting using only $1/7$ of the camera views, i.e., $\\{v_i\\}_{i=1,8,15,22,...}$, to recover the model 23 in the DTU dataset [10]. Compared with the state-of-the-art methods, the proposed SurfaceNet+ provides much complete reconstruction, especially around the boarder region captured by very sparse views.\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"500\" src=\"figures/DTU.jpg\"\u003e\n  \n  **Fig.2**: Comparison with the existing methods in the DTU Dataset [10] with different sparsely sampling strategy. When Sparsity = 3 and Batchsize = 2, the chosen camera indexes are 1,2 / 4,5 / 7,8 / 10,11 / .... SurfaceNet+ constantly outperforms the state-of-the-art methods at all the settings, especially at the very sparse scenario.\n\u003c/p\u003e\n\n## (2) [Sparse-MVS of the T\u0026T dataset](http://sparse-mvs.com/leaderboard.html)\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"500\" src=\"figures/T\u0026T.jpg\"\u003e\n  \n  **Fig.3**: Results of a tank model in the Tanks and Temples 'intermediate' set [23] compared with R-MVSNet [7] and COLMAP [9], which demonstrate the power of SurfaceNet+ of high recall prediction in the sparse-MVS setting.\n\u003c/p\u003e\n\n\n\n# Citing\n\nIf you find SurfaceNet+, the Sparse-MVS benchmark, or [SurfaceNet](https://github.com/mjiUST/SurfaceNet) useful in your research, please consider citing:\n\n    @article{ji2020surfacenet_plus,\n        title={SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis},\n        author={Ji, Mengqi and Zhang, Jinzhi and Dai, Qionghai and Fang, Lu},\n        journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},\n        year={2020},\n        publisher={IEEE}\n   }\n\n    @inproceedings{ji2017surfacenet,\n        title={SurfaceNet: An End-To-End 3D Neural Network for Multiview Stereopsis},\n        author={Ji, Mengqi and Gall, Juergen and Zheng, Haitian and Liu, Yebin and Fang, Lu},\n        booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},\n        pages={2307--2315},\n        year={2017}\n    }\n\n","funding_links":[],"categories":["3DVision"],"sub_categories":["Depth/StereoMatching"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FmjiUST%2FSurfaceNet-plus","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FmjiUST%2FSurfaceNet-plus","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FmjiUST%2FSurfaceNet-plus/lists"}