# ESTDepth: Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks (CVPR 2021)

### [Project Page](https://www.xxlong.site/ESTDepth/) | [Video]() | [Paper](https://arxiv.org/pdf/2011.13118) | [Data](#dataset)

<img src='docs/images/teaser.png'/>

We present a novel method for multi-view depth estimation from a single video, a critical task in applications such as perception, reconstruction, and robot navigation.
Although previous learning-based methods have demonstrated compelling results, most estimate depth maps of individual video frames independently, without taking into account the strong geometric and temporal coherence among frames.
Moreover, current state-of-the-art (SOTA) models mostly adopt fully 3D convolutional networks for cost regularization, which incur high computational cost and thus limit deployment in real-world applications.
Our method achieves temporally coherent depth estimation by using a novel Epipolar Spatio-Temporal (EST) transformer to explicitly associate geometric and temporal correlation across multiple estimated depth maps.
Furthermore, to reduce the computational cost, and inspired by recent Mixture-of-Experts models, we design a compact hybrid network consisting of a 2D context-aware network and a 3D matching network, which learn 2D context information and 3D disparity cues separately.

This is the official repo for the paper:

* [Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks (Long et al., <span style="color:red">CVPR 2021</span>)](https://arxiv.org/pdf/2011.13118).


## Table of contents
-----
  * [Installation](#requirements-and-installation)
  * [Dataset](#dataset)
  * [Usage](#train-a-new-model)
    + [Training](#train-a-new-model)
    + [Evaluation](#evaluation)
  * [License](#license)
  * [Citation](#citation)
------

## Requirements and Installation

This code is implemented in PyTorch.

The code has been tested on the following system:

* Python 3.6
* PyTorch 1.2.0
* [Nvidia apex library](https://github.com/NVIDIA/apex) (optional)
* Nvidia GPU (RTX 2080 Ti) with CUDA 10.1


To install, first clone this repo, then create the environment with all dependencies:

```bash
conda env create -f environment.yml
```

Optionally, install apex to enable synchronized batch normalization:

```bash
git clone https://github.com/NVIDIA/apex.git
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

## Dataset

If you use any of these datasets in your work, please also cite the original papers.

Dataset | Notes on Dataset Split
---|---
[ScanNet](http://www.scan-net.org/) | see ./data/scannet_split/
[7scenes](https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/) | see ./data/7scenes/test.txt

## Train a new model

During training, our model takes a sequence of 5 frames as input, with a total batch size of 4 sequences (one sequence per GPU) across 4 GPUs.
We use the following command to train a model:

```bash
python -m torch.distributed.launch --nproc_per_node=4 train_hybrid.py \
--using_apex --sync_bn \
--datapath /userhome/35/xxlong/dataset/scannet_whole/ \
--testdatapath /userhome/35/xxlong/dataset/scannet_test/ \
--reloadscan True \
--batch_size 1 --seq_len 5 --mode train --summary_freq 10 \
--epochs 7 --lr 0.00004 --lrepochs 2,4,6,8:2 \
--logdir ./logs/hybrid_res50_ndepths64 \
--resnet 50 --ndepths 64 --IF_EST_transformer False \
--depth_min 0.1 --depth_max 10. | tee -a ./logs/hybrid_res50_ndepths64/log.txt
```

Alternatively, simply run the provided script:

```bash
bash train_hybrid.sh
```

## Evaluation

Once the model is trained, the following commands evaluate test images given the [trained model](https://drive.google.com/file/d/12NGc7mqT97yTZY9ZLe2oEEhQRiSuQHp9/view?usp=sharing).

Our model has two testing modes: ``Joint`` and ``ESTM``.

For ``Joint`` mode, run:

```bash
bash eval_hybrid.sh
```

For ``ESTM`` mode, run:

```bash
bash eval_hybrid_seq.sh
```

## License

ESTDepth is MIT-licensed.
The license applies to the pre-trained models as well.

## Citation

Please cite as:

```bibtex
@inproceedings{long2021multi,
  title={Multi-view depth estimation using epipolar spatio-temporal networks},
  author={Long, Xiaoxiao and Liu, Lingjie and Li, Wei and Theobalt, Christian and Wang, Wenping},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={8258--8267},
  year={2021}
}
```
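As a side note on the training flags: `--ndepths 64 --depth_min 0.1 --depth_max 10.` define the depth hypotheses swept by the 3D matching network. The sketch below is illustrative only (these helpers are not part of this repo) and shows two common ways such hypotheses are generated in plane-sweep-style depth networks: uniform in depth, or uniform in inverse depth so that sampling is denser near the camera.

```python
def uniform_depth_hypotheses(depth_min, depth_max, ndepths):
    # Depth planes uniformly spaced over [depth_min, depth_max].
    step = (depth_max - depth_min) / (ndepths - 1)
    return [depth_min + i * step for i in range(ndepths)]


def inverse_depth_hypotheses(depth_min, depth_max, ndepths):
    # Depth planes uniformly spaced in inverse depth (1/d), which
    # samples more densely close to the camera where accuracy matters.
    inv_step = (1.0 / depth_min - 1.0 / depth_max) / (ndepths - 1)
    return [1.0 / (1.0 / depth_min - i * inv_step) for i in range(ndepths)]


# Hypotheses matching the training command's flags (illustrative values):
planes = uniform_depth_hypotheses(0.1, 10.0, 64)       # 0.1 m ... 10.0 m
inv_planes = inverse_depth_hypotheses(0.1, 10.0, 64)   # denser near 0.1 m
```

Which sampling scheme a given model uses is a design choice; check the repo's cost-volume construction code to see what `--ndepths` actually controls here.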