Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/svip-lab/indoor-sfmlearner
[ECCV'20] Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation
depth-estimation eccv2020 extract-superpixel indoor nyuv2 pose-estimation pytorch scannet self-supervised unsupervised-learning
- Host: GitHub
- URL: https://github.com/svip-lab/indoor-sfmlearner
- Owner: svip-lab
- Created: 2020-07-15T03:33:36.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2022-11-07T20:23:12.000Z (about 2 years ago)
- Last Synced: 2023-10-20T23:18:20.620Z (about 1 year ago)
- Topics: depth-estimation, eccv2020, extract-superpixel, indoor, nyuv2, pose-estimation, pytorch, scannet, self-supervised, unsupervised-learning
- Language: Python
- Homepage:
- Size: 1.7 MB
- Stars: 142
- Watchers: 9
- Forks: 24
- Open Issues: 4
Metadata Files:
- Readme: README.md
# Indoor SfMLearner
PyTorch implementation of our ECCV2020 paper:
[P2Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation](https://arxiv.org/pdf/2007.07696.pdf)
Zehao Yu\*,
Lei Jin\*,
[Shenghua Gao](http://sist.shanghaitech.edu.cn/sist_en/2018/0820/c3846a31775/page.htm)(\* Equal Contribution)
## Getting Started
### Installation
```bash
pip install -r requirements.txt
```
Then install PyTorch with:
```bash
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
```
PyTorch version >= 0.4.1 should work well.

### Download pretrained model
Please download the pretrained model from [Onedrive](https://onedrive.live.com/?authkey=%21ANXK7icE%2D33VPg0&id=C43E510B25EDDE99%21106&cid=C43E510B25EDDE99) and extract it:
```bash
tar -xzvf ckpts.tar.gz
rm ckpts.tar.gz
```

### Prediction on single image
Run the following command to predict on a single image:
```bash
python inference_single_image.py --image_path=/path/to/image
```
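To predict depth for a whole folder of images, a small driver like the following should work. This is a sketch: `run_on_folder`, the image-extension filter, and the interpreter/script defaults are assumptions, not part of the repository.

```python
import subprocess
from pathlib import Path

def run_on_folder(folder, script="inference_single_image.py", python="python"):
    """Invoke the single-image inference script once per image in `folder`.

    Hypothetical helper: wraps the repo's CLI rather than importing its code.
    Returns the list of image paths that were processed.
    """
    images = sorted(p for p in Path(folder).iterdir()
                    if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
    for img in images:
        subprocess.run([python, script, f"--image_path={img}"], check=True)
    return images
```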
By default, the script saves the predicted depth map to the same folder as the input image.

## Evaluation
Download the testing data from [Onedrive](https://onedrive.live.com/?authkey=%21ANXK7icE%2D33VPg0&id=C43E510B25EDDE99%21106&cid=C43E510B25EDDE99) and put it under ./data.
```bash
cd data
tar -xzvf nyu_test.tar.gz
tar -xzvf scannet_test.tar.gz
tar -xzvf scannet_pose.tar.gz
cd ../
```

### NYUv2 Depth
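The evaluation script reports the standard monocular-depth error metrics. For reference, a minimal sketch of two of them, assuming the usual definitions from the literature (`depth_metrics` is illustrative, not code from this repository):

```python
def depth_metrics(gt, pred):
    """Absolute relative error and the delta < 1.25 threshold accuracy,
    computed over flat lists of ground-truth and predicted depths.
    Illustrative only; the repo's evaluation script is authoritative."""
    abs_rel = sum(abs(g - p) / g for g, p in zip(gt, pred)) / len(gt)
    ratios = [max(g / p, p / g) for g, p in zip(gt, pred)]
    a1 = sum(r < 1.25 for r in ratios) / len(ratios)
    return abs_rel, a1
```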
```bash
CUDA_VISIBLE_DEVICES=1 python evaluation/nyuv2_eval_depth.py \
--data_path ./data \
--load_weights_folder ckpts/weights_5f \
--post_process
```

### NYUv2 Normal
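A note on `--post_process` (enabled in the depth evaluation above, commented out below): in Monodepth2-derived code, this flag typically averages the prediction for an image with the re-flipped prediction of its horizontal mirror. A rough sketch of the idea on a single row, assuming this repo follows the Monodepth2 convention:

```python
def flip_average(disp, disp_of_flipped):
    """Average a disparity row with the re-flipped prediction of the
    horizontally mirrored input. Simplified sketch: Monodepth2-style
    post-processing additionally blends the two with a spatial weight mask."""
    refolded = list(reversed(disp_of_flipped))
    return [0.5 * (a + b) for a, b in zip(disp, refolded)]
```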
```bash
CUDA_VISIBLE_DEVICES=1 python evaluation/nyuv2_eval_norm.py \
--data_path ./data \
--load_weights_folder ckpts/weights_5f \
# --post_process
```

### ScanNet Depth
```bash
CUDA_VISIBLE_DEVICES=1 python evaluation/scannet_eval_depth.py \
--data_path ./data/scannet_test \
--load_weights_folder ckpts/weights_5f \
--post_process
```

### ScanNet Pose
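The pose evaluation compares predicted camera poses against ScanNet ground truth. One common summary number for such comparisons is the absolute trajectory error; below is a simplified sketch that skips the trajectory-alignment step real evaluations usually perform (`ate_rmse` is illustrative, not the repo's metric code):

```python
import math

def ate_rmse(gt_xyz, pred_xyz):
    """RMSE over corresponding 3D positions of two trajectories.
    Simplified: real ATE computations first align the trajectories
    (e.g. with a least-squares rigid fit)."""
    sq_errors = [sum((g - p) ** 2 for g, p in zip(G, P))
                 for G, P in zip(gt_xyz, pred_xyz)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```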
```bash
CUDA_VISIBLE_DEVICES=1 python evaluation/scannet_eval_pose.py \
--data_path ./data/scannet_pose \
--load_weights_folder ckpts/weights_5f \
--frame_ids 0 1
```

## Training
First, download [NYU Depth V2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html) from the official website and unzip the raw data to DATA_PATH.

### Extract Superpixel
Run the following command to extract superpixels:
```bash
python extract_superpixel.py --data_path DATA_PATH --output_dir ./data/segments
```

### 3-frames
Run the following command to train our network:
```bash
CUDA_VISIBLE_DEVICES=1 python train_geo.py \
--model_name 3frames \
--data_path DATA_PATH \
--val_path ./data \
--segment_path ./data/segments \
--log_dir ./logs \
--lambda_planar_reg 0.05 \
--batch_size 12 \
--scales 0 \
--frame_ids_to_train 0 -1 1
```

### 5-frames
Initializing from the pretrained 3-frame model gives better results.
```bash
CUDA_VISIBLE_DEVICES=1 python train_geo.py \
--model_name 5frames \
--data_path DATA_PATH \
--val_path ./data \
--segment_path ./data/segments \
--log_dir ./logs \
--lambda_planar_reg 0.05 \
--batch_size 12 \
--scales 0 \
--load_weights_folder FOLDER_OF_3FRAMES_MODEL \
--frame_ids_to_train 0 -2 -1 1 2
```

## Acknowledgements
This project is built upon [Monodepth2](https://github.com/nianticlabs/monodepth2). We thank the authors of Monodepth2 for their great work and repository.

## License
TBD

## Citation
Please cite our paper if you use this code:
```
@inproceedings{IndoorSfMLearner,
author = {Zehao Yu and Lei Jin and Shenghua Gao},
title = {P$^{2}$Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation},
booktitle = {ECCV},
year = {2020}
}
```