https://github.com/choyingw/scadc-depthcompletion

ICASSP 2021: Scene Completeness-Aware Lidar Depth Completion for Driving Scenario
# SCADC-DepthCompletion
Scene Completeness-Aware Lidar Depth Completion for Driving Scenario, ICASSP 2021

Cho-Ying Wu and Ulrich Neumann, University of Southern California

The full example video is available at https://www.youtube.com/watch?v=FQDTdpMPKxs

Paper: https://arxiv.org/abs/2003.06945

Project page: https://choyingw.github.io/works/SCADC/index.html

**Advantages:**

👍 **First work to address scene completeness in depth completion**

👍 **Sensor fusion of LiDAR and stereo cameras**

👍 **Structured upper-scene depth**

👍 **Precise lower-scene depth**

# Prerequisite

- Ubuntu 16.04 / 20.04
- Python 3
- PyTorch 1.5+ (tested on 1.5; should be compatible with later versions)
- NVIDIA GPU + CUDA cuDNN
- Other common libraries: matplotlib, cv2, PIL
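As a quick sanity check of the PyTorch requirement above, a small helper can compare version strings; this is a hypothetical snippet (the minimum version tuple is taken from the list above), not part of the repository:

```python
def meets_min_version(version, minimum=(1, 5)):
    """Return True if a dotted version string satisfies the minimum tuple.

    Hypothetical helper for the PyTorch 1.5+ requirement listed above;
    ignores any non-numeric suffix such as '+cu101'.
    """
    parts = []
    for token in version.split("+")[0].split("."):
        if not token.isdigit():
            break
        parts.append(int(token))
    return tuple(parts) >= minimum

# Usage (only if torch is installed):
# import torch
# assert meets_min_version(torch.__version__)
```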

# Data Preparation

Clone the repo first.

Then, download the preprocessed data: train (142G) and val (11G). The data includes a training/val split that follows KITTI Depth Completion and all pre-processed inputs required for this work.

Extract the files under the repository. The structure should look like 'SCADC-DepthCompletion/Data/train' and 'SCADC-DepthCompletion/Data/val'.

The provided \*.h5 files contain sparse depth (D), semi-dense depth (D_semi), left-right image pairs (I_L and I_R), depth completed by SSDC (depth_c), and disparity from PSMNet (disp_c).
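As a sketch of what to expect inside each sample, the hypothetical helper below checks that a loaded sample exposes the keys documented above. It works on any dict-like object, including an `h5py.File` opened in read mode (h5py is assumed installed for the usage example, but the helper itself does not require it):

```python
# Dataset keys documented above for each *.h5 sample.
EXPECTED_KEYS = {"D", "D_semi", "I_L", "I_R", "depth_c", "disp_c"}

def missing_keys(sample):
    """Return the documented dataset keys absent from a dict-like sample."""
    return EXPECTED_KEYS - set(sample.keys())

# Usage with h5py (assumed installed):
# import h5py
# with h5py.File("Data/val/0/00000.h5", "r") as f:
#     assert not missing_keys(f), missing_keys(f)
```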

# Evaluation/Training Commands:

Our pretrained weight is provided under './test_ckpt/kitti/'. To quickly obtain our scene completeness-aware depth maps, run the evaluation command; it saves frame-by-frame results under './vis/'. Download the "val" data split from the Data Preparation section and unzip it under 'data/'. The folder structure and the evaluation command should be:

```
.
├── data
│   └── val
│       ├── 0
│       │   ├── 00000.h5
│       │   ......
```

```
python3 evaluate.py --name kitti --checkpoints_dir './test_ckpt' --test_path ./data
```
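Before running the evaluation, a quick check of the expected layout can save a failed run. This is a hypothetical helper mirroring the folder structure shown above, not part of the repository:

```python
from pathlib import Path

def check_val_layout(root="data"):
    """Return sequence directories under root/val that contain .h5 frames.

    Hypothetical sanity check for the layout expected by evaluate.py.
    """
    val = Path(root) / "val"
    if not val.is_dir():
        raise FileNotFoundError(f"expected '{val}' (see Data Preparation)")
    seqs = [d for d in sorted(val.iterdir()) if d.is_dir() and any(d.glob("*.h5"))]
    if not seqs:
        raise FileNotFoundError(f"no sequence folders with .h5 files under '{val}'")
    return seqs
```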

Use the following training command if you want to train the network yourself:

```
python3 train_depth_complete.py --name kitti --checkpoints_dir [preferred saving ckpt path] --train_path [train_data_dir] --test_path [test_data_dir]
```

\[train_data_dir\]: should be 'Data/train' if you follow the recommended folder structure
\[test_data_dir\]: should be 'Data/val' if you follow the recommended folder structure

# Customized depth completion and stereo estimation base methods

Note that we use SSDC, and disparity from PSMNet.

The pre-processed data is stored in the \*.h5 files (keys: 'depth_c' and 'disp_c'). If you want to build completion results on different base methods, please prepare that data yourself and replace the corresponding datasets in the \*.h5 files.
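A minimal sketch of that replacement step, assuming h5py: `swap_base_outputs` is a hypothetical helper that works on any dict-like sample, including an `h5py.File` opened in 'r+' mode, where the old dataset must be deleted before an array of a different shape can be written under the same key:

```python
def swap_base_outputs(sample, depth_c=None, disp_c=None):
    """Replace the base-method outputs ('depth_c', 'disp_c') in place.

    `sample` is any dict-like object; with h5py, deleting the old
    dataset first allows storing a new array of a different shape.
    """
    for key, array in (("depth_c", depth_c), ("disp_c", disp_c)):
        if array is not None:
            if key in sample:
                del sample[key]
            sample[key] = array
    return sample

# Usage with h5py (assumed installed):
# import h5py
# with h5py.File("Data/train/0/00000.h5", "r+") as f:
#     swap_base_outputs(f, depth_c=my_depth, disp_c=my_disp)
```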

If you find our work useful, please consider citing it:

```
@inproceedings{wu2021scene,
  title={Scene Completeness-Aware Lidar Depth Completion for Driving Scenario},
  author={Wu, Cho-Ying and Neumann, Ulrich},
  booktitle={ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={2490--2494},
  year={2021},
  organization={IEEE}
}
```

# Acknowledgement

The code development is based on CFCNet, Self-Supervised Depth Completion, and PSMNet.