English | [简体中文](doc/ReadmeChinese.md)

# SNE-RoadSeg2

## Introduction
SNE-RoadSeg2 is based on the official PyTorch implementation of [**SNE-RoadSeg: Incorporating Surface Normal Information into Semantic Segmentation for Accurate Freespace Detection**](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123750341.pdf), accepted at [ECCV 2020](https://eccv2020.eu/). The authors' [project page](https://sites.google.com/view/sne-roadseg) is also available.

In this repo, we provide the training and testing setup for the [KITTI Road Dataset](http://www.cvlibs.net/datasets/kitti/eval_road.php). We have tested our code with Python 3.7, CUDA 10.0, cuDNN 7, and PyTorch 1.1, and we provide a `Dockerfile` for building the Docker image we use.
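
Before running anything, you can quickly check that your local environment roughly matches the tested one; the snippet below is only a sanity-check sketch of ours, not part of the repository:

```python
import platform
import torch

# Print the versions relevant to the tested setup
# (Python 3.7, CUDA 10.0, cuDNN 7, PyTorch 1.1).
print("Python :", platform.python_version())
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)
print("cuDNN  :", torch.backends.cudnn.version())
print("GPU available:", torch.cuda.is_available())
```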

## Setup
Please set up the KITTI Road Dataset and the pretrained weights according to the following folder structure:
```
SNE-RoadSeg
|-- checkpoints
| |-- kitti
| | |-- kitti_net_RoadSeg.pth
|-- data
|-- datasets
| |-- kitti
| | |-- training
| | | |-- calib
| | | |-- depth_u16
| | | |-- gt_image_2
| | | |-- image_2
| | |-- validation
| | | |-- calib
| | | |-- depth_u16
| | | |-- gt_image_2
| | | |-- image_2
| | |-- testing
| | | |-- calib
| | | |-- depth_u16
| | | |-- image_2
|-- examples
...
```
`image_2`, `gt_image_2` and `calib` can be downloaded from the [KITTI Road Dataset](http://www.cvlibs.net/datasets/kitti/eval_road.php). We generate `depth_u16` from the LiDAR data provided in the KITTI Road Dataset, and it can be downloaded from [here](https://drive.google.com/file/d/1-GHge0p7JdTncmWedTTfmkrQvyNzatZo/view?usp=sharing). Note that `depth_u16` is stored in the `uint16` data format, and the real depth in meters can be obtained as ``double(depth_u16)/1000``. Moreover, the pretrained weights `kitti_net_RoadSeg.pth` for our SNE-RoadSeg-152 can be downloaded from [here](https://drive.google.com/file/d/1hvyeLo-C7k6c998xh0G1QOVxeHRlCr8i/view?usp=sharing).
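
For reference, the conversion to meters can be done as in the following sketch, assuming you read the 16-bit files with OpenCV (the file name below is just a placeholder):

```python
import cv2
import numpy as np

# Load a depth map without letting OpenCV convert it to 8-bit.
depth_u16 = cv2.imread("datasets/kitti/training/depth_u16/um_000000.png",
                       cv2.IMREAD_UNCHANGED)
assert depth_u16.dtype == np.uint16

# Real depth in meters, following double(depth_u16) / 1000.
depth_m = depth_u16.astype(np.float64) / 1000.0
print("Depth range (m):", depth_m.min(), "-", depth_m.max())
```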

## Usage

### Run an example ###
We provide one example in `examples`. To run it, you only need to set up the `checkpoints` folder as mentioned above. Then, run the following script:
```
bash ./scripts/run_example.sh
```
and you will see `normal.png`, `pred.png`, and `prob_map.png` in `examples`: `normal.png` is the surface normal estimation produced by our SNE; `pred.png` is the freespace prediction produced by our SNE-RoadSeg; and `prob_map.png` is the probability map predicted by our SNE-RoadSeg.
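
If you want to turn the probability map into a binary freespace mask yourself, a minimal sketch is given below; treating `prob_map.png` as an 8-bit grayscale image and thresholding at 0.5 are our assumptions, not part of the provided scripts:

```python
import cv2

# Load the predicted probability map (assumed to be 8-bit grayscale).
prob = cv2.imread("examples/prob_map.png", cv2.IMREAD_GRAYSCALE) / 255.0

# Threshold at 0.5 (an assumed value) to obtain a binary freespace mask.
mask = (prob > 0.5).astype("uint8") * 255
cv2.imwrite("examples/freespace_mask.png", mask)
```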

### Testing for KITTI submission
For KITTI submission, you need to set up the `checkpoints` and `datasets/kitti/testing` folders as mentioned above. Then, run the following script:
```
bash ./scripts/test.sh
```
and you will get the prediction results in `testresults`. After that, you can follow the [submission instructions](http://www.cvlibs.net/datasets/kitti/eval_road.php) to transform the prediction results into the bird's-eye view (BEV) perspective for submission.

If everything works fine, you will get a MaxF score of **96.74** for **URBAN**. Note that these results come from our re-implemented weights, and they are very close to those reported in the paper (a MaxF score of **96.75** for **URBAN**).

### Training on the KITTI dataset
For training, you need to set up the `datasets/kitti` folder as mentioned above. You can split the original training set into a new training set and a validation set as you like (one possible way to do this is sketched at the end of this section). Then, run the following script:
```
bash ./scripts/train.sh
```
and the weights will be saved in `checkpoints`, while the TensorBoard record containing the loss curves as well as the performance on the validation set will be saved in `runs`. Note that the `use-sne` flag in `train.sh` controls whether our SNE model is used, and it defaults to True. If you delete it, our RoadSeg will take depth images as input, and you also need to delete `use-sne` in `test.sh` to avoid errors when testing.
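
One possible way to carve out the validation split mentioned above is sketched here. The 80/20 ratio, the random seed, and the file-name matching are our assumptions, not something prescribed by the repository:

```python
import os
import random
import shutil

# Hypothetical helper: move a random 20% of the original KITTI training
# frames (image, ground truth, depth and calibration files) from
# datasets/kitti/training into datasets/kitti/validation.
root = "datasets/kitti"
subdirs = ["calib", "depth_u16", "gt_image_2", "image_2"]

random.seed(0)
frames = sorted(os.listdir(os.path.join(root, "training", "image_2")))
val_frames = random.sample(frames, k=len(frames) // 5)

for name in val_frames:
    cat, idx = os.path.splitext(name)[0].split("_")  # e.g. "um", "000000"
    for sub in subdirs:
        src_dir = os.path.join(root, "training", sub)
        dst_dir = os.path.join(root, "validation", sub)
        os.makedirs(dst_dir, exist_ok=True)
        for f in os.listdir(src_dir):
            # Ground-truth files may carry an extra tag (e.g. "um_road_000000"),
            # so match on the category prefix and frame index instead of the
            # full file name.
            if f.startswith(cat + "_") and idx in f:
                shutil.move(os.path.join(src_dir, f), os.path.join(dst_dir, f))
```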

## Citation

[paper] | [video]

## Acknowledgement
Our code is inspired by [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix), and we thank [Jun-Yan Zhu](https://github.com/junyanz) for their great work.