https://github.com/jiarenchang/realtimestereo
Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices (ACCV, 2020)
- Host: GitHub
- URL: https://github.com/jiarenchang/realtimestereo
- Owner: JiaRenChang
- License: gpl-3.0
- Created: 2020-09-26T03:37:15.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2022-05-19T09:08:19.000Z (over 2 years ago)
- Last Synced: 2023-11-07T18:17:08.334Z (about 1 year ago)
- Topics: psmnet, pytorch, real-time, stereo-matching, stereo-vision
- Language: Python
- Homepage:
- Size: 85 KB
- Stars: 139
- Watchers: 5
- Forks: 26
- Open Issues: 10
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices
This repository contains the code (in PyTorch) for "[Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices](https://openaccess.thecvf.com/content/ACCV2020/papers/Chang_Attention-Aware_Feature_Aggregation_for_Real-time_Stereo_Matching_on_Edge_Devices_ACCV_2020_paper.pdf)" paper (ACCV 2020) by [Jia-Ren Chang](https://jiarenchang.github.io/), [Pei-Chun Chang](https://scholar.google.com/citations?user=eJUcMrQAAAAJ&hl=zh-TW) and [Yong-Sheng Chen](https://people.cs.nctu.edu.tw/~yschen/).
The code is mainly adapted from [PSMNet](https://github.com/JiaRenChang/PSMNet/).
### Citation
```
@InProceedings{Chang_2020_ACCV,
author = {Chang, Jia-Ren and Chang, Pei-Chun and Chen, Yong-Sheng},
title = {Attention-Aware Feature Aggregation for Real-time Stereo Matching on Edge Devices},
booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
month = {November},
year = {2020}
}
```
### Train
As an example, use the following command to train RTStereoNet on Scene Flow:
```
python main.py --maxdisp 192 \
               --model RTStereoNet \
               --datapath (your scene flow data folder) \
               --epochs 10 \
               --loadmodel (optional) \
               --savemodel (path for saving model)
```
### Pretrained Model
KITTI 2015 Pretrained Model [Google Drive](https://drive.google.com/file/d/12EQKjntE_Vi6m9vpSzJRtuzDCRJRmYoV/view?usp=sharing)
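The training flags shown above suggest a command-line interface along these lines. This is a hypothetical sketch built only from the flag names in the README; the defaults, help strings, and structure of the actual `main.py` may differ.

```python
# Hypothetical reconstruction of the CLI behind the train command above.
# Flag names come from the README; defaults and help text are assumptions.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Train RTStereoNet")
    parser.add_argument("--maxdisp", type=int, default=192,
                        help="maximum disparity range")
    parser.add_argument("--model", default="RTStereoNet",
                        help="model architecture to train")
    parser.add_argument("--datapath", required=True,
                        help="path to the Scene Flow dataset folder")
    parser.add_argument("--epochs", type=int, default=10,
                        help="number of training epochs")
    parser.add_argument("--loadmodel", default=None,
                        help="optional checkpoint to resume from")
    parser.add_argument("--savemodel", default="./",
                        help="directory for saving model checkpoints")
    return parser

# Parse the example invocation from the Train section
# (dataset path here is a placeholder).
args = build_parser().parse_args([
    "--maxdisp", "192",
    "--model", "RTStereoNet",
    "--datapath", "/data/sceneflow",
    "--epochs", "10",
    "--savemodel", "./checkpoints",
])
print(args.maxdisp, args.model, args.epochs)
```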