# Restoring Vision in Adverse Weather Conditions with Patch-Based Denoising Diffusion Models

This is the code repository of the following [paper](https://arxiv.org/pdf/2207.14626.pdf) to train and perform inference with patch-based diffusion models for image restoration under adverse weather conditions.

"Restoring Vision in Adverse Weather Conditions with Patch-Based Denoising Diffusion Models"\
Ozan Özdenizci, Robert Legenstein\
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023.\
https://doi.org/10.1109/TPAMI.2023.3238179

## Datasets

We perform experiments for image desnowing on [Snow100K](https://sites.google.com/view/yunfuliu/desnownet), combined image deraining and dehazing on [Outdoor-Rain](https://github.com/liruoteng/HeavyRainRemoval), and raindrop removal on the [RainDrop](https://github.com/rui1996/DeRaindrop) dataset. To train for multi-weather restoration, we used the AllWeather training set from [TransWeather](https://github.com/jeya-maria-jose/TransWeather), which is composed of subsets of training images from these three benchmarks.

## Saved Model Weights

We share a pre-trained diffusive **multi-weather** restoration model [WeatherDiff64](https://igi-web.tugraz.at/download/OzdenizciLegensteinTPAMI2023/WeatherDiff64.pth.tar) with the network configuration in `configs/allweather.yml`.
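
The checkpoint can be fetched directly, for example with `wget`; the evaluation commands below assume it is placed in the working directory (the path passed to `--resume`):
```bash
# Download the pre-trained WeatherDiff64 checkpoint into the current directory,
# where the --resume argument in the commands below expects to find it.
wget https://igi-web.tugraz.at/download/OzdenizciLegensteinTPAMI2023/WeatherDiff64.pth.tar
```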
To evaluate WeatherDiff64 with this checkpoint using the current version of the repository, run:
```bash
python eval_diffusion.py --config "allweather.yml" --resume 'WeatherDiff64.pth.tar' --test_set 'raindrop' --sampling_timesteps 25 --grid_r 16
python eval_diffusion.py --config "allweather.yml" --resume 'WeatherDiff64.pth.tar' --test_set 'rainfog' --sampling_timesteps 25 --grid_r 16
python eval_diffusion.py --config "allweather.yml" --resume 'WeatherDiff64.pth.tar' --test_set 'snow' --sampling_timesteps 25 --grid_r 16
```

A smaller value for `grid_r` yields slightly better results and higher image quality, at the cost of longer inference time since more overlapping patches are processed:
```bash
python eval_diffusion.py --config "allweather.yml" --resume 'WeatherDiff64.pth.tar' --test_set 'raindrop' --sampling_timesteps 25 --grid_r 4
python eval_diffusion.py --config "allweather.yml" --resume 'WeatherDiff64.pth.tar' --test_set 'rainfog' --sampling_timesteps 25 --grid_r 4
python eval_diffusion.py --config "allweather.yml" --resume 'WeatherDiff64.pth.tar' --test_set 'snow' --sampling_timesteps 25 --grid_r 4
```
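
If desired, all three test sets can also be swept in a single shell loop; this is only a convenience sketch assembled from the flags shown above:
```bash
# Evaluate WeatherDiff64 on all three test sets with the higher-quality setting.
for ts in raindrop rainfog snow; do
    python eval_diffusion.py --config "allweather.yml" --resume 'WeatherDiff64.pth.tar' \
        --test_set "$ts" --sampling_timesteps 25 --grid_r 4
done
```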

We also share our pre-trained diffusive multi-weather restoration model [WeatherDiff128](https://igi-web.tugraz.at/download/OzdenizciLegensteinTPAMI2023/WeatherDiff128.pth.tar) with the network configuration in `configs/allweather128.yml`.
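
Assuming WeatherDiff128 shares the same evaluation interface, it can be evaluated analogously; only the config file and checkpoint name change, e.g.:
```bash
# Example: evaluate the WeatherDiff128 checkpoint with its own configuration.
python eval_diffusion.py --config "allweather128.yml" --resume 'WeatherDiff128.pth.tar' --test_set 'snow' --sampling_timesteps 25 --grid_r 16
```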

Check out below for some visualizations of our patch-based diffusive image restoration approach.

## Image Desnowing


*(Animations showing the input condition, the restoration process, and the restored output for two snowy scenes.)*


## Image Deraining \& Dehazing


*(Animations showing the input condition, the restoration process, and the restored output for two rainy and hazy scenes.)*

## Raindrop Removal


*(Animations showing the input condition, the restoration process, and the restored output for two scenes degraded by raindrops.)*

## Reference
If you use this code or these models in your research and find them helpful, please cite the following paper:
```
@article{ozdenizci2023,
title={Restoring vision in adverse weather conditions with patch-based denoising diffusion models},
author={Ozan \"{O}zdenizci and Robert Legenstein},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
pages={1-12},
year={2023},
doi={10.1109/TPAMI.2023.3238179}
}
```

## Acknowledgments

The authors of this work are affiliated with Graz University of Technology, Institute of Theoretical Computer Science, and Silicon Austria Labs, TU Graz - SAL Dependable Embedded Systems Lab, Graz, Austria. This work has been supported by the "University SAL Labs" initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research for electronic based systems.

Parts of this code repository are based on the following works:

* https://github.com/ermongroup/ddim
* https://github.com/bahjat-kawar/ddrm
* https://github.com/JingyunLiang/SwinIR