https://github.com/Zj-BinXia/DiffIR
This project is the official implementation of 'DiffIR: Efficient Diffusion Model for Image Restoration', ICCV 2023
- Host: GitHub
- URL: https://github.com/Zj-BinXia/DiffIR
- Owner: Zj-BinXia
- Created: 2023-07-20T09:19:57.000Z (almost 2 years ago)
- Default Branch: master
- Last Pushed: 2024-04-01T11:24:47.000Z (about 1 year ago)
- Last Synced: 2024-04-01T12:36:49.730Z (about 1 year ago)
- Topics: deblurring, diffusion-model, inpainting, super-resolution
- Language: Jupyter Notebook
- Homepage:
- Size: 14.2 MB
- Stars: 349
- Watchers: 5
- Forks: 15
- Open Issues: 10
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-diffusion-categorized
README
# DiffIR: Efficient diffusion model for image restoration (ICCV2023)
[Paper](https://arxiv.org/pdf/2303.09472.pdf) | [Project Page](https://github.com/Zj-BinXia/DiffIR) | [pretrained models](https://drive.google.com/drive/folders/10miVILiopE414GyaSZM3EFAZITeY9q0p?usp=sharing)
#### News
- **Dec 19, 2023:** We propose reference-based DiffIR (DiffRIR) to alleviate texture, brightness, and contrast disparities between generated and preserved regions during image editing, such as inpainting and outpainting. All training and inference code and pre-trained models (x1, x2, x4) are released on [GitHub](https://github.com/Zj-BinXia/DiffRIR).
- **Sep 10, 2023:** For real-world SR, we release x1 and x2 pre-trained models.
- **Sep 6, 2023:** For real-world SR and SRGAN, we support testing [LR images without GT images](DiffIR-RealSR/options/test_DiffIRS2_GAN_x4.yml) and provide an [inference script](DiffIR-RealSR/inference_diffir.py).
- **August 31, 2023:** For real-world SR and SRGAN tasks, we updated the 2x SR training files.
- **August 28, 2023:** For real-world SR tasks, we released the pretrained models [RealworldSR-DiffIRS2-GANV2](https://drive.google.com/drive/folders/1H4DU-9fB15fSz-OFko00HlWYbNSqmAKq?usp=sharing) and [training files](DiffIR-RealSR/options/train_DiffIRS2_GAN_x4_V2.yml), which focus more on perception than on distortion and can be used to super-resolve AIGC-generated images.
- **July 20, 2023:** Training & testing code and pre-trained models are released!
> **Abstract:** *The diffusion model (DM) has achieved SOTA performance by modeling image synthesis as a sequential application of a denoising network. However, unlike image synthesis, image restoration (IR) is strongly constrained to generate results consistent with the ground truth. Thus, for IR, it is inefficient for traditional DMs to run massive iterations on a large model to estimate whole images or feature maps. To address this issue, we propose an efficient DM for IR (DiffIR), which consists of a compact IR prior extraction network (CPEN), a dynamic IR transformer (DIRformer), and a denoising network. Specifically, DiffIR has two training stages: pretraining and training the DM. In pretraining, we input ground-truth images into CPEN$_{S1}$ to capture a compact IR prior representation (IPR) that guides the DIRformer. In the second stage, we train the DM to directly estimate the same IPR as the pretrained CPEN$_{S1}$ using only LQ images. We observe that since the IPR is only a compact vector, DiffIR can use fewer iterations than traditional DMs to obtain accurate estimations and generate more stable and realistic results. Since few iterations are needed, our DiffIR can adopt a joint optimization of CPEN$_{S2}$, the DIRformer, and the denoising network, which further reduces the influence of estimation error. We conduct extensive experiments on several IR tasks and achieve SOTA performance while consuming less computational cost.*
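
To make the two-stage pipeline described in the abstract more concrete, here is a minimal PyTorch-style sketch. The module definitions, IPR dimensionality, iteration count, and loss terms below are illustrative assumptions chosen for exposition, not the code or hyperparameters used in this repository.

```python
# Minimal sketch of DiffIR's two training stages (illustrative placeholders only).
import torch
import torch.nn as nn
import torch.nn.functional as F

IPR_DIM = 256  # assumed size of the compact IR prior representation (IPR)


class CPEN(nn.Module):
    """Compact IR prior extraction network: image -> IPR vector (placeholder)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, IPR_DIM))

    def forward(self, x):
        return self.body(x)


class DIRformer(nn.Module):
    """Stand-in for the dynamic IR transformer, conditioned on the IPR."""
    def __init__(self):
        super().__init__()
        self.to_scale = nn.Linear(IPR_DIM, 64)
        self.head = nn.Conv2d(3, 64, 3, padding=1)
        self.tail = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, lq, ipr):
        feat = self.head(lq)
        scale = self.to_scale(ipr)[:, :, None, None]  # IPR modulates the features
        return self.tail(feat * (1 + scale)) + lq


class Denoiser(nn.Module):
    """Small denoising network acting on the compact IPR vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * IPR_DIM, IPR_DIM), nn.LeakyReLU(0.2),
                                 nn.Linear(IPR_DIM, IPR_DIM))

    def forward(self, noisy_ipr, cond):
        return self.net(torch.cat([noisy_ipr, cond], dim=1))


cpen_s1, cpen_s2, dirformer, denoiser = CPEN(), CPEN(), DIRformer(), Denoiser()

# Stage 1 (pretraining): CPEN_S1 sees the ground truth and produces an IPR
# that guides DIRformer to restore the low-quality (LQ) input.
def stage1_loss(lq, gt):
    ipr = cpen_s1(gt)
    return F.l1_loss(dirformer(lq, ipr), gt)

# Stage 2: the DM (CPEN_S2 + denoiser) estimates the same IPR from the LQ
# image alone. Because the IPR is a short vector, only a few iterations are
# needed, so CPEN_S2, DIRformer, and the denoiser can be optimized jointly.
def stage2_loss(lq, gt, num_iters=4):
    with torch.no_grad():
        ipr_target = cpen_s1(gt)          # frozen stage-1 prior as the target
    cond = cpen_s2(lq)
    ipr_est = torch.randn_like(ipr_target)
    for _ in range(num_iters):            # a handful of denoising steps
        ipr_est = denoiser(ipr_est, cond)
    return F.l1_loss(ipr_est, ipr_target) + F.l1_loss(dirformer(lq, ipr_est), gt)
```

The point the sketch tries to capture is that the diffusion process runs on a short IPR vector rather than on full-resolution images or feature maps, which is why only a few iterations are needed and joint optimization becomes practical.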
---
## Installation
Each task directory provides a pip.sh script that installs the dependencies required to run DiffIR:
- Inpainting: [pip.sh](DiffIR-inpainting/pip.sh)
- GAN-based single-image super-resolution: [pip.sh](DiffIR-SRGAN/pip.sh)
- Real-world super-resolution: [pip.sh](DiffIR-RealSR/pip.sh)
- Motion deblurring: [pip.sh](DiffIR-demotionblur/pip.sh)
## Training and Evaluation
Training and testing instructions for inpainting, GAN-based single-image super-resolution, real-world super-resolution, and motion deblurring are provided in their respective directories. The table below summarizes the relevant links:
| Task | Training Instructions | Testing Instructions | DiffIR's Pretrained Models |
| --- | --- | --- | --- |
| Inpainting | Link | Link | Download |
| GAN-based single-image super-resolution | Link | Link | Download |
| Real-world super-resolution | Link | Link | Download |
| Motion deblurring | Link | Link | Download |
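
As a rough illustration of what single-image inference with a restored model can look like, here is a generic, hypothetical PyTorch snippet. The model class, checkpoint layout, and file paths are assumptions; the actual entry points and arguments are defined by the scripts and YAML option files in each task directory (for example, DiffIR-RealSR/inference_diffir.py and DiffIR-RealSR/options/test_DiffIRS2_GAN_x4.yml for real-world SR).

```python
# Hypothetical single-image inference sketch (not the repository's actual
# script or CLI). Checkpoint layout, model class, and paths are assumptions;
# refer to the per-task scripts and YAML option files for the real interface.
import numpy as np
import torch
from PIL import Image


def load_image(path):
    """Read an image as a (1, 3, H, W) float tensor in [0, 1]."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)


def save_image(tensor, path):
    """Write a (1, 3, H, W) tensor in [0, 1] back to an image file."""
    img = (tensor.squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)
    Image.fromarray(img).save(path)


@torch.no_grad()
def restore(model, lq_path, out_path, device="cuda"):
    model = model.to(device).eval()
    lq = load_image(lq_path).to(device)
    sr = model(lq)                 # assumed: the restoration model maps LQ -> restored image
    save_image(sr.cpu(), out_path)


# Usage (hypothetical):
#   model = build_diffir_model(...)                             # defined by the task's option file
#   model.load_state_dict(torch.load("diffir.pth")["params"])   # key layout is an assumption
#   restore(model, "input_lq.png", "restored.png")
```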
## Results
Experiments are performed for different image processing tasks, including inpainting, GAN-based single-image super-resolution, real-world super-resolution, and motion deblurring. Visual result comparisons are provided for each task:
- Inpainting (click to expand)
- GAN-based single-image super-resolution (click to expand)
- Real-world super-resolution (click to expand)
- Motion deblurring (click to expand)
## Citation
If you use DiffIR, please consider citing:

    @article{xia2023diffir,
      title={Diffir: Efficient diffusion model for image restoration},
      author={Xia, Bin and Zhang, Yulun and Wang, Shiyin and Wang, Yitong and Wu, Xinglong and Tian, Yapeng and Yang, Wenming and Van Gool, Luc},
      journal={ICCV},
      year={2023}
    }

## Contact
Should you have any questions, please contact [email protected]