# Using Powerful Prior Knowledge of Diffusion Model in Deep Unfolding Networks for Image Compressive Sensing

—— CVPR 2025 ——

[![arXiv](https://img.shields.io/badge/arXiv-2503.08429-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2503.08429)
[![cvpr](https://img.shields.io/badge/CVPR-HomePage-blue)](https://cvpr.thecvf.com/virtual/2025/poster/34025)
[![modelscope](https://img.shields.io/badge/ModelScope-Model-yellow)](https://modelscope.cn/models/FengodChen/DMP-DUN/files)
[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FFengodChen%2FDMP-DUN-CVPR2025&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FFengodChen%2FDMP-DUN-CVPR2025&count_bg=%2379C83D&title_bg=%23555555&icon=&icon_color=%23E7E7E7&title=hits&edge_flat=false)
[![GitHub repo stars](https://img.shields.io/github/stars/FengodChen/DMP-DUN-CVPR2025?style=flat&logo=github&logoColor=whitesmoke&label=Stars)](https://github.com/FengodChen/DMP-DUN-CVPR2025/stargazers)

**Chen Liao** 1   **Yan Shen** 1, ✉   **Dan Li** 1   **Zhongli Wang** 2

1 School of Electronic and Information Engineering, Beijing Jiaotong University, China

2 School of Automation and Intelligence, Beijing Jiaotong University, China

✉️ { [liaochen](mailto:liaochen@bjtu.edu.cn) | [sheny](mailto:sheny@bjtu.edu.cn) | [lidan102628](mailto:lidan102628@bjtu.edu.cn) | [zlwang](mailto:zlwang@bjtu.edu.cn) } @bjtu.edu.cn

## Abstract

Recently, Deep Unfolding Networks (DUNs) have achieved impressive reconstruction quality in the field of image Compressive Sensing (CS) by unfolding iterative optimization algorithms into neural networks. The reconstruction quality of DUNs depends on the learned prior knowledge, so introducing stronger prior knowledge can further improve reconstruction quality. On the other hand, pretrained diffusion models contain powerful prior knowledge and have a solid theoretical foundation and strong scalability, but they require a large number of iterative steps to achieve reconstruction. In this paper, we propose to use the powerful prior knowledge of a pretrained diffusion model in DUNs to achieve high-quality reconstruction with fewer steps for image CS. Specifically, we first design an iterative optimization algorithm named Diffusion Message Passing (DMP), which embeds a pretrained diffusion model into each iteration of DMP. Then, we deeply unfold the DMP algorithm into a neural network named DMP-DUN. The proposed DMP-DUN can use lightweight neural networks to achieve the mapping from measurement data to the intermediate steps of the reverse diffusion process and directly approximate the divergence of the diffusion model, thereby further improving reconstruction efficiency. Extensive experiments show that our proposed DMP-DUN achieves state-of-the-art performance and requires as few as 2 steps to reconstruct the image.
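
To make the unfolding idea concrete, the sketch below shows a generic deep-unfolded CS network in PyTorch, where each stage performs a data-fidelity gradient step followed by a learned prior module. This is only an illustrative toy model with assumed names (`PriorBlock`, `UnfoldedNet`); it is not the DMP-DUN architecture, whose prior modules wrap a pretrained guided-diffusion model as described in the paper.

```python
# Toy sketch of a deep unfolding network for image CS (illustration only, NOT DMP-DUN).
# Each unfolded stage runs a gradient step on the data-fidelity term ||y - Phi x||^2
# followed by a small learned prior module standing in for the diffusion prior.
import torch
import torch.nn as nn


class PriorBlock(nn.Module):
    """Tiny stand-in for the pretrained diffusion prior used by the real method."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement


class UnfoldedNet(nn.Module):
    """Unfolds `num_stages` iterations of (gradient step + learned prior)."""

    def __init__(self, phi: torch.Tensor, num_stages: int = 2):
        super().__init__()
        self.register_buffer("phi", phi)                          # (m, n) sensing matrix
        self.rho = nn.Parameter(torch.full((num_stages,), 0.5))   # learnable step sizes
        self.priors = nn.ModuleList([PriorBlock() for _ in range(num_stages)])

    def forward(self, y, img_hw):
        h, w = img_hw
        x = (self.phi.t() @ y).t().view(-1, 1, h, w)              # initialization Phi^T y
        for rho, prior in zip(self.rho, self.priors):
            flat = x.view(x.shape[0], -1).t()                     # (n, batch)
            grad = self.phi.t() @ (self.phi @ flat - y)           # data-fidelity gradient
            x = x - rho * grad.t().view_as(x)                     # gradient descent step
            x = prior(x)                                          # learned prior refinement
        return x


if __name__ == "__main__":
    h = w = 32
    n, m = h * w, int(0.25 * h * w)                               # CS ratio 0.25
    phi = torch.randn(m, n) / n ** 0.5
    x_true = torch.rand(4, 1, h, w)
    y = phi @ x_true.view(4, -1).t()                              # simulated measurements
    print(UnfoldedNet(phi, num_stages=2)(y, (h, w)).shape)        # torch.Size([4, 1, 32, 32])
```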

## Network Architecture

![](./assets/DMP_DUN.png)

## Results

![](./assets/table.png)

## Environment

We use Ubuntu 24.04 and an NVIDIA RTX 3090, with Python 3.10.10.

You can install the required pip packages with the following command:

```bash
python -m pip install -r requirements.txt
```

We highly recommend running in a new virtual Python environment.

## Testing DMP-DUN

### Download Models and Testsets (Easy Way)

We have written an automatic download script to facilitate downloading. You can download a specific testset or model with the following commands:
```bash
# Download Testset
python download.py --testset --testset-name <testset_name>
```

```bash
# Download Model
python download.py --model --model-name <model_name> --cs-ratios <cs_ratios>
```

The following commands will download all testsets and models:
```bash
# Download Testset
python download.py --testset --testset-name Set11
python download.py --testset --testset-name Urban100

# Download DMP-DUN model weight
python download.py --model --model-name DMP_DUN_10step --cs-ratios 0.5,0.25,0.1,0.04,0.01
python download.py --model --model-name DMP_DUN_plus_2step --cs-ratios 0.5,0.25,0.1,0.04,0.01
python download.py --model --model-name DMP_DUN_plus_4step --cs-ratios 0.5,0.25,0.1,0.04,0.01
```

### Download Models and Testsets (Manual Way)

You can also download our models and testsets manually. Our official models and testsets can be downloaded from [ModelScope](https://www.modelscope.cn/models/FengodChen/DMP-DUN/files) or [Baidu Netdisk](https://pan.baidu.com/s/1k7UJhswfXrmjFDWT81P1cg?pwd=8hjr). The full folder tree of our used datasets is as follows:
```text
.
├── datasets
│   ├── Set11
│   │   └── 1
│   │       ├── barbara.tif
│   │       ├── ...
│   │       └── peppers256.tif
│   └── Urban100
│       └── image_SRF_4
│           ├── img_001_SRF_4_HR.png
│           ├── ...
│           └── img_100_SRF_4_HR.png
└── save
    ├── DMP_DUN_10step
    │   ├── C1R0.1
    │   │   ├── log
    │   │   │   ├── eval_log_2024-09-06_13-10-54_epoch-10.csv
    │   │   │   ├── eval_log_2024-09-08_04-27-14_epoch-20.csv
    │   │   │   ├── kernel_20240906131054.pkl
    │   │   │   ├── kernel_20240908042714.pkl
    │   │   │   ├── net_2024-09-08_04-27-14_epoch-20.pt
    │   │   │   ├── train_log_2024-09-06_13-10-54_epoch-10.csv
    │   │   │   └── train_log_2024-09-08_04-27-14_epoch-20.csv
    │   │   └── plot
    │   │       ├── train.png
    │   │       └── val.png
    │   ├── C1R0.25
    │   │   └── ...
    │   ├── C1R0.01
    │   │   └── ...
    │   ├── C1R0.04
    │   │   └── ...
    │   └── C1R0.5
    │       └── ...
    ├── DMP_DUN_plus_2step
    │   └── ...
    └── DMP_DUN_plus_4step
        └── ...
```
You can also use your own testset for testing, which can be placed in ```./datasets/<testset_name>/1/```. For example, if you want to test the General100 dataset, you can place it as follows:
```text
.
└── datasets
    └── General100
        └── 1
            ├── im_1.bmp
            ├── ...
            └── im_100.bmp
```

### Test

You can run the following commands to test:
```bash
python main.py --test --model-name <model_name> --cs-ratios <cs_ratios> --testset-names <testset_names> --gpu-id <gpu_id>
```

where `<testset_name>` should be the same as the corresponding folder name under `./datasets/`. For example:
```bash
python main.py --test --model-name DMP_DUN_10step --cs-ratios 0.5,0.25,0.1,0.04,0.01 --testset-names Set11,Urban100 --gpu-id 0
python main.py --test --model-name DMP_DUN_plus_2step --cs-ratios 0.5,0.25,0.1,0.04,0.01 --testset-names Set11,Urban100 --gpu-id 0
python main.py --test --model-name DMP_DUN_plus_4step --cs-ratios 0.5,0.25,0.1,0.04,0.01 --testset-names Set11,Urban100 --gpu-id 0
```

And the output will be saved in ```test_ans.txt``` in the root path:
```text
>>>>>> DMP_DUN_10step <<<<<<
[CS ratio = 0.5]
[Set11] PSNR(dB)/SSIM = 42.99/0.9857
[Urban100] PSNR(dB)/SSIM = 40.44/0.9827
[CS ratio = 0.25]
[Set11] PSNR(dB)/SSIM = 37.92/0.9668
[Urban100] PSNR(dB)/SSIM = 35.25/0.9538
[CS ratio = 0.1]
[Set11] PSNR(dB)/SSIM = 32.51/0.9161
[Urban100] PSNR(dB)/SSIM = 30.04/0.8857
[CS ratio = 0.04]
[Set11] PSNR(dB)/SSIM = 28.20/0.8340
[Urban100] PSNR(dB)/SSIM = 25.80/0.7727
[CS ratio = 0.01]
[Set11] PSNR(dB)/SSIM = 23.32/0.6305
[Urban100] PSNR(dB)/SSIM = 21.48/0.5671

>>>>>> DMP_DUN_plus_2step <<<<<<
[CS ratio = 0.5]
[Set11] PSNR(dB)/SSIM = 42.06/0.9835
[Urban100] PSNR(dB)/SSIM = 39.46/0.9795
[CS ratio = 0.25]
[Set11] PSNR(dB)/SSIM = 37.58/0.9648
[Urban100] PSNR(dB)/SSIM = 35.07/0.9520
[CS ratio = 0.1]
[Set11] PSNR(dB)/SSIM = 32.63/0.9206
[Urban100] PSNR(dB)/SSIM = 30.41/0.8922
[CS ratio = 0.04]
[Set11] PSNR(dB)/SSIM = 28.25/0.8360
[Urban100] PSNR(dB)/SSIM = 26.25/0.7858
[CS ratio = 0.01]
[Set11] PSNR(dB)/SSIM = 23.18/0.6286
[Urban100] PSNR(dB)/SSIM = 21.66/0.5750

>>>>>> DMP_DUN_plus_4step <<<<<<
[CS ratio = 0.5]
[Set11] PSNR(dB)/SSIM = 42.82/0.9848
[Urban100] PSNR(dB)/SSIM = 40.80/0.9827
[CS ratio = 0.25]
[Set11] PSNR(dB)/SSIM = 38.29/0.9681
[Urban100] PSNR(dB)/SSIM = 36.14/0.9584
[CS ratio = 0.1]
[Set11] PSNR(dB)/SSIM = 33.22/0.9277
[Urban100] PSNR(dB)/SSIM = 31.39/0.9053
[CS ratio = 0.04]
[Set11] PSNR(dB)/SSIM = 28.67/0.8448
[Urban100] PSNR(dB)/SSIM = 26.98/0.8035
[CS ratio = 0.01]
[Set11] PSNR(dB)/SSIM = 23.32/0.6313
[Urban100] PSNR(dB)/SSIM = 21.80/0.5832
```

## Training DMP-DUN
The code automatically creates, loads, and saves the training checkpoint files under `./save/<model_name>/C1R<cs_ratio>/log/`. Therefore, if you would like to train a DMP-DUN from scratch, please remove this folder manually (if it exists).

### Prepare Pretrained Guided Diffusion

We use [guided-diffusion](https://github.com/openai/guided-diffusion) as our base network. Please put the [pretrained diffusion weight](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt) into ```./pretrained_guided_diffusion/``` as follows:
```text
.
└── pretrained_guided_diffusion
    └── 256x256_diffusion_uncond.pt
```
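
As a quick sanity check that the weights downloaded completely (assuming the path above and that the file is a plain state dict, as guided-diffusion checkpoints usually are), you can load the checkpoint on CPU and inspect its keys. This is only a convenience snippet, not part of the official pipeline:

```python
# Sanity-check the pretrained guided-diffusion checkpoint (path assumed as above).
import torch

state = torch.load("./pretrained_guided_diffusion/256x256_diffusion_uncond.pt",
                   map_location="cpu")
print(f"loaded {len(state)} entries, e.g. {list(state)[:3]}")
```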

### Prepare Training and Validation Datasets

The official DMP-DUN uses Coco2017 as the training dataset and BSDS500 as the validation dataset. You can place them into the folder `./datasets` as follows:
```text
.
└── datasets
    ├── BSDS500
    │   └── val
    │       └── val
    │           ├── 101085.png
    │           ├── ...
    │           └── 97033.png
    └── Coco2017
        └── train2017
            ├── 000000000009.jpg
            ├── ...
            └── 000000581873.jpg
```

If you want to use your own dataset for training, you can also put it into `./datasets` as follows:
```text
.
└── datasets
    └── <dataset_name>
        └── 1
            ├── <image_1>
            ├── ...
            └── <image_N>
```
and you should modify the config file `./configs/<model_name>.py` to declare your dataset path and loading method (approximately between lines 103 and 141 of the config file).
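
For reference, a loading method for such a folder might look like the generic PyTorch `Dataset` below. The class name, example path `./datasets/MyDataset/1`, crop size, and grayscale conversion are all assumptions for illustration and do not reproduce the loader defined in the official config files.

```python
# Generic example of loading images placed under ./datasets/<dataset_name>/1/.
# Illustration only: names and transforms are assumptions, not the official loader.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class FolderImageDataset(Dataset):
    def __init__(self, root="./datasets/MyDataset/1", patch_size=128):
        self.paths = sorted(p for p in Path(root).iterdir() if p.is_file())
        self.transform = transforms.Compose([
            transforms.Grayscale(),                              # single-channel input
            transforms.RandomCrop(patch_size, pad_if_needed=True),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        return self.transform(Image.open(self.paths[idx]).convert("RGB"))
```

Wrapping such a dataset in a `torch.utils.data.DataLoader` and pointing the config's dataset path at your folder is the general pattern; the exact hooks depend on the config file itself.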

### Prepare Training Config
All the official configs are stored in `./configs/<model_name>.py`; you can modify this file to change settings such as batch size, learning rate, and number of training epochs.

### Begin Training

You can use the following commands for training a new DMP-DUN:
```bash
python main.py --train --model-name <model_name> --cs-ratios <cs_ratios> --gpu-id <gpu_id>
```
where `<model_name>` is the same as the name of the config file. For example, if there exists a config file `./configs/CLANNAD.py`, the `<model_name>` should be `CLANNAD`, i.e. `python main.py --train --model-name CLANNAD ...`.

For training the official DMP-DUN, you can run the following commands:
```bash
python main.py --train --model-name DMP_DUN_10step --cs-ratios 0.5,0.25,0.1,0.04,0.01 --gpu-id 0
python main.py --train --model-name DMP_DUN_plus_2step --cs-ratios 0.5,0.25,0.1,0.04,0.01 --gpu-id 0
python main.py --train --model-name DMP_DUN_plus_4step --cs-ratios 0.5,0.25,0.1,0.04,0.01 --gpu-id 0
```

## Citation

```bibtex
@inproceedings{liao2025:DmpDun,
  author    = {Chen Liao and Yan Shen and Dan Li and Zhongli Wang},
  title     = {Using Powerful Prior Knowledge of Diffusion Model in Deep Unfolding Networks for Image Compressive Sensing},
  booktitle = {2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2025},
}
```