# Non-local Modeling for Image Quality Assessment


## Installation
Framework: PyTorch, OpenCV, PIL, scikit-image, scikit-learn, Numba JIT, Matplotlib, etc.

**Note**: The overall framework is based on **PyTorch**. A `pip install -r requirements.txt` is not provided because the dependency list is long; instead, please install each package as it is required to run the code.
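If you'd rather install the core dependencies up front, a minimal sketch (assuming the usual PyPI package names; versions left unpinned) is:

```bash
pip install torch torchvision opencv-python pillow scikit-image scikit-learn numba matplotlib scipy
```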

## Experiments Settings and Quick Start
### Intra-Database Experiments
Experiments Settings: 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/lib/make_index.py#L8)

✔︎ Split the reference images into 60% training, 20% validation, and 20% testing.

✔︎ 10 random splits of the reference indices, generated with `random.seed(random_seed)` where the seed (`args.exp_id`) runs from 1 to 10.

✔︎ The median SRCC and PLCC on the testing set are reported.
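The split logic lives in [lib/make_index.py](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/lib/make_index.py); a minimal sketch of the idea (function and variable names here are illustrative, not the repo's):

```python
import random

def split_reference_indices(num_refs, exp_id):
    # Seed with the experiment ID (1..10) so each split is reproducible.
    indices = list(range(num_refs))
    random.seed(exp_id)
    random.shuffle(indices)

    n_train = int(0.6 * num_refs)          # 60% training
    n_val = int(0.2 * num_refs)            # 20% validation
    train = indices[:n_train]
    val = indices[n_train:n_train + n_val]
    test = indices[n_train + n_val:]       # remaining ~20% testing
    return train, val, test

# e.g., 25 reference images (TID2013), split no. 1
train_idx, val_idx, test_idx = split_reference_indices(25, exp_id=1)
```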

Quick Start:

```bash
python main.py --database_path '/home/jsy/BIQA/' --database TID2013 --batch_size 4 --num_workers 8 --gpu 0
```
(1) Other hyper-parameters can also be modified via `--parameter XXX`, _e.g._, `--epochs 200` and `--lr 1e-5`.

(2) Hyper-parameters can be found in the `parser` in [main.py](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/main.py#L73).

(3) Please change the database path `'/home/jsy/BIQA/'` to your own path.

Experimental Results

### Cross-Database Evaluations
Experiments Settings: 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/Cross%20Database%20Evaluations/data_process/get_data.py#L50)

✔︎ One database is used as the training set, and the other databases are used as the testing sets.

✔︎ The performance of the model at the last epoch (epoch 100 in this work) is reported.

Quick Start (Folder: Cross Database Evaluations):

```bash
python cross_main.py --database_path '/home/jsy/BIQA/' --train_database TID2013 --test_database CSIQ --num_workers 8 --gpu 0
```

Experimental Results

### Single Distortion Type Evaluation
Quick Start (Folder: Individual Distortion Evaluation):
```bash
python TID2013-Single-Distortion.py
```
(1) Please change the trained model path and the database path to your own paths.

(2) The Index of Distortion Type can be found from original papers: [TID2013](https://www.sciencedirect.com/science/article/pii/S0923596514001490) and [KADID](http://database.mmsp-kn.de/kadid-10k-database.html#:~:text=blurs).

Experimental Results (reported separately for the LIVE, CSIQ, TID2013, and KADID-10k Databases)

### Real World Image Testing
Quick Start:
```bash
python real_testing.py --model_file 'save_model/TID2013-32-4-1.pth' --im_path 'test_images/cr7.jpg' --database TID2013
```
Please comment out [these lines](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/real_testing.py#L45) if you don't want to resize the original image.

## Superpixel Segmentation Demo
Quick Start (Folder: Superpixel Segmentation):
```bash
python superpixel.py
```

Superpixel vs. Square Patch Representation Demo
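The repo's own implementation is in `superpixel/slic.py`; purely for illustration, a minimal superpixel demo using scikit-image's `slic` (an assumption; the repo's code may differ):

```python
import matplotlib.pyplot as plt
from skimage import data
from skimage.segmentation import slic, mark_boundaries

# Segment a sample image into ~200 SLIC superpixels, which hug object
# boundaries, unlike a rigid grid of square patches.
image = data.astronaut()
segments = slic(image, n_segments=200, compactness=10, start_label=1)

plt.imshow(mark_boundaries(image, segments))
plt.axis('off')
plt.show()
```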

## Trained Models and Benchmark Databases
All trained models and benchmark databases are available on 🤗 [Hugging Face](https://huggingface.co/shuyuej/NLNet/tree/main).

✔︎ Trained Models (Intra-Database Experiments): Download [here](https://drive.google.com/drive/folders/1K-24RGXyvSUZfnTThQ0CXUf4BgJA_pn7?usp=sharing)

✔︎ Trained Models (Cross-Database Evaluations): Download [here](https://drive.google.com/drive/folders/1-9XfTt4ne057Ureecf_eLXiMQ_4xucgJ?usp=sharing)

✔︎ LIVE, CSIQ, TID2013, and KADID-10k Databases: Download [here](https://drive.google.com/drive/folders/1gfBlByg1bpBXQOFZb6LyCttaX4eAf_Eh?usp=sharing)

Databases Summary

## Evaluation Metrics
(1) Pearson Linear Correlation Coefficient (**PLCC**): measures the prediction accuracy

(2) Spearman Rank-order Correlation Coefficient (**SRCC**): measures the prediction monotonicity

✔︎ A short note on the IQA evaluation metrics can be downloaded [here](https://shuyuej.com/files/MMSP/IQA_Evaluation_Metrics.pdf).

✔︎ In the [code](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/lib/utils.py#L29) (`evaluation_criteria` function), PLCC, SRCC, Kendall Rank-order Correlation Coefficient (KRCC), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Outlier Ratio (OR) are all calculated. In this work, I only compare PLCC and SRCC among different IQA algorithms.
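For reference, a minimal sketch of the two reported metrics using SciPy (the repo's `evaluation_criteria` computes more, and may differ in details):

```python
import numpy as np
from scipy import stats

# Toy predicted quality scores vs. ground-truth MOS values.
pred = np.array([3.1, 2.4, 4.8, 1.9, 3.7])
mos = np.array([3.0, 2.5, 4.9, 2.1, 3.5])

plcc, _ = stats.pearsonr(pred, mos)   # prediction accuracy
srcc, _ = stats.spearmanr(pred, mos)  # prediction monotonicity
print(f"PLCC = {plcc:.4f}, SRCC = {srcc:.4f}")
```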

## Motivation
**Local Content**: The HVS is adaptive to local content.

**Long-range Dependency and Relational Modeling**: The HVS perceives image quality with long-range dependencies constructed among different regions.

## Local Modeling and Non-local Modeling
**Local Modeling**: Local modeling methods encode spatially proximate neighborhoods.

**Non-local Modeling**: Non-local modeling establishes spatial integration of information through long- and short-range communication, using different spatial weighting functions.
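To make the contrast concrete, here is a minimal sketch of a generic non-local block in PyTorch (in the spirit of non-local neural networks; an illustration, not NLNet's GNN-based implementation), where every spatial position is re-weighted by its affinity to every other position:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Minimal non-local block: each position attends to all others."""
    def __init__(self, channels):
        super().__init__()
        inner = channels // 2
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)
        self.g = nn.Conv2d(channels, inner, kernel_size=1)
        self.out = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, hw, c/2)
        k = self.phi(x).flatten(2)                    # (b, c/2, hw)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, hw, c/2)
        attn = F.softmax(q @ k, dim=-1)               # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection

x = torch.randn(1, 64, 32, 32)
print(NonLocalBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```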

Non-local Behavior Demo

Local Modeling vs. Non-local Modeling Demo

## Global Distortions and Local Distortions
**Global Distortions**: distortions distributed globally and uniformly, with non-local recurrences over the image.

**Local Distortions**: nonuniformly distributed distortions confined to a local region.
✔︎ LIVE Database:

Global Distortions: JPEG, JP2K, WN, and GB

Local Distortions: FF


✔︎ CSIQ Database:

Global Distortions: JPEG, JP2K, WN, GB, PN, and CC

Local Distortions: There is no local distortion in CSIQ Database.


✔︎ TID2013 Database:

Global Distortions: Additive Gaussian noise, Lossy compression of noisy images, Additive noise in color components, Comfort noise, Contrast change, Change of color saturation, Spatially correlated noise, High frequency noise, Impulse noise, Quantization noise, Gaussian blur, Image denoising, JPEG compression, JPEG 2000 compression, Multiplicative Gaussian noise, Image color quantization with dither, Sparse sampling and reconstruction, Chromatic aberrations, Masked noise, and Mean shift (intensity shift)

Local Distortions: JPEG transmission errors, JPEG 2000 transmission errors, Non-eccentricity pattern noise, and Local block-wise distortions of different intensity


✔︎ KADID-10k Database:

Global Distortions: blurs (lens blur, motion blur, and GB), color distortions (color diffusion, color shift, color saturation 1, color saturation 2, and color quantization), compression (JPEG and JP2K), noise (impulse noise, denoise, WN, white noise in color component, and multiplicative noise), brightness change (brighten, darken, and mean shift), spatial distortions (jitter, pixelate, and quantization), and sharpness and contrast (high sharpen and contrast change)

Local Distortions: Color block and Non-eccentricity patch


## Paper and Presentations
(1) **Thesis** can be downloaded [here](https://scholars.cityu.edu.hk/en/theses/noreference-image-quality-assessment-via-nonlocal-modeling(2d1e72fb-2405-43df-aac9-4838b6da1875).html).

(2) **Original Paper** can be downloaded [here](https://shuyuej.com/files/MMSP/MMSP22_Paper.pdf).

(3) **Detailed Slides Presentation** can be downloaded [here](https://shuyuej.com/files/Presentation/A_Summary_Three_Projects.pdf).

(4) **Detailed Slides Presentation with Animations** can be downloaded [here](https://shuyuej.com/files/Presentation/A_Summary_Three_Projects_Animations.pdf).

(5) **Simple Slides Presentation** can be downloaded [here](https://shuyuej.com/files/MMSP/MMSP22_Slides.pdf).

(6) **Poster Presentation** can be downloaded [here](https://shuyuej.com/files/MMSP/MMSP22_Poster.pdf).

### Model Overview

(i) **Image Preprocessing**: The input image is pre-processed. 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/lib/image_process.py#L17).

(ii) **Graph Neural Network – Non-Local Modeling Method**: A two-stage GNN approach is presented for the non-local feature extraction and long-range dependency construction among different regions. The first stage aggregates local features inside superpixels. The following stage learns the non-local features and long-range dependencies among the graph nodes. It then integrates short- and long-range information based on an attention mechanism. The means and standard deviations of the non-local features are obtained from the graph feature signals. 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/model/network.py#L62).

(iii) **Pre-trained VGGNet-16 – Local Modeling Method**: Local feature means and standard deviations are derived from the pre-trained VGGNet-16, considering the hierarchical degradation process of the HVS. 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/model/network.py#L37).

(iv) **Feature Mean & Std Fusion and Quality Prediction**: The means and standard deviations of the local and non-local features are fused to deliver a robust and comprehensive representation for quality assessment. 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/model/network.py). Besides, the distortion type identification loss $L_t$, quality prediction loss $L_q$, and quality ranking loss $L_r$ are utilized for training the NLNet. 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/model/solver.py#L171). During inference, the final quality of the image is the average quality over all the non-overlapping patches. 👉 Check [this file](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/lib/image_process.py#L17).
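As a sketch of how the three losses might be combined during training (weights, names, and the ranking formulation here are hypothetical; see [model/solver.py](https://github.com/SuperBruceJia/NLNet-IQA/blob/main/model/solver.py#L171) for the actual implementation):

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()           # quality prediction loss L_q
ce = nn.CrossEntropyLoss()   # distortion type identification loss L_t

def ranking_loss(pred, mos, margin=0.0):
    # Pairwise ranking loss L_r: penalize image pairs whose predicted
    # order disagrees with the ground-truth MOS order.
    d_pred = pred.unsqueeze(0) - pred.unsqueeze(1)
    d_mos = mos.unsqueeze(0) - mos.unsqueeze(1)
    return torch.relu(margin - d_pred * torch.sign(d_mos)).mean()

def total_loss(pred_q, mos, pred_type, dist_type, w_t=1.0, w_r=1.0):
    return mse(pred_q, mos) + w_t * ce(pred_type, dist_type) + w_r * ranking_loss(pred_q, mos)
```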


## Structure of the Code
At the root of the project, you will see:
```text
├── main.py
├── model
│   ├── layers.py
│   ├── network.py
│   └── solver.py
├── superpixel
│   └── slic.py
├── lib
│   ├── image_process.py
│   ├── make_index.py
│   └── utils.py
├── data_process
│   ├── get_data.py
│   └── load_data.py
├── benchmark
│   ├── CSIQ_datainfo.m
│   ├── CSIQfullinfo.mat
│   ├── KADID-10K.mat
│   ├── LIVEfullinfo.mat
│   ├── TID2013fullinfo.mat
│   ├── database.py
│   └── datainfo_maker.m
├── save_model
│   └── README.md
├── test_images
│   └── cr7.jpg
└── real_testing.py
```

## Citation
If you find our work useful in your research, please consider citing it in your publications.
We provide BibTeX entries below.

```bibtex
@inproceedings{Jia2022NLNet,
    title     = {No-reference Image Quality Assessment via Non-local Dependency Modeling},
    author    = {Jia, Shuyue and Chen, Baoliang and Li, Dingquan and Wang, Shiqi},
    booktitle = {2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP)},
    month     = {Sept.},
    year      = {2022},
    pages     = {01--06},
    doi       = {10.1109/MMSP55362.2022.9950035}
}

@article{Jia2022NLNetThesis,
    title     = {No-reference Image Quality Assessment via Non-local Modeling},
    author    = {Jia, Shuyue},
    journal   = {CityU Scholars},
    publisher = {City University of Hong Kong},
    month     = {May},
    year      = {2023},
    url       = {https://scholars.cityu.edu.hk/en/theses/noreference-image-quality-assessment-via-nonlocal-modeling(2d1e72fb-2405-43df-aac9-4838b6da1875).html}
}
```

## Contact
If you have any questions, please drop me an email at [email protected].

## Acknowledgement
The authors would like to thank Dr. Xuhao Jiang, Dr. Diqi Chen, and Dr. Jupo Ma for helpful discussions and invaluable inspiration. Special appreciation goes to Dr. Dingquan Li, as this code is built upon his [(Wa)DIQaM-FR/NR](https://github.com/lidq92/WaDIQaM) re-implementation.