https://github.com/X1716/IQA-Adapter
Code for the paper "IQA-Adapter: Exploring Knowledge Transfer from Image Quality Assessment to Diffusion-based Generative Models"
- Host: GitHub
- URL: https://github.com/X1716/IQA-Adapter
- Owner: X1716
- Created: 2024-11-27T12:14:38.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-12-04T12:42:22.000Z (10 months ago)
- Last Synced: 2024-12-04T13:40:09.437Z (10 months ago)
- Topics: adapter, diffusion-models, image-generation, image-quality-assessment
- Homepage:
- Size: 5.63 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project:
- awesome-diffusion-categorized

README
# IQA-Adapter
[arXiv](https://arxiv.org/abs/2412.01794) · [Project page](https://x1716.github.io/IQA-Adapter/)

**🔥 Accepted to ICCV 2025 (Highlight)**
Code for the paper "IQA-Adapter: Exploring Knowledge Transfer from Image Quality Assessment to Diffusion-based Generative Models"
*TLDR*: IQA-Adapter is a tool that combines Image Quality/Aesthetics Assessment (IQA/IAA) models with image generation, enabling quality-aware generation with diffusion-based models. It allows conditioning image generators on target quality/aesthetics scores.
IQA-Adapter builds upon the [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter) architecture.
TODO list:
- [x] Release code for IQA-Adapter inference and training for the SDXL base model
- [x] Release weights for IQA-Adapters trained with different IQA/IAA models
- [x] Create project page
- [ ] Release code for experiments

Demonstration of guidance on quality (y-axis) and aesthetics (x-axis) scores:
Image-to-Image generation with Reference-based IQA-Adapter guided with different distortion references:
Reference-based IQA-Adapter employs information-rich activation space of IQA models to transfer only quality-related features from the reference image *without* leaking its semantics (e.g., objects or color scheme).
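As a loose illustration of this idea (not the repository's actual implementation), intermediate activations of any pretrained network can be captured with PyTorch forward hooks and used as conditioning features; the toy backbone below is a stand-in for a real IQA model:

```python
import torch
import torch.nn as nn

# Toy stand-in for an IQA backbone; the real adapter uses actual IQA models.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

activations = {}

def capture(name):
    # Forward hooks receive (module, inputs, output) and let us
    # record intermediate feature maps without changing the model.
    def fn(module, inputs, output):
        activations[name] = output.detach()
    return fn

# Capture the activation after the second conv block.
backbone[3].register_forward_hook(capture("feat"))

ref_image = torch.randn(1, 3, 32, 32)  # reference image (random here)
score = backbone(ref_image)            # scalar quality score, shape (1, 1)
feat = activations["feat"]             # quality-related features, not pixels
print(feat.shape)  # torch.Size([1, 16, 32, 32])
```

These intermediate features describe the image's quality-related statistics rather than its pixel content, which is the property the reference-based adapter exploits.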
## Run IQA-Adapter
### Prerequisites
First, clone this repository:

```shell
git clone https://github.com/X1716/IQA-Adapter.git
```

Next, create a virtual environment, e.g., with Anaconda:

```shell
conda create --name iqa_adapter python=3.12.2
conda activate iqa_adapter
```

Install a PyTorch build suitable for your CUDA/ROCm/MPS device:

```shell
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121  # for CUDA 12.1
```

Newer Python and PyTorch versions should also work.

Install the remaining requirements for this project:

```shell
pip install -r requirements.txt
```
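Before running anything heavier, a quick check (not part of the repository) confirms that PyTorch imports and sees an accelerator:

```python
# Minimal environment check: confirm PyTorch imports and pick a device.
import torch

if torch.cuda.is_available():
    device = "cuda"   # NVIDIA (ROCm builds also report as cuda)
elif torch.backends.mps.is_available():
    device = "mps"    # Apple Silicon
else:
    device = "cpu"

# A tiny tensor op confirms the runtime works end to end.
x = torch.randn(2, 3, device=device)
print(torch.__version__, device, tuple(x.shape))
```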
### Demo
To test a pretrained IQA-Adapter, check out the [demo_adapter.ipynb](./demo_adapter.ipynb) Jupyter notebook. The weights for the IQA-Adapter can be downloaded from [here](https://drive.google.com/drive/folders/1jVYM96nbk0pUV4HSHiUzWGlTSLg-dv5h?usp=sharing) (Google Drive).
### Training script
`train_iqa_adapter.py` can be used to train or fine-tune an IQA-Adapter. We trained it on a SLURM cluster with `slurm_train_script.sh`. A training job can be dispatched with:

```shell
sbatch slurm_train_script.sh
```
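The settings that typically need editing can be sketched as a hypothetical SBATCH header (all paths and names below are placeholders; the repository's `slurm_train_script.sh` is the source of truth):

```shell
#!/bin/bash
# Hypothetical SLURM header illustrating the fields to adjust per cluster.
#SBATCH --job-name=iqa_adapter_train
#SBATCH --nodes=5                  # setup used by the authors: 5 nodes
#SBATCH --gpus-per-node=8          # 8 GPUs per node
#SBATCH --output=logs/%x_%j.out    # adjust log path for your cluster

# Placeholder paths; point them at your data and checkpoint directories.
export DATA_DIR=/path/to/train/data
export OUT_DIR=/path/to/checkpoints

# Actual arguments depend on the training script and container setup.
srun python train_iqa_adapter.py
```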
Note that this script must be adapted to your particular cluster setup (e.g., paths to input/output directories, the Pyxis container, and other settings must be specified). As provided, it is configured for distributed training with 5 nodes and 8 GPUs per node.

## Citation
If you find this work useful for your research, please cite us as follows:
```bibtex
@misc{iqaadapter,
title={IQA-Adapter: Exploring Knowledge Transfer from Image Quality Assessment to Diffusion-based Generative Models},
author={Khaled Abud and Sergey Lavrushkin and Alexey Kirillov and Dmitriy Vatolin},
year={2024},
eprint={2412.01794},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.01794},
}
```