{"id":13412158,"url":"https://github.com/cszn/KAIR","last_synced_at":"2025-03-14T18:30:32.744Z","repository":{"id":37251367,"uuid":"228241233","full_name":"cszn/KAIR","owner":"cszn","description":"Image Restoration Toolbox (PyTorch). Training and testing codes for DPIR, USRNet, DnCNN, FFDNet, SRMD, DPSR, BSRGAN, SwinIR","archived":false,"fork":false,"pushed_at":"2024-10-02T13:20:42.000Z","size":19865,"stargazers_count":2955,"open_issues_count":63,"forks_count":632,"subscribers_count":47,"default_branch":"master","last_synced_at":"2024-11-13T04:00:15.871Z","etag":null,"topics":["bsrgan","deep-learning","denoising","dncnn","dpsr","esrgan","ffdnet","flops","image-restoration","pytorch","sisr","srmd","super-resolution","swinir","toolbox","usrnet"],"latest_commit_sha":null,"homepage":"https://cszn.github.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cszn.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-12-15T19:42:25.000Z","updated_at":"2024-11-13T02:56:54.000Z","dependencies_parsed_at":"2024-05-03T01:54:11.119Z","dependency_job_id":"dbb75719-7df0-4b8a-812f-5d98e3b3f9ee","html_url":"https://github.com/cszn/KAIR","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cszn%2FKAIR","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cszn%2FKAIR/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cszn%2FKAIR/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cszn%2FKAIR/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cszn","download_url":"https://codeload.github.com/cszn/KAIR/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243358220,"owners_count":20277992,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bsrgan","deep-learning","denoising","dncnn","dpsr","esrgan","ffdnet","flops","image-restoration","pytorch","sisr","srmd","super-resolution","swinir","toolbox","usrnet"],"created_at":"2024-07-30T20:01:21.626Z","updated_at":"2025-03-14T18:30:31.081Z","avatar_url":"https://github.com/cszn.png","language":"Python","readme":"## Training and testing codes for USRNet, DnCNN, FFDNet, SRMD, DPSR, MSRResNet, ESRGAN, BSRGAN, SwinIR, VRT, RVRT\n[![download](https://img.shields.io/github/downloads/cszn/KAIR/total.svg)](https://github.com/cszn/KAIR/releases) ![visitors](https://visitor-badge.glitch.me/badge?page_id=cszn/KAIR) \n\n[Kai Zhang](https://cszn.github.io/)\n\n*[Computer Vision Lab](https://vision.ee.ethz.ch/the-institute.html), ETH Zurich, Switzerland*\n\n_______\n- **_News (2023-06-02)_**: Code for \"[Denoising Diffusion Models for Plug-and-Play Image Restoration](https://github.com/yuanzhi-zhu/DiffPIR)\" is released at [yuanzhi-zhu/DiffPIR](https://github.com/yuanzhi-zhu/DiffPIR).\n\n- **_News (2022-10-04)_**: We release [the training codes](https://github.com/cszn/KAIR/blob/master/docs/README_RVRT.md) of [RVRT, NeurlPS2022 ![GitHub Stars](https://img.shields.io/github/stars/JingyunLiang/RVRT?style=social)](https://github.com/JingyunLiang/RVRT) for video SR, deblurring and denoising.\n\n- **_News (2022-05-05)_**: Try the [online demo](https://replicate.com/cszn/scunet) of [SCUNet ![GitHub Stars](https://img.shields.io/github/stars/cszn/SCUNet?style=social)](https://github.com/cszn/SCUNet) for blind real image denoising.\n\n- **_News (2022-03-23)_**: We release [the testing codes](https://github.com/cszn/SCUNet) of [SCUNet ![GitHub Stars](https://img.shields.io/github/stars/cszn/SCUNet?style=social)](https://github.com/cszn/SCUNet) for blind real image denoising.\n\n__*The following results are obtained by our SCUNet with purely synthetic training data! \nWe did not use the paired noisy/clean data by DND and SIDD during training!*__\n\n   \u003cimg src=\"https://github.com/cszn/cszn.github.io/blob/master/files/input_16.gif\" width=\"360px\"/\u003e \u003cimg src=\"https://github.com/cszn/cszn.github.io/blob/master/files/wm_fnb_0010_16.gif\" width=\"360px\"/\u003e\n\n\n- **_News (2022-02-15)_**: We release [the training codes](https://github.com/cszn/KAIR/blob/master/docs/README_VRT.md) of [VRT ![GitHub Stars](https://img.shields.io/github/stars/JingyunLiang/VRT?style=social)](https://github.com/JingyunLiang/VRT) for video SR, deblurring and denoising.\n![Eg1](https://raw.githubusercontent.com/JingyunLiang/VRT/main/assets/teaser_vsr.gif)\n![Eg2](https://raw.githubusercontent.com/JingyunLiang/VRT/main/assets/teaser_vdb.gif)\n![Eg3](https://raw.githubusercontent.com/JingyunLiang/VRT/main/assets/teaser_vdn.gif)\n![Eg4](https://raw.githubusercontent.com/JingyunLiang/VRT/main/assets/teaser_vfi.gif)\n![Eg5](https://raw.githubusercontent.com/JingyunLiang/VRT/main/assets/teaser_stvsr.gif)\n\n- **_News (2021-12-23)_**: Our techniques are adopted in [https://www.amemori.ai/](https://www.amemori.ai/).\n- **_News (2021-12-23)_**: Our new work for practical image denoising.\n\n- \u003cimg src=\"figs/palace.png\" height=\"320px\"/\u003e \u003cimg src=\"figs/palace_HSCU.png\" height=\"320px\"/\u003e \n- [\u003cimg src=\"https://github.com/cszn/KAIR/raw/master/figs/denoising_02.png\" height=\"256px\"/\u003e](https://imgsli.com/ODczMTc) \n[\u003cimg src=\"https://github.com/cszn/KAIR/raw/master/figs/denoising_01.png\" height=\"256px\"/\u003e](https://imgsli.com/ODczMTY) \n- **_News (2021-09-09)_**: Add [main_download_pretrained_models.py](https://github.com/cszn/KAIR/blob/master/main_download_pretrained_models.py) to download pre-trained models.\n- **_News (2021-09-08)_**: Add [matlab code](https://github.com/cszn/KAIR/tree/master/matlab) to zoom local part of an image for the purpose of comparison between different results.\n- **_News (2021-09-07)_**: We upload [the training code](https://github.com/cszn/KAIR/blob/master/docs/README_SwinIR.md) of [SwinIR ![GitHub Stars](https://img.shields.io/github/stars/JingyunLiang/SwinIR?style=social)](https://github.com/JingyunLiang/SwinIR) and provide an [interactive online Colob demo for real-world image SR](https://colab.research.google.com/gist/JingyunLiang/a5e3e54bc9ef8d7bf594f6fee8208533/swinir-demo-on-real-world-image-sr.ipynb). Try to super-resolve your own images on Colab! \u003ca href=\"https://colab.research.google.com/gist/JingyunLiang/a5e3e54bc9ef8d7bf594f6fee8208533/swinir-demo-on-real-world-image-sr.ipynb\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"google colab logo\"\u003e\u003c/a\u003e\n\n|Real-World Image (x4)|[BSRGAN, ICCV2021](https://github.com/cszn/BSRGAN)|[Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)|SwinIR (ours)|\n|      :---      |     :---:        |        :-----:         |        :-----:         | \n|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/ETH_LR.png\"\u003e|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/ETH_BSRGAN.png\"\u003e|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/ETH_realESRGAN.jpg\"\u003e|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/ETH_SwinIR.png\"\u003e\n|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/OST_009_crop_LR.png\"\u003e|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/OST_009_crop_BSRGAN.png\"\u003e|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/OST_009_crop_realESRGAN.png\"\u003e|\u003cimg width=\"200\" src=\"https://raw.githubusercontent.com/JingyunLiang/SwinIR/main/figs/OST_009_crop_SwinIR.png\"\u003e|\n\n- **_News (2021-08-31)_**: We upload the [training code of BSRGAN](https://github.com/cszn/BSRGAN#training).\n- **_News (2021-08-24)_**: We upload the BSRGAN degradation model.\n- **_News (2021-08-22)_**: Support multi-feature-layer VGG perceptual loss and UNet discriminator. \n- **_News (2021-08-18)_**: We upload the extended BSRGAN degradation model. It is slightly different from our published version. \n\n- **_News (2021-06-03)_**: Add testing codes of [GPEN (CVPR21)](https://github.com/yangxy/GPEN) for face image enhancement: [main_test_face_enhancement.py](https://github.com/cszn/KAIR/blob/master/main_test_face_enhancement.py)\n\n\u003cimg src=\"figs/face_04_comparison.png\" width=\"730px\"/\u003e \n\u003cimg src=\"figs/face_13_comparison.png\" width=\"730px\"/\u003e \n\u003cimg src=\"figs/face_08_comparison.png\" width=\"730px\"/\u003e \n\u003cimg src=\"figs/face_01_comparison.png\" width=\"730px\"/\u003e \n\u003cimg src=\"figs/face_12_comparison.png\" width=\"730px\"/\u003e \n\u003cimg src=\"figs/face_10_comparison.png\" width=\"730px\"/\u003e \n\n\n- **_News (2021-05-13)_**: Add [PatchGAN discriminator](https://github.com/cszn/KAIR/blob/master/models/network_discriminator.py).\n\n- **_News (2021-05-12)_**: Support distributed training, see also [https://github.com/xinntao/BasicSR/blob/master/docs/TrainTest.md](https://github.com/xinntao/BasicSR/blob/master/docs/TrainTest.md).\n\n- **_News (2021-01)_**: [BSRGAN](https://github.com/cszn/BSRGAN) for blind real image super-resolution will be added.\n\n- **_Pull requests are welcome!_**\n\n- **Correction (2020-10)**: If you use multiple GPUs for GAN training, remove or comment [Line 105](https://github.com/cszn/KAIR/blob/e52a6944c6a40ba81b88430ffe38fd6517e0449e/models/model_gan.py#L105) to enable `DataParallel` for fast training\n\n- **News (2020-10)**: Add [utils_receptivefield.py](https://github.com/cszn/KAIR/blob/master/utils/utils_receptivefield.py) to calculate receptive field.\n\n- **News (2020-8)**: A `deep plug-and-play image restoration toolbox` is released at [cszn/DPIR](https://github.com/cszn/DPIR).\n\n- **Tips (2020-8)**: Use [this](https://github.com/cszn/KAIR/blob/9fd17abff001ab82a22070f7e442bb5246d2d844/main_challenge_sr.py#L147) to avoid `out of memory` issue.\n\n- **News (2020-7)**: Add [main_challenge_sr.py](https://github.com/cszn/KAIR/blob/23b0d0f717980e48fad02513ba14045d57264fe1/main_challenge_sr.py#L90) to get `FLOPs`, `#Params`, `Runtime`, `#Activations`, `#Conv`, and `Max Memory Allocated`.\n```python\nfrom utils.utils_modelsummary import get_model_activation, get_model_flops\ninput_dim = (3, 256, 256)  # set the input dimension\nactivations, num_conv2d = get_model_activation(model, input_dim)\nlogger.info('{:\u003e16s} : {:\u003c.4f} [M]'.format('#Activations', activations/10**6))\nlogger.info('{:\u003e16s} : {:\u003cd}'.format('#Conv2d', num_conv2d))\nflops = get_model_flops(model, input_dim, False)\nlogger.info('{:\u003e16s} : {:\u003c.4f} [G]'.format('FLOPs', flops/10**9))\nnum_parameters = sum(map(lambda x: x.numel(), model.parameters()))\nlogger.info('{:\u003e16s} : {:\u003c.4f} [M]'.format('#Params', num_parameters/10**6))\n```\n\n- **News (2020-6)**: Add [USRNet (CVPR 2020)](https://github.com/cszn/USRNet) for training and testing.\n  - [Network Architecture](https://github.com/cszn/KAIR/blob/3357aa0e54b81b1e26ceb1cee990f39add235e17/models/network_usrnet.py#L309)\n  - [Dataset](https://github.com/cszn/KAIR/blob/6c852636d3715bb281637863822a42c72739122a/data/dataset_usrnet.py#L16)\n\n\nClone repo\n----------\n```\ngit clone https://github.com/cszn/KAIR.git\n```\n```\npip install -r requirement.txt\n```\n\n\n\nTraining\n----------\n\nYou should modify the json file from [options](https://github.com/cszn/KAIR/tree/master/options) first, for example,\nsetting [\"gpu_ids\": [0,1,2,3]](https://github.com/cszn/KAIR/blob/ff80d265f64de67dfb3ffa9beff8949773c81a3d/options/train_msrresnet_psnr.json#L4) if 4 GPUs are used,\nsetting [\"dataroot_H\": \"trainsets/trainH\"](https://github.com/cszn/KAIR/blob/ff80d265f64de67dfb3ffa9beff8949773c81a3d/options/train_msrresnet_psnr.json#L24) if path of the high quality dataset is `trainsets/trainH`.\n\n- Training with `DataParallel` - PSNR\n\n\n```python\npython main_train_psnr.py --opt options/train_msrresnet_psnr.json\n```\n\n- Training with `DataParallel` - GAN\n\n```python\npython main_train_gan.py --opt options/train_msrresnet_gan.json\n```\n\n- Training with `DistributedDataParallel` - PSNR - 4 GPUs\n\n```python\npython -m torch.distributed.launch --nproc_per_node=4 --master_port=1234 main_train_psnr.py --opt options/train_msrresnet_psnr.json  --dist True\n```\n\n- Training with `DistributedDataParallel` - PSNR - 8 GPUs\n\n```python\npython -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 main_train_psnr.py --opt options/train_msrresnet_psnr.json  --dist True\n```\n\n- Training with `DistributedDataParallel` - GAN - 4 GPUs\n\n```python\npython -m torch.distributed.launch --nproc_per_node=4 --master_port=1234 main_train_gan.py --opt options/train_msrresnet_gan.json  --dist True\n```\n\n- Training with `DistributedDataParallel` - GAN - 8 GPUs\n\n```python\npython -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 main_train_gan.py --opt options/train_msrresnet_gan.json  --dist True\n```\n\n- Kill distributed training processes of `main_train_gan.py`\n\n```python\nkill $(ps aux | grep main_train_gan.py | grep -v grep | awk '{print $2}')\n```\n\n----------\n| Method | Original Link |\n|---|---|\n| DnCNN |[https://github.com/cszn/DnCNN](https://github.com/cszn/DnCNN)|\n| FDnCNN |[https://github.com/cszn/DnCNN](https://github.com/cszn/DnCNN)|\n| FFDNet | [https://github.com/cszn/FFDNet](https://github.com/cszn/FFDNet)|\n| SRMD | [https://github.com/cszn/SRMD](https://github.com/cszn/SRMD)|\n| DPSR-SRResNet | [https://github.com/cszn/DPSR](https://github.com/cszn/DPSR)|\n| SRResNet | [https://github.com/xinntao/BasicSR](https://github.com/xinntao/BasicSR)|\n| ESRGAN | [https://github.com/xinntao/ESRGAN](https://github.com/xinntao/ESRGAN)|\n| RRDB | [https://github.com/xinntao/ESRGAN](https://github.com/xinntao/ESRGAN)|\n| IMDB | [https://github.com/Zheng222/IMDN](https://github.com/Zheng222/IMDN)|\n| USRNet | [https://github.com/cszn/USRNet](https://github.com/cszn/USRNet)|\n| DRUNet | [https://github.com/cszn/DPIR](https://github.com/cszn/DPIR)|\n| DPIR | [https://github.com/cszn/DPIR](https://github.com/cszn/DPIR)|\n| BSRGAN | [https://github.com/cszn/BSRGAN](https://github.com/cszn/BSRGAN)|\n| SwinIR | [https://github.com/JingyunLiang/SwinIR](https://github.com/JingyunLiang/SwinIR)|\n| VRT | [https://github.com/JingyunLiang/VRT](https://github.com/JingyunLiang/VRT)       |\n| DiffPIR | [https://github.com/yuanzhi-zhu/DiffPIR](https://github.com/yuanzhi-zhu/DiffPIR)|\n\nNetwork architectures\n----------\n* [USRNet](https://github.com/cszn/USRNet)\n\n  \u003cimg src=\"https://github.com/cszn/USRNet/blob/master/figs/architecture.png\" width=\"600px\"/\u003e \n\n* DnCNN\n\n  \u003cimg src=\"https://github.com/cszn/DnCNN/blob/master/figs/dncnn.png\" width=\"600px\"/\u003e \n \n* IRCNN denoiser\n\n \u003cimg src=\"https://github.com/lipengFu/IRCNN/raw/master/Image/image_2.png\" width=\"680px\"/\u003e \n\n* FFDNet\n\n  \u003cimg src=\"https://github.com/cszn/FFDNet/blob/master/figs/ffdnet.png\" width=\"600px\"/\u003e \n\n* SRMD\n\n  \u003cimg src=\"https://github.com/cszn/SRMD/blob/master/figs/architecture.png\" width=\"605px\"/\u003e \n\n* SRResNet, SRGAN, RRDB, ESRGAN\n\n  \u003cimg src=\"https://github.com/xinntao/ESRGAN/blob/master/figures/architecture.jpg\" width=\"595px\"/\u003e \n  \n* IMDN\n\n  \u003cimg src=\"figs/imdn.png\" width=\"460px\"/\u003e  ----- \u003cimg src=\"figs/imdn_block.png\" width=\"100px\"/\u003e \n\n\n\nTesting\n----------\n|Method | [model_zoo](model_zoo)|\n|---|---|\n| [main_test_dncnn.py](main_test_dncnn.py) |```dncnn_15.pth, dncnn_25.pth, dncnn_50.pth, dncnn_gray_blind.pth, dncnn_color_blind.pth, dncnn3.pth```|\n| [main_test_ircnn_denoiser.py](main_test_ircnn_denoiser.py) | ```ircnn_gray.pth, ircnn_color.pth```| \n| [main_test_fdncnn.py](main_test_fdncnn.py) | ```fdncnn_gray.pth, fdncnn_color.pth, fdncnn_gray_clip.pth, fdncnn_color_clip.pth```|\n| [main_test_ffdnet.py](main_test_ffdnet.py) | ```ffdnet_gray.pth, ffdnet_color.pth, ffdnet_gray_clip.pth, ffdnet_color_clip.pth```|\n| [main_test_srmd.py](main_test_srmd.py) | ```srmdnf_x2.pth, srmdnf_x3.pth, srmdnf_x4.pth, srmd_x2.pth, srmd_x3.pth, srmd_x4.pth```| \n|  | **The above models are converted from MatConvNet.** |\n| [main_test_dpsr.py](main_test_dpsr.py) | ```dpsr_x2.pth, dpsr_x3.pth, dpsr_x4.pth, dpsr_x4_gan.pth```|\n| [main_test_msrresnet.py](main_test_msrresnet.py) | ```msrresnet_x4_psnr.pth, msrresnet_x4_gan.pth```|\n| [main_test_rrdb.py](main_test_rrdb.py) | ```rrdb_x4_psnr.pth, rrdb_x4_esrgan.pth```|\n| [main_test_imdn.py](main_test_imdn.py) | ```imdn_x4.pth```|\n\n[model_zoo](model_zoo)\n--------\n- download link [https://drive.google.com/drive/folders/13kfr3qny7S2xwG9h7v95F5mkWs0OmU0D](https://drive.google.com/drive/folders/13kfr3qny7S2xwG9h7v95F5mkWs0OmU0D)\n\n[trainsets](trainsets)\n----------\n- [https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md](https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md)\n- [train400](https://github.com/cszn/DnCNN/tree/master/TrainingCodes/DnCNN_TrainingCodes_v1.0/data)\n- [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/)\n- [Flickr2K](https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar)\n- optional: use [split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=512, p_overlap=96, p_max=800)](https://github.com/cszn/KAIR/blob/3ee0bf3e07b90ec0b7302d97ee2adb780617e637/utils/utils_image.py#L123) to get ```trainsets/trainH``` with small images for fast data loading\n\n[testsets](testsets)\n-----------\n- [https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md](https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md)\n- [set12](https://github.com/cszn/FFDNet/tree/master/testsets)\n- [bsd68](https://github.com/cszn/FFDNet/tree/master/testsets)\n- [cbsd68](https://github.com/cszn/FFDNet/tree/master/testsets)\n- [kodak24](https://github.com/cszn/FFDNet/tree/master/testsets)\n- [srbsd68](https://github.com/cszn/DPSR/tree/master/testsets/BSD68/GT)\n- set5\n- set14\n- cbsd100\n- urban100\n- manga109\n\n\nReferences\n----------\n```BibTex\n@inproceedings{zhu2023denoising, % DiffPIR\ntitle={Denoising Diffusion Models for Plug-and-Play Image Restoration},\nauthor={Yuanzhi Zhu and Kai Zhang and Jingyun Liang and Jiezhang Cao and Bihan Wen and Radu Timofte and Luc Van Gool},\nbooktitle={IEEE Conference on Computer Vision and Pattern Recognition Workshops},\nyear={2023}\n}\n@article{liang2022vrt,\ntitle={VRT: A Video Restoration Transformer},\nauthor={Liang, Jingyun and Cao, Jiezhang and Fan, Yuchen and Zhang, Kai and Ranjan, Rakesh and Li, Yawei and Timofte, Radu and Van Gool, Luc},\njournal={arXiv preprint arXiv:2022.00000},\nyear={2022}\n}\n@inproceedings{liang2021swinir,\ntitle={SwinIR: Image Restoration Using Swin Transformer},\nauthor={Liang, Jingyun and Cao, Jiezhang and Sun, Guolei and Zhang, Kai and Van Gool, Luc and Timofte, Radu},\nbooktitle={IEEE International Conference on Computer Vision Workshops},\npages={1833--1844},\nyear={2021}\n}\n@inproceedings{zhang2021designing,\ntitle={Designing a Practical Degradation Model for Deep Blind Image Super-Resolution},\nauthor={Zhang, Kai and Liang, Jingyun and Van Gool, Luc and Timofte, Radu},\nbooktitle={IEEE International Conference on Computer Vision},\npages={4791--4800},\nyear={2021}\n}\n@article{zhang2021plug, % DPIR \u0026 DRUNet \u0026 IRCNN\n  title={Plug-and-Play Image Restoration with Deep Denoiser Prior},\n  author={Zhang, Kai and Li, Yawei and Zuo, Wangmeng and Zhang, Lei and Van Gool, Luc and Timofte, Radu},\n  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},\n  year={2021}\n}\n@inproceedings{zhang2020aim, % efficientSR_challenge\n  title={AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results},\n  author={Kai Zhang and Martin Danelljan and Yawei Li and Radu Timofte and others},\n  booktitle={European Conference on Computer Vision Workshops},\n  year={2020}\n}\n@inproceedings{zhang2020deep, % USRNet\n  title={Deep unfolding network for image super-resolution},\n  author={Zhang, Kai and Van Gool, Luc and Timofte, Radu},\n  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n  pages={3217--3226},\n  year={2020}\n}\n@article{zhang2017beyond, % DnCNN\n  title={Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising},\n  author={Zhang, Kai and Zuo, Wangmeng and Chen, Yunjin and Meng, Deyu and Zhang, Lei},\n  journal={IEEE Transactions on Image Processing},\n  volume={26},\n  number={7},\n  pages={3142--3155},\n  year={2017}\n}\n@inproceedings{zhang2017learning, % IRCNN\ntitle={Learning deep CNN denoiser prior for image restoration},\nauthor={Zhang, Kai and Zuo, Wangmeng and Gu, Shuhang and Zhang, Lei},\nbooktitle={IEEE conference on computer vision and pattern recognition},\npages={3929--3938},\nyear={2017}\n}\n@article{zhang2018ffdnet, % FFDNet, FDnCNN\n  title={FFDNet: Toward a fast and flexible solution for CNN-based image denoising},\n  author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},\n  journal={IEEE Transactions on Image Processing},\n  volume={27},\n  number={9},\n  pages={4608--4622},\n  year={2018}\n}\n@inproceedings{zhang2018learning, % SRMD\n  title={Learning a single convolutional super-resolution network for multiple degradations},\n  author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},\n  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n  pages={3262--3271},\n  year={2018}\n}\n@inproceedings{zhang2019deep, % DPSR\n  title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},\n  author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},\n  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n  pages={1671--1681},\n  year={2019}\n}\n@InProceedings{wang2018esrgan, % ESRGAN, MSRResNet\n    author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change},\n    title = {ESRGAN: Enhanced super-resolution generative adversarial networks},\n    booktitle = {The European Conference on Computer Vision Workshops (ECCVW)},\n    month = {September},\n    year = {2018}\n}\n@inproceedings{hui2019lightweight, % IMDN\n  title={Lightweight Image Super-Resolution with Information Multi-distillation Network},\n  author={Hui, Zheng and Gao, Xinbo and Yang, Yunchu and Wang, Xiumei},\n  booktitle={Proceedings of the 27th ACM International Conference on Multimedia (ACM MM)},\n  pages={2024--2032},\n  year={2019}\n}\n@inproceedings{zhang2019aim, % IMDN\n  title={AIM 2019 Challenge on Constrained Super-Resolution: Methods and Results},\n  author={Kai Zhang and Shuhang Gu and Radu Timofte and others},\n  booktitle={IEEE International Conference on Computer Vision Workshops},\n  year={2019}\n}\n@inproceedings{yang2021gan,\n    title={GAN Prior Embedded Network for Blind Face Restoration in the Wild},\n    author={Tao Yang, Peiran Ren, Xuansong Xie, and Lei Zhang},\n    booktitle={IEEE Conference on Computer Vision and Pattern Recognition},\n    year={2021}\n}\n```\n","funding_links":[],"categories":["Toolbox","Python","图像恢复"],"sub_categories":["Libraries","资源传输下载"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcszn%2FKAIR","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcszn%2FKAIR","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcszn%2FKAIR/lists"}