Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/tensorlayer/SRGAN
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
- Host: GitHub
- URL: https://github.com/tensorlayer/SRGAN
- Owner: tensorlayer
- Created: 2017-04-20T17:08:27.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2024-02-22T02:05:20.000Z (9 months ago)
- Last Synced: 2024-10-29T15:34:58.662Z (14 days ago)
- Topics: cnn, gan, srgan, super-resolution, tensorflow, tensorlayer, vgg, vgg16, vgg19
- Language: Python
- Homepage: https://github.com/tensorlayer/tensorlayerx
- Size: 145 MB
- Stars: 3,314
- Watchers: 98
- Forks: 812
- Open Issues: 149
Metadata Files:
- Readme: README.md
README
## Super Resolution Examples
- Implementation of ["Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"](https://arxiv.org/abs/1609.04802)
- For earlier versions, please check the [srgan releases](https://github.com/tensorlayer/srgan/releases) and [TensorLayer](https://github.com/tensorlayer/TensorLayer).
- For more computer vision applications, check [TLXCV](https://github.com/tensorlayer/TLXCV).
### SRGAN Architecture
### Prepare Data and Pre-trained VGG
1. Download the pretrained VGG19 model weights from [here](https://drive.google.com/file/d/1CLw6Cn3yNI1N15HyX99_Zy9QnDcgP3q7/view?usp=sharing).
2. You need high-resolution images for training.
   - In this experiment, I used images from the [DIV2K - bicubic downscaling x4 competition](http://www.vision.ee.ethz.ch/ntire17/), so the hyper-parameters in `config.py` (like the number of epochs) were selected based on that dataset; if you switch to a larger dataset, you can reduce the number of epochs.
   - If you don't want to use the DIV2K dataset, you can also use [Yahoo MirFlickr25k](http://press.liacs.nl/mirflickr/mirdownload.html), which you can download simply with `train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)` in `main.py`.
   - If you want to use your own images, set the path to your image folder via `config.TRAIN.hr_img_path` in `config.py`.

### Run
🔥🔥🔥🔥🔥🔥 You need to install [TensorLayerX](https://github.com/tensorlayer/TensorLayerX#installation) first!
🔥🔥🔥🔥🔥🔥 Please install TensorLayerX from source:
```bash
pip install git+https://github.com/tensorlayer/tensorlayerx.git
```

#### Train
- Set your image folder in `config.py`. If you downloaded the [DIV2K - bicubic downscaling x4 competition](http://www.vision.ee.ethz.ch/ntire17/) dataset, you don't need to change it.
- Other links for DIV2K, in case you can't find it: [test_LR_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/validation_release/DIV2K_test_LR_bicubic_X4.zip), [train_HR](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip), [train_LR_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_LR_bicubic_X4.zip), [valid_HR](https://data.vision.ee.ethz.ch/cvl/DIV2K/validation_release/DIV2K_valid_HR.zip), [valid_LR_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_LR_bicubic_X4.zip).

```python
config.TRAIN.img_path = "your_image_folder/"
```
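Since training reads every file in that folder, a quick sanity check can save a wasted run. A minimal sketch, stdlib only; the `count_images` helper and its extension list are illustrative, not part of this repo:

```python
import os

def count_images(folder, exts=(".png", ".jpg", ".jpeg", ".bmp")):
    """Count image files in `folder` so you can fail fast before training."""
    if not os.path.isdir(folder):
        raise FileNotFoundError(f"HR image folder not found: {folder}")
    return sum(1 for name in os.listdir(folder) if name.lower().endswith(exts))
```

Run it once on `config.TRAIN.img_path` before launching a long training job; zero images almost always means a wrong path.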
Your directory structure should look like this:

```
srgan/
├── config.py
├── srgan.py
├── train.py
├── vgg.py
├── model
│   └── vgg19.npy
└── DIV2K
    ├── DIV2K_train_HR
    ├── DIV2K_train_LR_bicubic
    ├── DIV2K_valid_HR
    └── DIV2K_valid_LR_bicubic
```
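The layout above can also be checked programmatically before launching training. A minimal sketch; `check_layout` and its list of required paths are illustrative, not part of this repo:

```python
import os

def check_layout(root="srgan"):
    """Return the list of expected files/folders missing under `root`.
    An empty list means the training layout is complete."""
    required = [
        "config.py", "srgan.py", "train.py", "vgg.py",
        os.path.join("model", "vgg19.npy"),
        os.path.join("DIV2K", "DIV2K_train_HR"),
    ]
    return [p for p in required if not os.path.exists(os.path.join(root, p))]
```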
- Start training.
```bash
python train.py
```

🔥 Modify one line of code in **train.py** to easily switch to any framework!
```python
import os
os.environ['TL_BACKEND'] = 'tensorflow'
# os.environ['TL_BACKEND'] = 'mindspore'
# os.environ['TL_BACKEND'] = 'paddle'
# os.environ['TL_BACKEND'] = 'pytorch'
```
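The variable must be set before `tensorlayerx` is imported, because the backend choice is read once at import time. The selection itself is just an environment-variable lookup; a minimal stand-alone sketch of that pattern (`select_backend` is illustrative, not a TensorLayerX API):

```python
import os

def select_backend(default="tensorflow"):
    """Return the configured backend, falling back to a default.

    Mirrors the TL_BACKEND pattern: the variable must be exported
    before the library is imported, since it is consulted only once."""
    backend = os.environ.get("TL_BACKEND", default)
    if backend not in {"tensorflow", "mindspore", "paddle", "pytorch"}:
        raise ValueError(f"Unsupported backend: {backend}")
    return backend
```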
🚧 We will support PyTorch as a backend soon.

#### Evaluation
🔥 We have trained SRGAN on the DIV2K dataset.
🔥 Download the model weights as follows.

|              | SRGAN_g | SRGAN_d |
|------------- |---------|---------|
| TensorFlow | [Baidu](https://pan.baidu.com/s/118uUg3oce_3NZQCIWHVjmA?pwd=p9li), [Googledrive](https://drive.google.com/file/d/1GlU9At-5XEDilgnt326fyClvZB_fsaFZ/view?usp=sharing) |[Baidu](https://pan.baidu.com/s/1DOpGzDJY5PyusKzaKqbLOg?pwd=g2iy), [Googledrive](https://drive.google.com/file/d/1RpOtVcVK-yxnVhNH4KSjnXHDvuU_pq3j/view?usp=sharing) |
| PaddlePaddle | [Baidu](https://pan.baidu.com/s/1ngBpleV5vQZQqNE_8djDIg?pwd=s8wc), [Googledrive](https://drive.google.com/file/d/1GRNt_ZsgorB19qvwN5gE6W9a_bIPLkg1/view?usp=sharing) | [Baidu](https://pan.baidu.com/s/1nSefLNRanFImf1DskSVpCg?pwd=befc), [Googledrive](https://drive.google.com/file/d/1Jf6W1ZPdgtmUSfrQ5mMZDB_hOCVU-zFo/view?usp=sharing) |
| MindSpore    | 🚧 Coming soon! | 🚧 Coming soon! |
| PyTorch      | 🚧 Coming soon! | 🚧 Coming soon! |

Download the weights files and put them under the folder `srgan/models/`.
Your directory structure should look like this:

```
srgan/
├── config.py
├── srgan.py
├── train.py
├── vgg.py
├── model
│   └── vgg19.npy
├── DIV2K
│   ├── DIV2K_train_HR
│   ├── DIV2K_train_LR_bicubic
│   ├── DIV2K_valid_HR
│   └── DIV2K_valid_LR_bicubic
└── models
    ├── g.npz  # You should rename the weights file.
    └── d.npz  # e.g. if you set os.environ['TL_BACKEND'] = 'tensorflow', rename srgan-g-tensorflow.npz to g.npz.
```
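The renaming described in the comments above can be scripted. A minimal sketch; `rename_weights` is illustrative, and it assumes the downloaded files all follow the `srgan-<role>-<backend>.npz` naming that the README shows for the TensorFlow generator:

```python
import os
import shutil

def rename_weights(backend, models_dir="models"):
    """Copy per-backend weight files to the generic names train.py expects,
    e.g. srgan-g-tensorflow.npz -> g.npz and srgan-d-tensorflow.npz -> d.npz."""
    for role in ("g", "d"):
        src = os.path.join(models_dir, f"srgan-{role}-{backend}.npz")
        dst = os.path.join(models_dir, f"{role}.npz")
        shutil.copyfile(src, dst)
```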
- Start evaluation.
```bash
python train.py --mode=eval
```

Results will be saved under the folder `srgan/samples/`.
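To inspect the evaluation output quickly, you can list the most recently written samples. A minimal sketch, stdlib only; `latest_samples` is illustrative, not part of this repo:

```python
import glob
import os

def latest_samples(folder="samples", n=5):
    """Return paths of the n most recently modified files in `folder`."""
    paths = glob.glob(os.path.join(folder, "*"))
    return sorted(paths, key=os.path.getmtime, reverse=True)[:n]
```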
### Results
### Reference
* [1] [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https://arxiv.org/abs/1609.04802)
* [2] [Is the deconvolution layer the same as a convolutional layer?](https://arxiv.org/abs/1609.07009)

### Citation
If you find this project useful, we would be grateful if you cite the TensorLayer papers:

```
@article{tensorlayer2017,
author = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
journal = {ACM Multimedia},
title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
url = {http://tensorlayer.org},
year = {2017}
}

@inproceedings{tensorlayer2021,
title={TensorLayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
author={Lai, Cheng and Han, Jiarong and Dong, Hao},
booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
pages={1--3},
year={2021},
organization={IEEE}
}
```

### Other Projects
- [Style Transfer](https://github.com/tensorlayer/adaptive-style-transfer)
- [Pose Estimation](https://github.com/tensorlayer/openpose)

### Discussion
- [TensorLayer Slack](https://join.slack.com/t/tensorlayer/shared_invite/enQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc)
- [TensorLayer WeChat](https://github.com/tensorlayer/tensorlayer-chinese/blob/master/docs/wechat_group.md)

### License
- For academic and non-commercial use only.
- For commercial use, please contact [email protected].