{"id":13482715,"url":"https://github.com/tensorlayer/srgan","last_synced_at":"2025-05-14T20:02:00.630Z","repository":{"id":37952708,"uuid":"88890801","full_name":"tensorlayer/SRGAN","owner":"tensorlayer","description":"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network","archived":false,"fork":false,"pushed_at":"2024-02-22T02:05:20.000Z","size":151663,"stargazers_count":3400,"open_issues_count":153,"forks_count":814,"subscribers_count":96,"default_branch":"master","last_synced_at":"2025-04-06T10:07:45.085Z","etag":null,"topics":["cnn","gan","srgan","super-resolution","tensorflow","tensorlayer","vgg","vgg16","vgg19"],"latest_commit_sha":null,"homepage":"https://github.com/tensorlayer/tensorlayerx","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorlayer.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2017-04-20T17:08:27.000Z","updated_at":"2025-04-06T06:48:00.000Z","dependencies_parsed_at":"2024-02-22T03:23:42.741Z","dependency_job_id":"909c3fc3-b61a-4d67-8953-520c8e8d3e8c","html_url":"https://github.com/tensorlayer/SRGAN","commit_stats":null,"previous_names":[],"tags_count":11,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2FSRGAN","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2FSRGAN/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2FSRGAN/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2FSRGAN/manifes
ts","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorlayer","download_url":"https://codeload.github.com/tensorlayer/SRGAN/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248718771,"owners_count":21150637,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cnn","gan","srgan","super-resolution","tensorflow","tensorlayer","vgg","vgg16","vgg19"],"created_at":"2024-07-31T17:01:04.805Z","updated_at":"2025-04-13T13:23:44.708Z","avatar_url":"https://github.com/tensorlayer.png","language":"Python","readme":"## Super Resolution Examples\n\n- Implementation of [\"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network\"](https://arxiv.org/abs/1609.04802)\n\n- For earlier versions, please check the [srgan releases](https://github.com/tensorlayer/srgan/releases) and [tensorlayer](https://github.com/tensorlayer/TensorLayer).\n\n- For more computer vision applications, check [TLXCV](https://github.com/tensorlayer/TLXCV)\n\n\n### SRGAN Architecture\n\n\n\u003ca href=\"https://github.com/tensorlayer/TensorLayerX\"\u003e\n\u003cdiv align=\"center\"\u003e\n\t\u003cimg src=\"img/model.jpeg\" width=\"80%\" height=\"10%\"/\u003e\n\u003c/div\u003e\n\u003c/a\u003e\n\u003ca href=\"https://github.com/tensorlayer/TensorLayerX\"\u003e\n\u003cdiv align=\"center\"\u003e\n\t\u003cimg src=\"img/SRGAN_Result3.png\" width=\"80%\" height=\"50%\"/\u003e\n\u003c/div\u003e\n\u003c/a\u003e\n\n### Prepare Data and Pre-trained VGG\n\n
- 1. You need to download the pretrained VGG19 model weights from [here](https://drive.google.com/file/d/1CLw6Cn3yNI1N15HyX99_Zy9QnDcgP3q7/view?usp=sharing).\n- 2. You need to have the high resolution images for training.\n  -  In this experiment, I used images from the [DIV2K - bicubic downscaling x4 competition](http://www.vision.ee.ethz.ch/ntire17/), so the hyper-parameters in `config.py` (such as the number of epochs) were selected based on that dataset; if you switch to a larger dataset, you can reduce the number of epochs. \n  -  If you don't want to use the DIV2K dataset, you can also use [Yahoo MirFlickr25k](http://press.liacs.nl/mirflickr/mirdownload.html); simply download it using `train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)` in `main.py`. \n  -  If you want to use your own images, you can set the path to your image folder via `config.TRAIN.hr_img_path` in `config.py`.\n\n\n\n### Run\n\n🔥🔥🔥🔥🔥🔥 You need to install [TensorLayerX](https://github.com/tensorlayer/TensorLayerX#installation) first!\n\n🔥🔥🔥🔥🔥🔥 Please install TensorLayerX from source:\n\n```bash\npip install git+https://github.com/tensorlayer/tensorlayerx.git\n```\n\n#### Train\n- Set your image folder in `config.py`. If you downloaded the [DIV2K - bicubic downscaling x4 competition](http://www.vision.ee.ethz.ch/ntire17/) dataset, you don't need to change it.
\n- Other links for DIV2K, in case you can't find it: [test\_LR\_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/validation_release/DIV2K_test_LR_bicubic_X4.zip), [train_HR](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip), [train\_LR\_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_LR_bicubic_X4.zip), [valid_HR](https://data.vision.ee.ethz.ch/cvl/DIV2K/validation_release/DIV2K_valid_HR.zip), [valid\_LR\_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_LR_bicubic_X4.zip).\n\n```python\nconfig.TRAIN.img_path = \"your_image_folder/\"\n```\nYour directory structure should look like this:\n\n```\nsrgan/\n    └── config.py\n    └── srgan.py\n    └── train.py\n    └── vgg.py\n    └── model\n          └── vgg19.npy\n    └── DIV2K\n          ├── DIV2K_train_HR\n          ├── DIV2K_train_LR_bicubic\n          ├── DIV2K_valid_HR\n          └── DIV2K_valid_LR_bicubic\n\n```\n\n- Start training.\n\n```bash\npython train.py\n```\n\n🔥 Modify one line of code in **train.py** to easily switch to any framework!\n\n```python\nimport os\nos.environ['TL_BACKEND'] = 'tensorflow'\n# os.environ['TL_BACKEND'] = 'mindspore'\n# os.environ['TL_BACKEND'] = 'paddle'\n# os.environ['TL_BACKEND'] = 'pytorch'\n```\n🚧 We will support PyTorch as a backend soon.\n\n\n#### Evaluation\n\n🔥 We have trained SRGAN on the DIV2K dataset.\n🔥 Download the model weights as follows.\n\n
|              | SRGAN_g | SRGAN_d |\n|------------- |---------|---------|\n| TensorFlow   | [Baidu](https://pan.baidu.com/s/118uUg3oce_3NZQCIWHVjmA?pwd=p9li), [Googledrive](https://drive.google.com/file/d/1GlU9At-5XEDilgnt326fyClvZB_fsaFZ/view?usp=sharing) | [Baidu](https://pan.baidu.com/s/1DOpGzDJY5PyusKzaKqbLOg?pwd=g2iy), [Googledrive](https://drive.google.com/file/d/1RpOtVcVK-yxnVhNH4KSjnXHDvuU_pq3j/view?usp=sharing) |\n| PaddlePaddle | [Baidu](https://pan.baidu.com/s/1ngBpleV5vQZQqNE_8djDIg?pwd=s8wc), [Googledrive](https://drive.google.com/file/d/1GRNt_ZsgorB19qvwN5gE6W9a_bIPLkg1/view?usp=sharing) | [Baidu](https://pan.baidu.com/s/1nSefLNRanFImf1DskSVpCg?pwd=befc), [Googledrive](https://drive.google.com/file/d/1Jf6W1ZPdgtmUSfrQ5mMZDB_hOCVU-zFo/view?usp=sharing) |\n| MindSpore    | 🚧Coming soon!    | 🚧Coming soon!     |\n| PyTorch      | 🚧Coming soon!    | 🚧Coming soon!     |\n\n\nDownload the weights files and put them under the folder srgan/models/.\n\nYour directory structure should look like this:\n\n```\nsrgan/\n    └── config.py\n    └── srgan.py\n    └── train.py\n    └── vgg.py\n    └── model\n          └── vgg19.npy\n    └── DIV2K\n          ├── DIV2K_train_HR\n          ├── DIV2K_train_LR_bicubic\n          ├── DIV2K_valid_HR\n          └── DIV2K_valid_LR_bicubic\n    └── models\n          ├── g.npz  # You should rename the weights file.\n          └── d.npz  # If you set os.environ['TL_BACKEND'] = 'tensorflow', you should rename srgan-g-tensorflow.npz to g.npz.\n\n```\n\n- Start evaluation.\n```bash\npython train.py --mode=eval\n```\n\nResults will be saved under the folder srgan/samples/.
\n\n### Results\n\n\u003ca href=\"http://tensorlayer.readthedocs.io\"\u003e\n\u003cdiv align=\"center\"\u003e\n\t\u003cimg src=\"img/SRGAN_Result2.png\" width=\"80%\" height=\"50%\"/\u003e\n\u003c/div\u003e\n\u003c/a\u003e\n\n\n### Reference\n* [1] [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https://arxiv.org/abs/1609.04802)\n* [2] [Is the deconvolution layer the same as a convolutional layer?](https://arxiv.org/abs/1609.07009)\n\n\n\n### Citation\nIf you find this project useful, we would be grateful if you cite the TensorLayer paper:\n\n```\n@article{tensorlayer2017,\nauthor = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},\njournal = {ACM Multimedia},\ntitle = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},\nurl = {http://tensorlayer.org},\nyear = {2017}\n}\n\n@inproceedings{tensorlayer2021,\n  title={TensorLayer 3.0: A Deep Learning Library Compatible With Multiple Backends},\n  author={Lai, Cheng and Han, Jiarong and Dong, Hao},\n  booktitle={2021 IEEE International Conference on Multimedia \\\u0026 Expo Workshops (ICMEW)},\n  pages={1--3},\n  year={2021},\n  organization={IEEE}\n}\n```\n\n### Other Projects\n\n- [Style Transfer](https://github.com/tensorlayer/adaptive-style-transfer)\n- [Pose Estimation](https://github.com/tensorlayer/openpose)\n\n
### Discussion\n\n- [TensorLayer Slack](https://join.slack.com/t/tensorlayer/shared_invite/enQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc)\n- [TensorLayer WeChat](https://github.com/tensorlayer/tensorlayer-chinese/blob/master/docs/wechat_group.md)\n\n### License\n\n- For academic and non-commercial use only.\n- For commercial use, please contact tensorlayer@gmail.com.\n","funding_links":[],"categories":["Models/Projects","Uncategorized","Model Deployment library","Super Resolution","4. GAN"],"sub_categories":["Uncategorized","Tensorflow \u003ca name=\"tensorflow\"/\u003e","1.2 DatasetAPI and TFRecord Examples"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorlayer%2Fsrgan","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorlayer%2Fsrgan","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorlayer%2Fsrgan/lists"}