{"id":18421104,"url":"https://github.com/aitorzip/pytorch-srgan","last_synced_at":"2025-04-09T08:09:49.005Z","repository":{"id":56727041,"uuid":"98061813","full_name":"aitorzip/PyTorch-SRGAN","owner":"aitorzip","description":"A modern PyTorch implementation of SRGAN","archived":false,"fork":false,"pushed_at":"2017-11-09T11:22:19.000Z","size":152603,"stargazers_count":363,"open_issues_count":10,"forks_count":92,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-04-02T05:09:23.155Z","etag":null,"topics":["cnn","computer-vision","convolutional-neural-networks","deep-learning","gan","generative-adversarial-network","pytorch","pytorch-gan","srgan","super-resolution"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/1609.04802","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/aitorzip.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-07-22T22:45:25.000Z","updated_at":"2025-03-05T13:02:03.000Z","dependencies_parsed_at":"2022-08-16T00:31:08.533Z","dependency_job_id":null,"html_url":"https://github.com/aitorzip/PyTorch-SRGAN","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aitorzip%2FPyTorch-SRGAN","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aitorzip%2FPyTorch-SRGAN/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aitorzip%2FPyTorch-SRGAN/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/aitorzip%2FPyTorch-SRGAN/manifests","owner_url":"https://repos.ecosyste.
ms/api/v1/hosts/GitHub/owners/aitorzip","download_url":"https://codeload.github.com/aitorzip/PyTorch-SRGAN/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247999861,"owners_count":21031046,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cnn","computer-vision","convolutional-neural-networks","deep-learning","gan","generative-adversarial-network","pytorch","pytorch-gan","srgan","super-resolution"],"created_at":"2024-11-06T04:24:24.435Z","updated_at":"2025-04-09T08:09:48.986Z","avatar_url":"https://github.com/aitorzip.png","language":"Python","readme":"# PyTorch-SRGAN\nA modern PyTorch implementation of SRGAN\n\nIt is closely based on the paper __Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network__ published by the Twitter team (https://arxiv.org/abs/1609.04802), but I replaced the activations with Swish (https://arxiv.org/abs/1710.05941).\n\nYou can start training out-of-the-box with the CIFAR-10 or CIFAR-100 datasets; to reproduce the paper's results, however, you will need to download and clean the ImageNet dataset yourself. Results and weights are provided for the ImageNet dataset.
\n\nContributions are welcome!\n\n## Requirements\n\n* PyTorch\n* torchvision\n* tensorboard_logger (https://github.com/TeamHG-Memex/tensorboard_logger)\n\n## Training\n\n```\nusage: train [-h] [--dataset DATASET] [--dataroot DATAROOT]\n             [--workers WORKERS] [--batchSize BATCHSIZE]\n             [--imageSize IMAGESIZE] [--upSampling UPSAMPLING]\n             [--nEpochs NEPOCHS] [--generatorLR GENERATORLR]\n             [--discriminatorLR DISCRIMINATORLR] [--cuda] [--nGPU NGPU]\n             [--generatorWeights GENERATORWEIGHTS]\n             [--discriminatorWeights DISCRIMINATORWEIGHTS] [--out OUT]\n```\n\nExample: ```./train --cuda```\n\nThis will start a training session on the GPU. First it will pre-train the generator with an MSE loss for 2 epochs, then it will train the full GAN (generator + discriminator) for 100 epochs, using content (MSE + VGG) and adversarial losses. Although weights are already provided in the repository, this script will also generate them in the checkpoints directory.\n\n## Testing\n\n```\nusage: test [-h] [--dataset DATASET] [--dataroot DATAROOT] [--workers WORKERS]\n            [--batchSize BATCHSIZE] [--imageSize IMAGESIZE]\n            [--upSampling UPSAMPLING] [--cuda] [--nGPU NGPU]\n            [--generatorWeights GENERATORWEIGHTS]\n            [--discriminatorWeights DISCRIMINATORWEIGHTS]\n```\n\nExample: ```./test --cuda```\n\nThis will start a testing session on the GPU. 
It will display mean error values and save the generated images to the output directory in all three versions: low resolution, high resolution (original), and high resolution (generated).\n\n## Results\n\n### Training\nThe following results have been obtained with the current training setup:\n\n* Dataset: 350K randomly selected ImageNet samples\n* Input image size: 24x24\n* Output image size: 96x96 (4x per side, 16x the pixel count)\n\nOther training parameters are the defaults of the _train_ script.\n\n![Tensorboard training graphs](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/training_results.png)\n\n### Testing\nTesting has been executed on 128 randomly selected ImageNet samples (disjoint from the training set).\n\n```[7/8] Discriminator_Loss: 1.4123 Generator_Loss (Content/Advers/Total): 0.0901/0.6152/0.0908```\n\n### Examples\nSee more under the _output_ directory.\n\n__High resolution / Low resolution / Recovered High Resolution__\n\n![Original doggy](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_real/41.png)\n\u003cimg src=\"https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/low_res/41.png\" alt=\"Low res doggy\" width=\"96\" height=\"96\"\u003e\n![Generated doggy](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_fake/41.png)\n\n![Original woman](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_real/38.png)\n\u003cimg src=\"https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/low_res/38.png\" alt=\"Low res woman\" width=\"96\" height=\"96\"\u003e\n![Generated woman](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_fake/38.png)\n\n![Original hair](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_real/127.png)\n\u003cimg src=\"https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/low_res/127.png\" alt=\"Low res hair\" width=\"96\" height=\"96\"\u003e\n![Generated 
hair](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_fake/127.png)\n\n![Original sand](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_real/72.png)\n\u003cimg src=\"https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/low_res/72.png\" alt=\"Low res sand\" width=\"96\" height=\"96\"\u003e\n![Generated sand](https://raw.githubusercontent.com/ai-tor/PyTorchSRGAN/master/output/high_res_fake/72.png)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faitorzip%2Fpytorch-srgan","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Faitorzip%2Fpytorch-srgan","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Faitorzip%2Fpytorch-srgan/lists"}