# PyTorch-CycleGAN
A clean and readable PyTorch implementation of CycleGAN (https://arxiv.org/abs/1703.10593).

## Prerequisites
The code is intended to work with ```Python 3.6.x```; it has not been tested with earlier versions.

### [PyTorch & torchvision](http://pytorch.org/)
Follow the instructions at [pytorch.org](http://pytorch.org) for your current setup.

### [Visdom](https://github.com/facebookresearch/visdom)
Used to plot loss graphs and display output images in a web browser view:
```
pip3 install visdom
```

## Training
### 1. Set up the dataset
First, you will need to download and set up a dataset.
The easiest way is to use one of the datasets hosted on UC Berkeley's repository:
```
./download_dataset <dataset_name>
```
Valid values for <dataset_name> are: apple2orange, summer2winter_yosemite, horse2zebra, monet2photo, cezanne2photo, ukiyoe2photo, vangogh2photo, maps, cityscapes, facades, iphone2dslr_flower, ae_photos.

Alternatively, you can build your own dataset by setting up the following directory structure:

    .
    ├── datasets
    |   ├── <dataset_name>         # e.g. brucewayne2batman
    |   |   ├── train              # Training
    |   |   |   ├── A              # Contains domain A images (e.g. Bruce Wayne)
    |   |   |   └── B              # Contains domain B images (e.g. Batman)
    |   |   └── test               # Testing
    |   |       ├── A              # Contains domain A images (e.g. Bruce Wayne)
    |   |       └── B              # Contains domain B images (e.g. Batman)

### 2. Train!
```
./train --dataroot datasets/<dataset_name>/ --cuda
```
This command starts a training session on the images under the *dataroot/train* directory, using the hyperparameters that gave the best results according to the CycleGAN authors. You are free to change them; see ```./train --help``` for a description.

The weights of both generators and both discriminators are saved under the output directory.

If you don't own a GPU, remove the --cuda option (although I advise you to get one!).

You can also view training progress and live output images by running ```python3 -m visdom.server``` in another terminal and opening [http://localhost:8097/](http://localhost:8097/) in your favourite web browser.
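The directory layout above implies unaligned A/B pairs: domain A is walked in order while a random domain B image is drawn each time. A simplified, hypothetical loader sketching that pairing logic (class and method names are illustrative, not this repo's actual dataset code, which would additionally open the images and apply transforms):

```python
import glob
import os
import random


class UnpairedImagePairs:
    """Minimal sketch of unpaired A/B sampling for CycleGAN-style training.

    Domain A is indexed sequentially; a domain B image is drawn at random,
    since the two datasets are unaligned and have no natural pairing.
    """

    def __init__(self, root, mode="train", seed=None):
        self.files_A = sorted(glob.glob(os.path.join(root, mode, "A", "*")))
        self.files_B = sorted(glob.glob(os.path.join(root, mode, "B", "*")))
        self.rng = random.Random(seed)

    def __len__(self):
        # One epoch covers the larger of the two domains once.
        return max(len(self.files_A), len(self.files_B))

    def __getitem__(self, index):
        path_A = self.files_A[index % len(self.files_A)]
        path_B = self.rng.choice(self.files_B)  # random draw: unpaired
        return path_A, path_B
```

A real loader would return decoded, transformed tensors rather than paths, but the indexing scheme is the part that matters for unpaired training.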
This should produce training loss curves like the ones below (default parameters, horse2zebra dataset):

![Generator loss](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/loss_G.png)
![Discriminator loss](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/loss_D.png)
![Generator GAN loss](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/loss_G_GAN.png)
![Generator identity loss](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/loss_G_identity.png)
![Generator cycle loss](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/loss_G_cycle.png)

## Testing
```
./test --dataroot datasets/<dataset_name>/ --cuda
```
This command takes the images under the *dataroot/test* directory, runs them through the generators, and saves the output under the *output/A* and *output/B* directories. As with training, some parameters, such as which weights to load, can be tweaked; see ```./test --help``` for more information.

Examples of generated outputs (default parameters, horse2zebra dataset):

![Real horse](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/real_A.jpg)
![Fake zebra](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/fake_B.png)
![Real zebra](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/real_B.jpg)
![Fake horse](https://github.com/ai-tor/PyTorch-CycleGAN/raw/master/output/fake_A.png)

## License
This project is licensed under the GPL v3 License; see the [LICENSE.md](LICENSE.md) file for details.

## Acknowledgments
The code is essentially a cleaner and less obscured implementation of [pytorch-CycleGAN-and-pix2pix](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix).
All credit goes to the authors of [CycleGAN](https://arxiv.org/abs/1703.10593): Zhu, Jun-Yan; Park, Taesung; Isola, Phillip; and Efros, Alexei A.
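For reference, the three generator loss terms plotted earlier (GAN, identity, cycle) are weighted components of a single training objective, as described in the CycleGAN paper. A minimal arithmetic sketch, with plain floats standing in for tensor losses; the exact weights used by this repo are an assumption (the paper suggests 10.0 for the cycle term and half of that for identity):

```python
def total_generator_loss(loss_gan, loss_cycle, loss_identity,
                         lambda_cycle=10.0, lambda_identity=5.0):
    """Combine the three plotted generator loss terms into one objective.

    loss_G = GAN + lambda_cycle * cycle + lambda_identity * identity.
    The default weights follow the CycleGAN paper's suggestion; whether
    this repo uses exactly these values is an assumption.
    """
    return loss_gan + lambda_cycle * loss_cycle + lambda_identity * loss_identity


# Example: GAN loss 1.0, cycle loss 0.2, identity loss 0.1
# gives 1.0 + 10.0 * 0.2 + 5.0 * 0.1 = 3.5 for loss_G.
```

The large cycle weight is what keeps translations content-preserving: the GAN term alone would let the generator produce any image the discriminator accepts.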