Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/tensorlayer/cyclegan
CycleGAN in 300 lines of code
- Host: GitHub
- URL: https://github.com/tensorlayer/cyclegan
- Owner: tensorlayer
- Created: 2019-06-26T08:54:18.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-01-28T21:52:02.000Z (almost 5 years ago)
- Last Synced: 2024-06-11T01:22:01.130Z (5 months ago)
- Language: Python
- Homepage: http://tensorlayer.org
- Size: 8.95 MB
- Stars: 20
- Watchers: 3
- Forks: 4
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- awesome-tensorlayer - CycleGAN - convolution based on the paper by [[J. Zhu et al, 2017]](https://arxiv.org/abs/1703.10593). (4. GAN / 1.2 DatasetAPI and TFRecord Examples)
README
# The Simplest CycleGAN Full Implementation
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
## Requirements

See `requirements.txt`.
## TODO

- replay buffer

## Run
Training data is downloaded automatically by `data.py`.
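The download logic lives in `data.py` and is not reproduced here; a minimal download-if-missing sketch (hypothetical helper and file names, standard library only) might look like:

```python
import os
import urllib.request

def maybe_download(url, target_dir, filename):
    """Fetch url into target_dir/filename unless the file already exists."""
    os.makedirs(target_dir, exist_ok=True)
    path = os.path.join(target_dir, filename)
    if not os.path.exists(path):
        print(f"Downloading {url} -> {path}")
        urllib.request.urlretrieve(url, path)
    return path
```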
```bash
python3 train.py
```

## Distributed Training
GAN-like networks are particularly challenging to train at scale because they often use multiple optimizers.
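For instance, CycleGAN alternates updates between generators and discriminators, each driven by its own optimizer state and learning rate. A toy plain-Python sketch of that alternating pattern (hypothetical quadratic losses, not this repo's actual training loop):

```python
# Toy illustration of the alternating two-optimizer pattern (hypothetical
# quadratic losses, plain Python; NOT this repo's actual training loop).
def sgd_step(params, grads, lr):
    """One vanilla SGD update; each player keeps its own learning rate."""
    return [p - lr * g for p, g in zip(params, grads)]

g_params, d_params = [5.0], [0.0]   # "generator" and "discriminator" weights
for _ in range(100):
    # discriminator step: minimize (d - g)^2 with respect to d only
    d_params = sgd_step(d_params, [2 * (d_params[0] - g_params[0])], lr=0.1)
    # generator step: minimize g^2 with respect to g only (separate optimizer)
    g_params = sgd_step(g_params, [2 * g_params[0]], lr=0.05)
```

Because the two updates are interleaved, distributing them is harder than distributing a single-optimizer model.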
In addition, GANs consume a large amount of GPU memory and are usually sensitive to batch size.

To speed up training, we thus use the novel [KungFu](https://github.com/lsds/KungFu) distributed training library.
KungFu is easy to install and run compared to today's Horovod library, which depends on OpenMPI; you can install it in a few lines by following the [instructions](https://github.com/lsds/KungFu#install). KungFu is also fast and scalable compared to Horovod and parameter servers, making it an attractive option for GAN networks.

In the following, we assume that you have added `kungfu-run` to your `$PATH`.
(i) To run on a machine with 4 GPUs:
```bash
kungfu-run -np 4 python3 train.py --parallel --kf-optimizer=sma
```

The default KungFu optimizer is `sma`, which implements synchronous model averaging.
`sma` decouples the batch size from the number of GPUs, making training robust to hyper-parameters when scaling.
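Conceptually, synchronous model averaging lets every worker take local SGD steps and then replaces each worker's weights with the cross-worker mean; since each worker keeps its own local batch size, adding workers changes the averaging, not the batch size. A plain-Python sketch of the averaging step (toy numbers, not KungFu's actual implementation):

```python
def average_models(models):
    """Element-wise mean of each worker's parameter list (the SMA step)."""
    n = len(models)
    return [sum(ws) / n for ws in zip(*models)]

# Two hypothetical workers hold diverged copies of a 3-parameter model
# after a few local SGD steps on their own batches.
worker_a = [0.2, 1.0, -0.4]
worker_b = [0.4, 0.6, 0.0]

averaged = average_models([worker_a, worker_b])
# Every worker resumes training from `averaged`, keeping its local batch size.
```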
You can also use other KungFu optimizers: `sync-sgd` (which is the same as the DistributedOptimizer in Horovod)
and `async-sgd` if you train your model in a cluster that has limited bandwidth and stragglers.

(ii) To run on 2 machines (each having a NIC `eth0`, with IPs `192.168.0.1` and `192.168.0.2`):
```bash
kungfu-run -np 8 -H 192.168.0.1:4,192.168.0.2:4 -nic eth0 python3 train.py --parallel --kf-optimizer=sma
```

## Results
## Author
- @zsdonghao
- @luomai

### Discussion
- [TensorLayer Slack](https://join.slack.com/t/tensorlayer/shared_invite/enQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc)
- [TensorLayer WeChat](https://github.com/tensorlayer/tensorlayer-chinese/blob/master/docs/wechat_group.md)

### License
- For academic and non-commercial use only.
- For commercial use, please contact [email protected].