Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mchong6/SOAT
Official PyTorch repo for StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN.
- Host: GitHub
- URL: https://github.com/mchong6/SOAT
- Owner: mchong6
- License: MIT
- Created: 2021-10-28T07:00:01.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2021-11-20T15:38:35.000Z (almost 3 years ago)
- Last Synced: 2024-10-29T18:21:12.110Z (14 days ago)
- Language: Jupyter Notebook
- Size: 82.3 MB
- Stars: 377
- Watchers: 11
- Forks: 56
- Open Issues: 8
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN
![](teaser.jpg)

This is the PyTorch implementation of [StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN](https://arxiv.org/abs/2111.01619). [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/SOAT/blob/main/infinity.ipynb)
**Web Demo**
Integrated into [Huggingface Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio). See the demo for panorama generation of landscapes: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/SOAT)

> **Abstract:**
> Recently, StyleGAN has enabled various image manipulation and editing tasks thanks to its high-quality generation and disentangled latent space. However, additional architectures or task-specific training paradigms are usually required for different tasks. In this work, we take a deeper look at the spatial properties of StyleGAN. We show that with a pretrained StyleGAN along with some operations, without any additional architecture, we can perform comparably to state-of-the-art methods on various tasks, including image blending, panorama generation, generation from a single image, controllable and local multimodal image-to-image translation, and attribute transfer.

## How to use
Everything to get started is in the [colab notebook](https://colab.research.google.com/github/mchong6/SOAT/blob/main/infinity.ipynb).

## Toonification
For toonification, you can train a new model yourself by running
```bash
python train.py
```
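Conceptually, toonification of this kind is typically achieved by blending the weights of the base photo-trained generator with those of a generator fine-tuned on cartoon faces (the layer-swapping idea from the linked toonify repo). A minimal sketch of such weight blending, assuming two compatible `state_dict`s; the key names and layer split below are illustrative, not this repo's exact API:

```python
import torch

def blend_state_dicts(base_sd, toon_sd, alpha_for_key):
    """Interpolate two compatible generator state dicts key by key.

    alpha_for_key(key) -> float in [0, 1]; 1.0 takes the toon weights.
    """
    blended = {}
    for key, base_w in base_sd.items():
        a = alpha_for_key(key)
        blended[key] = (1.0 - a) * base_w + a * toon_sd[key]
    return blended

# Illustrative split: take the coarse (early) conv layers from the
# toonified model for cartoon structure, keep the rest from the base
# model for photo texture.
def coarse_from_toon(key):
    return 1.0 if any(key.startswith(f"convs.{i}.") for i in range(4)) else 0.0
```

The blended dict can then be loaded into a generator with `load_state_dict` to produce a model that mixes cartoon structure with photographic detail.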
For Disney toonification, we use the Disney dataset [here](https://github.com/justinpinkney/toonify). Feel free to experiment with different datasets.

## GAN inversion
To perform GAN inversion with Gaussian regularization in W+ space, run
```bash
python projector.py xxx.jpg
```
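Conceptually, this kind of inversion optimizes one style vector per layer (W+ space) to reconstruct the target image, while a Gaussian prior keeps each vector close to the latent distribution. A minimal sketch of that loop, assuming a rosinality-style generator and precomputed `w_mean`/`w_std`; the loss weights and names are illustrative, and the repo uses a perceptual loss rather than the plain MSE shown here:

```python
import torch
import torch.nn.functional as F

def invert_w_plus(generator, target, w_mean, w_std,
                  n_latent=14, steps=300, lr=0.01, lam=0.01):
    # One style vector per layer (W+), initialized at the mean latent.
    w = w_mean.detach().clone().repeat(1, n_latent, 1).requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img, _ = generator([w], input_is_latent=True)
        rec = F.mse_loss(img, target)               # reconstruction term
        reg = ((w - w_mean) / w_std).pow(2).mean()  # Gaussian regularizer
        loss = rec + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```

The regularizer penalizes style vectors that drift far from the mean latent in units of the latent standard deviation, which tends to keep inverted codes editable rather than overfit to pixels.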
The inverted latent code will be saved to `./inversion_codes/xxx.pt`, which you can load with
```python
source = load_source(['xxx'], generator, device)
source_im, _ = generator(source)
```
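Once latent codes are available, the paper's image-blending task amounts to spatially mixing the generator's intermediate feature maps with a mask, with no extra training. A toy sketch of that masking operation, using hypothetical names (hooking it into the generator's forward pass is repo-specific):

```python
import torch

def blend_features(feat_a, feat_b, mask):
    """Combine two feature maps (N, C, H, W) with a spatial mask in [0, 1]."""
    return mask * feat_a + (1.0 - mask) * feat_b

def half_mask(h, w):
    """Mask taking the left half from A and the right half from B."""
    m = torch.zeros(1, 1, h, w)
    m[..., : w // 2] = 1.0
    return m
```

In practice a soft (feathered) mask at a coarse feature resolution gives seamless transitions, since later layers smooth the boundary.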
## Citation
If you use this code or ideas from our paper, please cite our paper:
```
@article{chong2021stylegan,
title={StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN},
author={Chong, Min Jin and Lee, Hsin-Ying and Forsyth, David},
journal={arXiv preprint arXiv:2111.01619},
year={2021}
}
```

## Acknowledgments
This code borrows from [StyleGAN2 by rosinality](https://github.com/rosinality/stylegan2-pytorch).