Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ttop32/arcaneanimegan
AnimeGAN2 trained on Arcane
- Host: GitHub
- URL: https://github.com/ttop32/arcaneanimegan
- Owner: ttop32
- License: mit
- Created: 2021-12-24T03:46:19.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2022-01-06T06:42:05.000Z (about 3 years ago)
- Last Synced: 2023-03-10T11:07:19.523Z (almost 2 years ago)
- Topics: alignment, anime, animegan, animegan2, animeganv2, arcanegan, blending, blur, face-alignment, fastai, fine-tuning, gan, generative-adversarial-network, image2image, pytorch, style-transfer, stylegan, stylegan3, torch, weight
- Language: Jupyter Notebook
- Homepage:
- Size: 32.3 MB
- Stars: 22
- Watchers: 1
- Forks: 2
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
README
# ArcaneAnimeGAN
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1sBnFG9XR0euphD1LLspZOD2hxfyBwooN?usp=sharing)
AnimeGAN2 trained on Arcane
Trying to follow bryandlee's animegan2-pytorch training methodology.
# Result
![result](doc/result0.2.png)
# Training Workflow
- Get video data
- Split the video into frames
- Align the frame images using face-alignment
- Filter out blurry images using the OpenCV Laplacian (see the first sketch after this list)
- Zip the image dataset into the format StyleGAN expects
- Fine-tune the FFHQ-pretrained StyleGAN on the created zip dataset
- Blend the fine-tuned StyleGAN weights with the pretrained StyleGAN weights (see the second sketch after this list)
- Create data pairs using the blended StyleGAN model and the pretrained model
- Train AnimeGAN on the paired data
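A minimal sketch of the frame-splitting and blur-filtering steps, using plain OpenCV. The sampling step and Laplacian-variance threshold below are illustrative values, not taken from this repo:
```python
import cv2
from pathlib import Path

def extract_sharp_frames(video_path, out_dir, blur_threshold=100.0, step=10):
    """Split a video into frames and keep only frames whose Laplacian
    variance exceeds blur_threshold (higher variance = sharper image).
    blur_threshold and step are illustrative defaults, not from this repo."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(str(video_path))
    idx = kept = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
                cv2.imwrite(str(out / f"frame_{idx:06d}.png"), frame)
                kept += 1
        idx += 1
    cap.release()
    return kept
```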
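The weight-blending step can be approximated as an interpolation between the pretrained and fine-tuned generator checkpoints. The stylegan3_blending / toonify references below typically blend per-resolution layers rather than using a single scalar, so treat this as a simplified illustration:
```python
import torch

def blend_state_dicts(pretrained_sd, finetuned_sd, alpha=0.5):
    """Linearly interpolate matching parameters of two generator checkpoints.
    alpha=0 keeps the pretrained weights, alpha=1 keeps the fine-tuned ones.
    A per-layer (resolution-dependent) alpha, as in layer-swapping approaches,
    would replace the single scalar used here."""
    blended = {}
    for name, w_pre in pretrained_sd.items():
        w_ft = finetuned_sd[name]
        if torch.is_floating_point(w_pre):
            blended[name] = (1.0 - alpha) * w_pre + alpha * w_ft
        else:
            # non-float entries (e.g. integer buffers) are copied as-is
            blended[name] = w_ft.clone()
    return blended
```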
# Change log
- 0.2
- use anime-face-detector
- add color correction to the data preprocessing
- add aug_transforms() as a batch transform
- use l1_loss(vgg(g(x)), vgg(y)) and mse_loss(g(x), y) instead of the VGG feature loss and Gram matrix loss (see the loss sketch after this change log)
- 0.1
- first release
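A hedged sketch of the 0.2 content loss described above: L1 between VGG features of g(x) and y, plus pixel-space MSE between g(x) and y. The vgg19_bn feature cut point and the omission of ImageNet normalization are assumptions for illustration, not taken from this repo:
```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen VGG feature extractor; the cut at the first 37 modules is an
# assumption. Newer torchvision versions use the weights= argument
# instead of pretrained=True. ImageNet normalization is omitted for brevity.
vgg = models.vgg19_bn(pretrained=True).features[:37].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def content_loss(fake, real):
    """l1_loss(vgg(g(x)), vgg(y)) + mse_loss(g(x), y), as in the 0.2 change log."""
    return F.l1_loss(vgg(fake), vgg(real)) + F.mse_loss(fake, real)
```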
# To Do
- Use AnimeGAN's vgg19 ([0, 255] input range) instead of vgg19_bn ([0, 1])
- Add a Canny edge step to the Gaussian blur (see the sketch after this list)
- Background segmentation
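The Canny-edge item likely refers to AnimeGAN-style edge smoothing, where style images are blurred only around detected edges so the discriminator learns to penalize sharp cartoon edges. A rough OpenCV sketch with illustrative thresholds, not taken from this repo:
```python
import cv2
import numpy as np

def edge_smooth(img_bgr, ksize=5, canny_low=100, canny_high=200):
    """Blur an image only in the neighborhood of its Canny edges.
    All thresholds and kernel sizes here are illustrative defaults."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_low, canny_high)
    # grow the edge mask so the blur covers a small region around each edge
    mask = cv2.dilate(edges, np.ones((ksize, ksize), np.uint8)) > 0
    blurred = cv2.GaussianBlur(img_bgr, (ksize, ksize), 0)
    out = img_bgr.copy()
    out[mask] = blurred[mask]
    return out
```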
# Required environment to run
```python
!conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia -y
!sudo apt install ffmpeg
!pip install face-alignment
!pip install --upgrade psutil
!pip install kornia
!pip install fastai==2.5.3
!pip install opencv-python
!git clone https://github.com/NVlabs/stylegan3.git
!pip install openmim
!mim install mmcv-full mmdet mmpose -y
!pip install anime-face-detector --no-dependencies
```
# Acknowledgement and References
- [AnimeGAN](https://github.com/TachibanaYoshino/AnimeGAN)
- [AnimeGANv2](https://github.com/TachibanaYoshino/AnimeGANv2)
- [animegan2-pytorch](https://github.com/bryandlee/animegan2-pytorch)
- [animegan2-pytorch-Face-Portrait-v1](https://github.com/bryandlee/animegan2-pytorch/issues/3)
- [pytorch-animeGAN](https://github.com/ptran1203/pytorch-animeGAN)
- [AnimeGANv2_pytorch](https://github.com/wan-h/AnimeGANv2_pytorch)
- [AnimeGAN_in_Pytorch](https://github.com/XuHangkun/AnimeGAN_in_Pytorch)
- [AnimeGAN-torch](https://github.com/MrVoid918/AnimeGAN-torch)
- [style_transfer_implementation](https://github.com/Snailpong/style_transfer_implementation)
- [Anime-Sketch-Coloring](https://github.com/pradeeplam/Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet)
- [CartoonGAN-Tensorflow](https://github.com/taki0112/CartoonGAN-Tensorflow)
- [cartoon-gan](https://github.com/FilipAndersson245/cartoon-gan)
- [pytorch-implementation-of-perceptual-losses](https://towardsdatascience.com/pytorch-implementation-of-perceptual-losses-for-real-time-style-transfer-8d608e2e9902)
- [Artistic-Style-Transfer](https://kyounju.tistory.com/3)
- [animegan2-pytorch-arcane](https://github.com/bryandlee/animegan2-pytorch/issues/17)
- [DeepStudio](https://github.com/bryandlee/DeepStudio)
- [ArcaneGAN](https://github.com/Sxela/ArcaneGAN)
- [stylegan3_blending](https://github.com/Sxela/stylegan3_blending)
- [toonify](https://github.com/justinpinkney/toonify)
- [BlendGAN](https://github.com/onion-liu/BlendGAN)
- [Cartoon-StyleGAN](https://github.com/happy-jihye/Cartoon-StyleGAN)
- [FFHQ-Alignment](https://github.com/happy-jihye/FFHQ-Alignment)
- [FFHQ-dataset](https://github.com/NVlabs/ffhq-dataset)
- [face-alignment](https://github.com/1adrianb/face-alignment)
- [stylegan3](https://github.com/NVlabs/stylegan3)
- [FreezeD](https://github.com/sangwoomo/FreezeD)
- [Image-Blur-Detection](https://github.com/priyabagaria/Image-Blur-Detection)
- [Classifying_image_blur_nonblur](https://github.com/pranavAL/Classifying_image_blur_nonblur)
- [arcane](https://www.netflix.com/kr/title/81435684)
- [JoJoGAN](https://github.com/mchong6/JoJoGAN)
- [anime-face-detector](https://github.com/hysts/anime-face-detector)
- [color correction](https://github.com/luftj/MaRE/blob/4284fe2b3307ca407e87e3b0dbdaa3c1ef646731/simple_cb.py)
- [White-box-Cartoonization](https://github.com/SystemErrorWang/White-box-Cartoonization)