Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lengstrom/fast-style-transfer
TensorFlow CNN for fast style transfer ⚡🖥🎨🖼
- Host: GitHub
- URL: https://github.com/lengstrom/fast-style-transfer
- Owner: lengstrom
- Created: 2016-07-21T02:59:11.000Z (over 8 years ago)
- Default Branch: master
- Last Pushed: 2023-07-16T02:36:30.000Z (over 1 year ago)
- Last Synced: 2024-11-27T14:03:59.559Z (16 days ago)
- Topics: deep-learning, neural-networks, neural-style, style-transfer
- Language: Python
- Homepage:
- Size: 10.8 MB
- Stars: 10,931
- Watchers: 320
- Forks: 2,602
- Open Issues: 111
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- Citation: CITATION.cff
Awesome Lists containing this project
- awesome-neural-art - fast-style-transfer - TensorFlow CNN for fast style transfer with larger scale style features in transformations. (Style Transfer)
- StarryDivineSky - lengstrom/fast-style-transfer
README
## Fast Style Transfer in [TensorFlow](https://github.com/tensorflow/tensorflow)
Add styles from famous paintings to any photo in a fraction of a second! [You can even style videos!](#video-stylization)
It takes 100ms on a 2015 Titan X to style the MIT Stata Center (1024×680) like Udnie, by Francis Picabia.

Our implementation is based on a combination of Gatys' [A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576), Johnson's [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](http://cs.stanford.edu/people/jcjohns/eccv16/), and Ulyanov's [Instance Normalization](https://arxiv.org/abs/1607.08022).
### Sponsorship
Please consider sponsoring my work on this project!

### License
Copyright (c) 2016 Logan Engstrom. Contact me for commercial use (or rather any use that is not academic research) (email: engstrom at my university's domain dot edu). Free for research use, as long as proper attribution is given and this copyright notice is retained.

## Video Stylization
Here we transformed every frame in a video, then combined the results. [Click to go to the full demo on YouTube!](https://www.youtube.com/watch?v=xVJwwWQlQ1o) The style here is Udnie, as above. See how to generate these videos [here](#stylizing-video)!
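For intuition, here is a minimal sketch of the per-frame idea using moviepy (the package installed in the setup section below). `stylize_frame` is a hypothetical stand-in for the trained network's forward pass; the repo's `transform_video.py` handles all of this for you:

```
from moviepy.editor import VideoFileClip

def stylize_frame(frame):
    # `frame` is an HxWx3 uint8 numpy array; a real implementation would
    # run it through the trained style network and return the result.
    return frame

clip = VideoFileClip("input.mp4")
styled = clip.fl_image(stylize_frame)  # apply the function to every frame
styled.write_videofile("styled.mp4")
```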
## Image Stylization
We added styles from various paintings to a photo of Chicago. Click on a thumbnail to see the full stylized image.
## Implementation Details
Our implementation uses TensorFlow to train a fast style transfer network. We use roughly the same transformation network as described in Johnson, except that batch normalization is replaced with Ulyanov's instance normalization, and the scaling/offset of the output `tanh` layer is slightly different. We use a loss function close to the one described in Gatys, using VGG19 instead of VGG16 and typically using "shallower" layers than in Johnson's implementation (e.g. we use `relu1_1` rather than `relu1_2`). Empirically, this results in larger scale style features in transformations.
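To make the two differences above concrete, here is a minimal, illustrative sketch of instance normalization and the Gram-matrix statistic used by Gatys-style losses (TensorFlow 2.x syntax; this is not the repository's actual code):

```
import tensorflow as tf

def instance_norm(x, epsilon=1e-5):
    # Normalize each feature map of each image independently (Ulyanov et al.),
    # instead of across the whole batch as in batch normalization.
    # x has shape [batch, height, width, channels].
    mean, var = tf.nn.moments(x, axes=[1, 2], keepdims=True)
    return (x - mean) / tf.sqrt(var + epsilon)

def gram_matrix(features):
    # Channel-to-channel correlations of a feature map; the style loss
    # compares Gram matrices of the stylized image and the style image
    # at selected VGG19 layers (e.g. relu1_1).
    b, h, w, c = features.shape
    flat = tf.reshape(features, [b, h * w, c])
    return tf.matmul(flat, flat, transpose_a=True) / tf.cast(h * w * c, tf.float32)
```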
## Virtual Environment Setup (Anaconda) - Windows/Linux
Tested on:

| Spec             | Value              |
|------------------|--------------------|
| Operating System | Windows 10 Home    |
| GPU              | Nvidia RTX 2080 Ti |
| CUDA Version     | 11.0               |
| Driver Version   | 445.75             |
### Step 1: Install Anaconda
https://docs.anaconda.com/anaconda/install/
### Step 2: Build a virtual environment
Run the following commands in sequence in Anaconda Prompt:
```
conda create -n tf-gpu tensorflow-gpu=2.1.0
conda activate tf-gpu
conda install jupyterlab
jupyter lab
```
Run the following command in the notebook, or install the package with `conda`:
```
!pip install moviepy==1.0.2
```
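To confirm the environment actually sees your GPU, you can run a quick sanity check in the notebook (a generic TensorFlow 2.x check, not part of the repo):

```
import tensorflow as tf
print(tf.__version__)                          # expect 2.1.0
print(tf.config.list_physical_devices('GPU'))  # expect your GPU to be listed
```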
Then follow the commands below to use fast-style-transfer.
## Documentation
### Training Style Transfer Networks
Use `style.py` to train a new style transfer network. Run `python style.py` to view all the possible parameters. Training takes 4-6 hours on a Maxwell Titan X. [More detailed documentation here](docs.md#stylepy). **Before you run this, you should run `setup.sh`**. Example usage:

```
python style.py --style path/to/style/img.jpg \
  --checkpoint-dir checkpoint/path \
  --test path/to/test/img.jpg \
  --test-dir path/to/test/dir \
  --content-weight 1.5e1 \
  --checkpoint-iterations 1000 \
  --batch-size 20
```

### Evaluating Style Transfer Networks
Use `evaluate.py` to evaluate a style transfer network. Run `python evaluate.py` to view all the possible parameters. Evaluation takes 100 ms per frame (when batch size is 1) on a Maxwell Titan X, and several seconds per frame on a CPU. [More detailed documentation here](docs.md#evaluatepy). **Models for evaluation are [located here](https://drive.google.com/drive/folders/0B9jhaT37ydSyRk9UX0wwX3BpMzQ?resourcekey=0-Z9LcNHC-BTB4feKwm4loXw&usp=sharing)**. Example usage:

```
python evaluate.py --checkpoint path/to/style/model.ckpt \
  --in-path dir/of/test/imgs/ \
  --out-path dir/for/results/
```

### Stylizing Video
Use `transform_video.py` to transfer style into a video. Run `python transform_video.py` to view all the possible parameters. Requires `ffmpeg`. [More detailed documentation here](docs.md#transform_videopy). Example usage:

```
python transform_video.py --in-path path/to/input/vid.mp4 \
  --checkpoint path/to/style/model.ckpt \
  --out-path out/video.mp4 \
  --device /gpu:0 \
  --batch-size 4
```

### Requirements
You will need the following to run the above:
- TensorFlow 0.11.0
- Python 2.7.9, Pillow 3.4.2, scipy 0.18.1, numpy 1.11.2
- If you want to train (and don't want to wait for 4 months):
- A decent GPU
- All the required NVIDIA software to run TF on a GPU (cuda, etc)
- ffmpeg 3.1.3 if you want to stylize video

### Citation
```
@misc{engstrom2016faststyletransfer,
author = {Logan Engstrom},
title = {Fast Style Transfer},
year = {2016},
howpublished = {\url{https://github.com/lengstrom/fast-style-transfer/}},
note = {commit xxxxxxx}
}
```

### Attributions/Thanks
- This project could not have happened without the advice (and GPU access) given by [Anish Athalye](http://www.anishathalye.com/).
- The project also borrowed some code from Anish's [Neural Style](https://github.com/anishathalye/neural-style/).
- Some readme/docs formatting was borrowed from Justin Johnson's [Fast Neural Style](https://github.com/jcjohnson/fast-neural-style)
- The image of the Stata Center at the very beginning of the README was taken by [Juan Paulo](https://juanpaulo.me/)

### Related Work
- Michael Ramos ported this network [to use CoreML on iOS](https://medium.com/@rambossa/diy-prisma-fast-style-transfer-app-with-coreml-and-tensorflow-817c3b90dacd)