Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/gsurma/style_transfer
CNN image style transfer 🎨.
- Host: GitHub
- URL: https://github.com/gsurma/style_transfer
- Owner: gsurma
- License: MIT
- Created: 2018-12-10T05:52:01.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2021-07-09T08:50:48.000Z (over 3 years ago)
- Last Synced: 2023-11-07T18:57:45.674Z (about 1 year ago)
- Topics: cnn, computer-vision, convolutional-neural-networks, deep-learning, jupyter-notebook, keras, machine-learning, neural-network, notebook, python, style-transfer
- Language: Jupyter Notebook
- Homepage: https://gsurma.github.io
- Size: 16.4 MB
- Stars: 273
- Watchers: 8
- Forks: 69
- Open Issues: 2
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
README
# Style Transfer
Style Transfer is a process in which we strive to modify the style of an image while preserving its content. Given an input image and a style image, we can compute an output image with the original content but a new style.
[Kaggle kernel](https://www.kaggle.com/greg115/style-transfer)
Check out the corresponding Medium article:
[Style Transfer - Styling Images with Convolutional Neural Networks](https://towardsdatascience.com/style-transfer-styling-images-with-convolutional-neural-networks-7d215b58f461)
# How does it work?
1. We take the input (content) image and the style image and resize them to equal shapes.
2. We load a pre-trained CNN (VGG16); a sketch of steps 1-3 follows this list.
3. Knowing that some layers respond to style (basic shapes, colors, etc.) and others to content (image-specific features), we can separate those layers and work on the content and style independently.
4. Then we frame our task as an optimization problem in which we minimize three losses (sketched after this list):
* **content loss** (distance between the input and output images - we strive to preserve the content)
* **style loss** (distance between the style and output images - we strive to apply a new style)
* **total variation loss** (regularization - spatial smoothness to denoise the output image)
5. Finally, we set up the gradients and optimize with the [L-BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS) algorithm (see the last sketch below).
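
A minimal Keras sketch of steps 1-3, assuming a 512x512 working size and the usual VGG16 content/style layer split; the image paths and layer names here are illustrative choices, not necessarily the ones used in the notebook:

```python
import numpy as np
from keras.applications import vgg16
from keras.preprocessing.image import load_img, img_to_array

TARGET_SIZE = (512, 512)  # step 1: both images get resized to the same shape

def preprocess(path):
    img = img_to_array(load_img(path, target_size=TARGET_SIZE))
    img = np.expand_dims(img, axis=0)   # add a batch dimension
    return vgg16.preprocess_input(img)  # RGB->BGR and ImageNet mean subtraction

content_image = preprocess("input.jpg")  # hypothetical paths
style_image = preprocess("style.jpg")

# Step 2: pre-trained VGG16 without the classifier head; the
# convolutional features are all we need.
model = vgg16.VGG16(weights="imagenet", include_top=False)

# Step 3: a deep layer carries image-specific content, while a spread of
# shallower layers carries style (basic shapes, colors, textures).
CONTENT_LAYER = "block5_conv2"
STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]
```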
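Step 4's three loss terms, sketched against the Keras backend in the style of the classic Keras neural-style example; the Gram-matrix normalization and the total-variation exponent are common choices and may differ from the notebook:

```python
from keras import backend as K

def content_loss(content_feat, output_feat):
    # Distance between content and output features -> preserve content.
    return K.sum(K.square(output_feat - content_feat))

def gram_matrix(feat):
    # Channel-wise feature correlations: flatten the spatial
    # dimensions, then take the inner product.
    flat = K.batch_flatten(K.permute_dimensions(feat, (2, 0, 1)))
    return K.dot(flat, K.transpose(flat))

def style_loss(style_feat, output_feat, height, width, channels):
    # Distance between Gram matrices -> apply the new style.
    s = gram_matrix(style_feat)
    o = gram_matrix(output_feat)
    return K.sum(K.square(s - o)) / (4.0 * (channels ** 2) * ((height * width) ** 2))

def total_variation_loss(x, height, width):
    # Penalize differences between neighboring pixels -> spatial smoothness.
    a = K.square(x[:, :height - 1, :width - 1, :] - x[:, 1:, :width - 1, :])
    b = K.square(x[:, :height - 1, :width - 1, :] - x[:, :height - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))
```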
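And a sketch of step 5: SciPy's `fmin_l_bfgs_b` expects a function that returns the loss together with the flattened gradients. Here `total_loss` is assumed to be a weighted sum of the losses above, expressed symbolically in terms of the generated image tensor `output_image`:

```python
from scipy.optimize import fmin_l_bfgs_b

# output_image is assumed to be a K.placeholder for the generated image;
# total_loss is assumed to be built from the loss functions above.
grads = K.gradients(total_loss, output_image)[0]
fetch_loss_and_grads = K.function([output_image], [total_loss, grads])

def loss_and_grads(x):
    loss, grad = fetch_loss_and_grads([x.reshape((1,) + TARGET_SIZE + (3,))])
    return float(loss), grad.flatten().astype("float64")

x = content_image.flatten()  # start the optimization from the content image
for i in range(10):          # each outer step = a bounded L-BFGS run
    x, min_loss, _ = fmin_l_bfgs_b(loss_and_grads, x, maxfun=20)
    print("Iteration %d, loss: %.2f" % (i + 1, min_loss))
```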
# Results

## Input
## Style
## Output
### 1 iteration
### 2 iterations
### 5 iterations
### 10 iterations
### 15 iterations
## Other examples
## Author
**Greg (Grzegorz) Surma**
[**PORTFOLIO**](https://gsurma.github.io)
[**GITHUB**](https://github.com/gsurma)
[**BLOG**](https://medium.com/@gsurma)