https://github.com/emilwallner/Screenshot-to-code
A neural network that transforms a design mock-up into a static website.
- Host: GitHub
- URL: https://github.com/emilwallner/Screenshot-to-code
- Owner: emilwallner
- License: other
- Created: 2017-10-16T11:41:48.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2024-08-16T22:05:07.000Z (8 months ago)
- Last Synced: 2024-11-21T08:04:56.736Z (5 months ago)
- Topics: cnn, cnn-keras, deep-learning, encoder-decoder, floydhub, jupyter, jupyter-notebook, keras, lstm, machine-learning, seq2seq
- Language: HTML
- Size: 49.3 MB
- Stars: 16,420
- Watchers: 537
- Forks: 1,557
- Open Issues: 21
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome - emilwallner/Screenshot-to-code - A neural network that transforms a design mock-up into a static website. (HTML)
- awesome-github-star - Screenshot-to-code - A neural network that transforms a design mock-up into a static website. | emilwallner | 14121 | (HTML)
- awesome-keras - Screenshot-to-code - A neural network that transforms a design mock-up into a static website. (Examples/Notebooks)
- awesome-list - Screenshot-to-code - A neural network that transforms a design mock-up into a static website. (Other Machine Learning Applications / Others)
- awesome-generative-models - Screenshot to Code
- awesome-lesscode - Screenshot-to-code - A neural network that transforms a design mock-up into a static website (Curated LessCode Projects)
- StarryDivineSky - emilwallner/Screenshot-to-code
- AiTreasureBox - emilwallner/Screenshot-to-code - A neural network that transforms a design mock-up into a static website. (Repos)
README
---
**A detailed tutorial covering the code in this repository:** [Turning design mockups into code with deep learning](https://emilwallner.medium.com/how-you-can-train-an-ai-to-convert-your-design-mockups-into-html-and-css-cc7afd82fed4).
**Plug:** Check out my 60-page guide, [No ML Degree](https://www.emilwallner.com/p/no-ml-degree), on how to land a machine learning job without a degree.
The neural network is built in three iterations: a Hello World version, then a version with the main neural network layers, and finally a version trained to generalize.
The models are based on Tony Beltramelli's [pix2code](https://github.com/tonybeltramelli/pix2code) and inspired by Airbnb's [sketching interfaces](https://airbnb.design/sketching-interfaces/) and Harvard's [im2markup](https://github.com/harvardnlp/im2markup).
**Note:** only the Bootstrap version can generalize to new design mock-ups. It uses 16 domain-specific tokens that are translated into HTML/CSS, and it achieves 97% accuracy. The best model uses a GRU instead of an LSTM, and this version can be trained on a few GPUs. The raw HTML version has the potential to generalize, but it is still unproven and requires a significant number of GPUs to train. The current model is also trained on a small, homogeneous dataset, so it is hard to tell how well it handles more complex layouts.
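To make the architecture concrete, here is a minimal Keras sketch in the spirit of the Bootstrap model: a CNN encodes the screenshot, a GRU encodes the markup generated so far, and a second GRU predicts the next token. All layer sizes, the vocabulary size, and the variable names are illustrative assumptions rather than the repository's exact configuration.

``` python
# A minimal sketch of the image-to-markup encoder-decoder, assuming
# TensorFlow/Keras. Layer sizes, vocab_size, and max_len are
# illustrative; the repository's actual architecture differs in detail.
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Dense, RepeatVector, Embedding, GRU,
                                     concatenate)
from tensorflow.keras.models import Model

vocab_size, max_len = 20, 48  # ~16 domain tokens plus special tokens

# CNN encoder: compress the screenshot into a single feature vector
image_in = Input(shape=(256, 256, 3))
x = Conv2D(16, 3, padding='same', activation='relu')(image_in)
x = MaxPooling2D()(x)
x = Conv2D(32, 3, padding='same', activation='relu')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
image_features = RepeatVector(max_len)(x)  # one copy per decoder step

# Language model: encode the markup tokens generated so far
tokens_in = Input(shape=(max_len,))
lm = Embedding(vocab_size, 50)(tokens_in)
lm = GRU(128, return_sequences=True)(lm)

# Decoder: combine image and token features, predict the next token
merged = concatenate([image_features, lm])
decoder = GRU(256, return_sequences=False)(merged)
next_token = Dense(vocab_size, activation='softmax')(decoder)

model = Model(inputs=[image_in, tokens_in], outputs=next_token)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```

In this setup, each training example pairs the image and a prefix of the token sequence with the next token as the target.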
Dataset: https://github.com/tonybeltramelli/pix2code/tree/master/datasets
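In the dataset, each screenshot is paired with a `.gui` file written in a small domain-specific language, and the compiler under `Bootstrap/compiler` (from pix2code) expands those tokens into HTML/CSS. The sketch below illustrates the idea with a made-up token-to-template mapping and recursive expansion of `{ ... }` blocks; the actual token set and templates ship with the compiler.

``` python
# A hedged sketch of the token-to-HTML idea: each DSL token maps to an
# HTML template, and brace-delimited children are expanded recursively.
# The token names and templates here are illustrative, not pix2code's
# actual mapping.
TOKEN_TO_HTML = {
    'header':    '<header class="header">{}</header>',
    'row':       '<div class="row">{}</div>',
    'single':    '<div class="col-lg-12">{}</div>',
    'btn-green': '<a class="btn btn-success" href="#">Button</a>',
    'text':      '<p>Lorem ipsum dolor sit amet.</p>',
}

def compile_tokens(tokens, pos=0):
    """Expand a flat token list such as ['header', '{', 'btn-green', '}']."""
    html = []
    while pos < len(tokens):
        tok = tokens[pos]
        if tok == '}':                      # end of the current block
            return ''.join(html), pos + 1
        template = TOKEN_TO_HTML.get(tok, '')
        if pos + 1 < len(tokens) and tokens[pos + 1] == '{':
            children, pos = compile_tokens(tokens, pos + 2)
            html.append(template.format(children))
        else:
            html.append(template)
            pos += 1
    return ''.join(html), pos

body, _ = compile_tokens('header { btn-green } row { single { text } }'.split())
print(body)
```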
A quick overview of the process:
### 1) Give a design image to the trained neural network

### 2) The neural network converts the image into HTML markup
### 3) Rendered output

## Installation
### FloydHub
[](https://floydhub.com/run?template=https://github.com/floydhub/pix2code-template)
Click this button to open a [Workspace](https://blog.floydhub.com/workspaces/) on [FloydHub](https://www.floydhub.com/?utm_medium=readme&utm_source=pix2code&utm_campaign=aug_2018) where you will find the same environment and dataset used for the *Bootstrap version*. You can also find the trained models for testing.
### Local
``` bash
pip install keras tensorflow pillow h5py jupyter
```
``` bash
git clone https://github.com/emilwallner/Screenshot-to-code.git
cd Screenshot-to-code/
jupyter notebook
```
Open the desired notebook (files ending in `.ipynb`). To run the model, go to the menu and click Cell > Run All. The final version, the Bootstrap version, comes prepared with a small dataset so you can test-run the model. If you want to try it with all the data, download the dataset here: https://www.floydhub.com/emilwallner/datasets/imagetocode, and specify the correct `dir_name`.
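For orientation, loading the Bootstrap data boils down to pairing each screenshot with its markup: in the pix2code-style datasets, every `.png` has a matching `.gui` file containing the DSL tokens. A hedged sketch, with illustrative helper names and image size:

``` python
# A hedged sketch of loading training pairs from a pix2code-style
# dataset directory: each screenshot (.png) has a matching .gui file
# containing the DSL tokens. Names and image size are illustrative.
import os
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def load_dataset(dir_name):
    images, markup = [], []
    for fname in sorted(os.listdir(dir_name)):
        if not fname.endswith('.png'):
            continue
        img = load_img(os.path.join(dir_name, fname), target_size=(256, 256))
        images.append(img_to_array(img) / 255.0)       # normalize pixels
        with open(os.path.join(dir_name, fname[:-4] + '.gui')) as f:
            tokens = f.read().replace('\n', ' ').replace(',', ' ')
            markup.append('<START> ' + tokens + ' <END>')
    return images, markup
```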
## Folder structure
``` bash
| |-Bootstrap #The Bootstrap version
| | |-compiler #A compiler to turn the tokens to HTML/CSS (by pix2code)
| | |-resources
| | | |-eval_light #10 test images and markup
| |-Hello_world #The Hello World version
| |-HTML #The HTML version
| | |-Resources_for_index_file #CSS,images and scripts to test index.html file
| | |-html #HTML files to train it on
| | |-images #Screenshots for training
|-readme_images #Images for the readme page
```

## Hello World
## HTML
## Bootstrap
## Model weights
- [Bootstrap](https://www.floydhub.com/emilwallner/datasets/imagetocode) (The pre-trained model uses GRUs instead of LSTMs)
- [HTML](https://www.floydhub.com/emilwallner/datasets/html_models)

## Acknowledgments
- Thanks to IBM for donating computing power through their PowerAI platform
- The code is largely influenced by Tony Beltramelli's pix2code paper. [Code](https://github.com/tonybeltramelli/pix2code) [Paper](https://arxiv.org/abs/1705.07962)
- The structure and some of the functions are from Jason Brownlee's [excellent tutorial](https://machinelearningmastery.com/develop-a-caption-generation-model-in-keras/)