Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/sshane/konverter
Convert simple Keras models to pure Python + NumPy
keras keras-tensorflow model-converter neural-networks python tensorflow
- Host: GitHub
- URL: https://github.com/sshane/konverter
- Owner: sshane
- License: mit
- Created: 2020-04-19T03:44:27.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2021-07-14T09:02:03.000Z (over 3 years ago)
- Last Synced: 2024-11-23T16:36:03.251Z (about 1 month ago)
- Topics: keras, keras-tensorflow, model-converter, neural-networks, python, tensorflow
- Language: Python
- Homepage: https://sshane.github.io/Konverter/
- Size: 7.52 MB
- Stars: 41
- Watchers: 2
- Forks: 9
- Open Issues: 6
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Konverter ![Konverter Tests](https://github.com/ShaneSmiskol/Konverter/workflows/Konverter%20Tests/badge.svg)
### Convert your Keras models into pure Python + NumPy.

The goal of this tool is to provide a quick and easy way to execute Keras models on machines or setups where utilizing TensorFlow/Keras is impossible. Specifically, in my case, to replace SNPE (Snapdragon Neural Processing Engine) for inference on phones with Python.
## Supported Keras Model Attributes
- Models:
  - Sequential
- Layers:
  - Dense
  - Dropout
    - Will be ignored during inference (SNPE 1.19 does NOT support dropout with Keras!)
  - SimpleRNN
    - Batch predictions do not currently work correctly.
  - GRU
    - **Important:** The current GRU support is based on [`GRU v3`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRU) in tf.keras 2.1.0. It will not work correctly with older versions of TensorFlow if not using [`implementation=2` (example)](https://github.com/ShaneSmiskol/Konverter/blob/master/tests/build_test_models.py#L47).
    - Batch prediction untested
  - BatchNormalization
    - Works with all supported layers
- Activations:
  - ReLU
  - LeakyReLU (supports custom alphas)
  - Sigmoid
  - Softmax
  - Tanh
  - Linear/None
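If you want a quick sanity check of what "supported" looks like in practice, a tiny tf.keras model along these lines should Konvert cleanly (a purely illustrative sketch; the layer sizes, training data, and save path are made up, not anything Konverter requires):

```python
# Illustrative only: a small Sequential model built from Konverter-supported
# pieces (Dense, Dropout, ReLU, softmax). Shapes, data, and the save path
# are placeholders.
import numpy as np
from tensorflow import keras  # Konverter currently expects tf.keras models

model = keras.models.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(3,)),
    keras.layers.Dropout(0.2),  # ignored at inference time after Konversion
    keras.layers.Dense(4, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

x = np.random.rand(64, 3).astype(np.float32)
y = np.random.rand(64, 4).astype(np.float32)
model.fit(x, y, epochs=1, verbose=0)

model.save('examples/test_model.h5')  # an .h5 file the CLI below can Konvert
```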
#### Roadmap
The project's to-do list can be [found here](https://github.com/ShaneSmiskol/Konverter/projects/1).

## Features
- Super quick conversion of your models. Takes less than a second.
- Usually reduces the size of Keras models by about 69.37%.
- In some cases, prediction is quicker than Keras or SNPE (dense models).
  - RNNs: Since we lose the GPU using NumPy, predictions may be slower
- Stores the weights and biases of your model in a separate compressed NumPy file.

## Benchmarks
Benchmarks can be found in [BENCHMARKS.md](BENCHMARKS.md).

## Installation & Usage
### Install Konverter using pip:
`pip install keras-konverter`

### Konverting using the CLI:
`konverter examples/test_model.h5 examples/test_model.py` (py suffix is optional)

Type `konverter` to get all possible arguments and flags!
- Arguments:
  - `input_model`: Either the location of your tf.keras `.h5` model, or a preloaded Sequential model if using with Python. This is required.
  - `output_file`: Optional file path for your output model, along with the weights file. Default is the same name in the same directory.
- Flags:
  - `--indent, -i`: How many spaces to use for indentation, default is 2
  - `--silent, -s`: Whether you want Konverter to silently Konvert
  - `--no-watermark, -nw`: Removes the watermark prepended to the output model file
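For example, reusing the paths from the CLI example above, a Konversion with a wider indent and no watermark could look like the line below (the flag/value syntax shown is an assumption; type `konverter` to see the exact accepted forms):

```
konverter examples/test_model.h5 examples/test_model.py --indent 4 --no-watermark
```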
### Konverting programmatically:

All parameters with defaults: `konverter.konvert(input_model, output_file=None, indent=2, silent=False, no_watermark=False, tf_verbose=False)`
```python
>>> import konverter
>>> konverter.konvert('examples/test_model.h5', output_file='examples/test_model')
```

*Note: The model file will be saved as `f'{output_file}.py'` and the weights will be saved as `f'{output_file}_weights.npz'` in the same directory.* ***Make sure to change the path inside the model wrapper if you move the files after Konversion.***
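Since `input_model` can also be a preloaded Sequential model (see the CLI arguments above), you can skip the intermediate `.h5` file entirely; here is a minimal sketch, assuming the illustrative model and paths from earlier:

```python
# Sketch: Konverting an in-memory tf.keras Sequential model directly,
# using the documented konverter.konvert() defaults shown above.
# The load path and output path are placeholders from the earlier example.
import konverter
from tensorflow import keras

model = keras.models.load_model('examples/test_model.h5')  # or any Sequential you just built
konverter.konvert(model, output_file='examples/test_model', silent=True)
```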
---
That's it! If your model is supported (check [Supported Keras Model Attributes](#Supported-Keras-Model-Attributes)), then your newly converted Konverter model should be ready to go.

To predict: Import your model wrapper and run the `predict()` function. **Always double check that the outputs closely match your Keras model's.** Automatic verification will come soon. **For the integrity of the predictions, always make sure your input is a `np.float32` array.**
```python
import numpy as np
from examples.test_model import predict
predict([np.random.rand(3).astype(np.float32)])
```

[See limitations and issues.](#Current-Limitations-and-Issues)
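Until automatic verification lands, one way to do that double check is to run both models on the same `np.float32` input and compare the outputs; a rough sketch, assuming the example paths used above:

```python
# Rough verification sketch: compare the original Keras model against the
# Konverted wrapper on identical float32 inputs. Paths and the tolerance
# are illustrative, not part of Konverter.
import numpy as np
from tensorflow import keras
from examples.test_model import predict  # the Konverted model wrapper

keras_model = keras.models.load_model('examples/test_model.h5')
sample = np.random.rand(3).astype(np.float32)

keras_out = np.asarray(keras_model.predict(sample[None])).flatten()
konverted_out = np.asarray(predict([sample])).flatten()

# Note: as described in the limitations section, input dimensionality can
# change the output for softmax models, so verify with the exact call
# signature you plan to deploy.
print(keras_out, konverted_out, np.allclose(keras_out, konverted_out, atol=1e-4))
```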
## Demo
## Dependencies
Thanks to [@apiad](https://github.com/apiad), you can now use [Poetry](https://github.com/python-poetry/poetry) to install all the needed dependencies for this tool! However, the requirements are a pretty short list:
- It seems most versions of TensorFlow that include Keras work perfectly fine. Tested from 1.14 to 2.2 using Actions and no issues have occurred. **(Make sure you use implementation 2/v3 with GRU layers if not on TF 2.x)**
- **Important**: You must create your models with tf.keras currently (not keras)
- Python >= 3.6 (for the glorious f-strings!)

To install all needed dependencies, simply `cd` into the base directory of Konverter, and run:
```
poetry install --no-dev
```

If you would like to use this version of Konverter (not from pip), then you may need to also run `poetry shell` afterwards to enter Poetry's virtualenv environment. **If you go down this path, make sure to remove `--no-dev` so TensorFlow installs in the venv!**
## Current Limitations and Issues
- Dimensionality of input data:

When working with models using `softmax`, the dimensionality of the input data matters. For example, predicting on the same data with different input dimensionality sometimes results in different outputs:
```python
>>> model.predict([[1, 3, 5]]) # keras model, correct output
array([[14.792273, 15.59787 , 15.543163]])
>>> predict([[1, 3, 5]]) # Konverted model, wrong output
array([[11.97839948, 18.09931636, 15.48014805]])
>>> predict([1, 3, 5]) # And correct output
array([14.79227209, 15.59786987, 15.54316282])
```

If trying a batch prediction with classes and `softmax`, the model fails completely:
```python
>>> predict([[0.5], [0.5]])
array([[0.5, 0.5, 0.5, 0.5], [0.5, 0.5, 0.5, 0.5]])
```

Always double check that predictions are working correctly before deploying the model.
- Batch prediction with SimpleRNN (and possibly all RNN) layers

Currently, the converted model has no way of determining whether you're feeding a single prediction or a batch of predictions, and it will fail to give the correct output in certain cases (more likely with recurrent layers and softmax dense output layers). Support will be added soon.
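Until then, a possible workaround is to fall back to one sample at a time; a hedged sketch (the `predict_batch` helper is made up here, and it assumes single-sample predictions are correct for your model):

```python
# Hypothetical helper: run the Konverted wrapper one sample at a time and
# stack the results, side-stepping the batch-prediction limitation above.
import numpy as np
from examples.test_model import predict  # the Konverted model wrapper

def predict_batch(batch):
    outputs = [np.asarray(predict([np.asarray(sample, dtype=np.float32)])).squeeze()
               for sample in batch]
    return np.stack(outputs)
```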