Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mlomb/onnx2code
Convert ONNX models to plain C++ code (without dependencies)
- Host: GitHub
- URL: https://github.com/mlomb/onnx2code
- Owner: mlomb
- Created: 2022-09-16T06:22:50.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-03-27T23:27:37.000Z (almost 2 years ago)
- Last Synced: 2024-10-06T10:41:00.542Z (4 months ago)
- Topics: assembly, cpp, inference, machine-learning, onnx, python
- Language: Python
- Homepage:
- Size: 3.34 MB
- Stars: 17
- Watchers: 5
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# onnx2code
Generate plain C++ code for inference of ONNX models without dependencies
This project was made as an alternative to a final exam for the course "Computer Organization II". You can read the writeup in [docs/TP Final onnx2code.pdf](docs/TP%20Final%20onnx2code.pdf) (in Spanish).
## Model support
The following models have been tested and work as expected.
| Model | Size |
|---|---|
| [mnist](https://github.com/onnx/models/tree/main/vision/classification/mnist) | 26 KB |
| [Super_Resolution](https://github.com/onnx/models/tree/main/vision/super_resolution/sub_pixel_cnn_2016) | 240 KB |
| [squeezenet1.1](https://github.com/onnx/models/tree/main/vision/classification/squeezenet) | 9 MB |
| [emotion_ferplus](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus) | 34 MB |
| [inception-v2](https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/inception_v2) | 44 MB |
| [resnet50-caffe2-v1](https://github.com/onnx/models/tree/main/vision/classification/resnet) | 98 MB |
| [VGG 16 and VGG 16-bn](https://github.com/onnx/models/tree/main/vision/classification/vgg) | 527 MB |
| [VGG 19 and VGG 19-bn](https://github.com/onnx/models/tree/main/vision/classification/vgg) | 548 MB |
| [VGG 19-caffe2](https://github.com/onnx/models/tree/main/vision/classification/vgg) | 561 MB |

* Minimum ONNX opset version: **7**
* Quantized models are not supported

## Operator support
Only `float` data type is supported.
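The "with broadcasting" entries below follow ONNX multidirectional (NumPy-style) broadcasting, where shapes are aligned from the trailing dimension and size-1 dimensions stretch. A minimal, dependency-free sketch of the shape rule (illustrative only, not the project's code):

```python
def broadcast_shape(a, b):
    """Compute the NumPy-style broadcast shape of two shapes,
    aligning trailing dimensions (ONNX multidirectional broadcasting)."""
    out = []
    # Walk both shapes right-to-left, padding the shorter one with 1s
    for i in range(1, max(len(a), len(b)) + 1):
        da = a[-i] if i <= len(a) else 1
        db = b[-i] if i <= len(b) else 1
        if da != db and 1 not in (da, db):
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        out.append(max(da, db))
    return tuple(reversed(out))

# e.g. adding a per-channel bias to an NCHW tensor:
print(broadcast_shape((1, 3, 224, 224), (3, 1, 1)))  # (1, 3, 224, 224)
```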
| Operator | Attribute support |
|---|---|
| [Add](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Add), [Div](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Div), [Mul](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Mul), [Sub](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Sub) | ✅ with broadcasting |
| [Concat](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Concat) | ✅ with multiple inputs<br>✅ axis |
| [Conv](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv) | ✅ bias<br>✅ stride<br>✅ padding (and `auto_pad`)<br>❌ dilations<br>❌ depthwise (group != 1) |
| [Sum](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Sum) | ✅ with multiple inputs<br>❌ with broadcasting |
| [Relu](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Relu), [Tanh](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Tanh), [Sigmoid](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Sigmoid), [Clip](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Clip) | ✅ |
| [Gemm](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Gemm) | ✅ with bias<br>❌ transpose A<br>✅ transpose B<br>❌ alpha != 1<br>❌ beta != 1 |
| [Identity](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity) | ✅ |
| [MaxPool](https://github.com/onnx/onnx/blob/main/docs/Operators.md#MaxPool), [AveragePool](https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool) | ✅ stride<br>✅ padding (and `auto_pad`)<br>❌ dilations<br>❌ storage_order != 0<br>❌ count_include_pad != 0 |
| [Softmax](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Softmax) | ✅ axis |
| [Transpose](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Transpose) | ✅ perm |

## Setting up with Docker
We provide a ready-to-use [Docker image](https://hub.docker.com/r/mlomb/onnx2code):
```sh
docker run --rm -it -v $PWD/mnist.onnx:/app/input.onnx:ro -v $PWD/output:/app/output:rw mlomb/onnx2code:latest --variations=im2col,loop-tiling --checks=3
```

The command above will generate C++ code for the `mnist.onnx` model in the `output` folder.
## Setting up locally
### Prerequisites
* gcc (required if checking models)
* Python 3.10
* [pipenv](https://pypi.org/project/pipenv/)

Clone and install dependencies with `pipenv install`.
### Run
To generate code from an ONNX model, run the following command inside a pipenv shell:
```sh
python -m onnx2code --variations=im2col,loop-tiling mnist.onnx output_folder --checks=3
```
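The `im2col` variation named in the command refers to the standard technique of lowering convolution to a matrix multiplication: each input patch is flattened into a column, so the whole convolution becomes one matrix product. A minimal NumPy sketch of the idea (illustrative only, single channel, stride 1, no padding; this is not the project's generated code):

```python
import numpy as np

def im2col(x, kh, kw):
    """Lower a single-channel 2D input into a matrix whose columns are
    the flattened kh*kw patches, so convolution becomes a matrix product."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, oh * ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

x = np.arange(16, dtype=np.float32).reshape(4, 4)
k = np.ones((3, 3), dtype=np.float32)            # 3x3 box filter
out = (k.ravel() @ im2col(x, 3, 3)).reshape(2, 2)  # one GEMV instead of nested loops
```

Reformulating convolution this way lets the generated code reuse a single tuned GEMM/GEMV kernel (which `loop-tiling` then optimizes for cache locality) instead of emitting bespoke loop nests per layer.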