{"id":15828358,"url":"https://github.com/mlomb/onnx2code","last_synced_at":"2025-04-01T16:30:38.757Z","repository":{"id":61150831,"uuid":"537331549","full_name":"mlomb/onnx2code","owner":"mlomb","description":"Convert ONNX models to plain C++ code (without dependencies)","archived":false,"fork":false,"pushed_at":"2023-03-27T23:27:37.000Z","size":3500,"stargazers_count":17,"open_issues_count":0,"forks_count":1,"subscribers_count":5,"default_branch":"main","last_synced_at":"2024-10-06T10:41:00.542Z","etag":null,"topics":["assembly","cpp","inference","machine-learning","onnx","python"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mlomb.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-09-16T06:22:50.000Z","updated_at":"2024-08-06T19:13:02.000Z","dependencies_parsed_at":"2024-10-26T15:14:22.759Z","dependency_job_id":"d6dbd8fa-a3ea-4d76-a097-72f520c21e08","html_url":"https://github.com/mlomb/onnx2code","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mlomb%2Fonnx2code","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mlomb%2Fonnx2code/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mlomb%2Fonnx2code/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mlomb%2Fonnx2code/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mlomb","download_url":"h
ttps://codeload.github.com/mlomb/onnx2code/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246620224,"owners_count":20806722,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["assembly","cpp","inference","machine-learning","onnx","python"],"created_at":"2024-10-05T10:40:22.928Z","updated_at":"2025-04-01T16:30:38.327Z","avatar_url":"https://github.com/mlomb.png","language":"Python","readme":"# onnx2code\n\nGenerate plain C++ code for inference of ONNX models without dependencies\n\nThis project was made as an alternative to a final exam for the assignment \"Computer Organization II\". 
You can read the writeup in [docs/TP Final onnx2code.pdf](docs/TP%20Final%20onnx2code.pdf) (in Spanish).\n\n## Model support\n\nThe following models have been tested and work as expected.\n\n| Model | Size |\n|---|---|\n| [mnist](https://github.com/onnx/models/tree/main/vision/classification/mnist) | 26 KB |\n| [Super_Resolution](https://github.com/onnx/models/tree/main/vision/super_resolution/sub_pixel_cnn_2016) | 240 KB |\n| [squeezenet1.1](https://github.com/onnx/models/tree/main/vision/classification/squeezenet) | 9 MB |\n| [emotion_ferplus](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus) | 34 MB |\n| [inception-v2](https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/inception_v2) | 44 MB |\n| [resnet50-caffe2-v1](https://github.com/onnx/models/tree/main/vision/classification/resnet) | 98 MB |\n| [VGG 16 and VGG 16-bn](https://github.com/onnx/models/tree/main/vision/classification/vgg) | 527 MB |\n| [VGG 19 and VGG 19-bn](https://github.com/onnx/models/tree/main/vision/classification/vgg) | 548 MB |\n| [VGG 19-caffe2](https://github.com/onnx/models/tree/main/vision/classification/vgg) | 561 MB |\n\n* Minimum ONNX opset version: **7**\n* Quantized models are not supported\n\n## Operator support\n\nOnly `float` data type is supported.\n\n| Operator | Attribute support |\n|---|---|\n| [Add](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Add), [Div](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Div), [Mul](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Mul), [Sub](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Sub) | ✅ with broadcasting |\n| [Concat](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Concat) | ✅ with multiple inputs\u003cbr/\u003e✅ axis |\n| [Conv](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv) | ✅ bias\u003cbr/\u003e✅ stride\u003cbr/\u003e✅ padding (and `auto_pad`)\u003cbr/\u003e❌ dilations\u003cbr/\u003e❌ depthwise 
(group != 1) |\n| [Sum](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Sum) | ✅ with multiple inputs\u003cbr/\u003e❌ with broadcasting |\n| [Relu](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Relu), [Tanh](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Tanh), [Sigmoid](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Sigmoid), [Clip](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Clip) | ✅ |\n| [Gemm](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Gemm) | ✅ with bias\u003cbr/\u003e❌ transpose A\u003cbr/\u003e✅ transpose B\u003cbr/\u003e❌ alpha != 1\u003cbr/\u003e❌ beta != 1 |\n| [Identity](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity) | ✅ |\n| [MaxPool](https://github.com/onnx/onnx/blob/main/docs/Operators.md#MaxPool), [AveragePool](https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool) | ✅ stride\u003cbr/\u003e✅ padding (and `auto_pad`)\u003cbr/\u003e❌ dilations\u003cbr/\u003e❌ storage_order != 0\u003cbr/\u003e❌ count_include_pad != 0 |\n| [Softmax](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Softmax) | ✅ axis |\n| [Transpose](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Transpose) | ✅ perm |\n\n## Setting up with Docker\n\nWe provide a ready-to-use [Docker image](https://hub.docker.com/r/mlomb/onnx2code):\n\n```sh\ndocker run --rm -it -v $PWD/mnist.onnx:/app/input.onnx:ro -v $PWD/output:/app/output:rw mlomb/onnx2code:latest --variations=im2col,loop-tiling --checks=3\n```\n\nThe command above will generate C++ code for the `mnist.onnx` model in the `output` folder.\n\n## Setting up locally\n\n### Prerequisites\n\n* gcc (required if checking models)\n* Python 3.10\n* [pipenv](https://pypi.org/project/pipenv/)\n\nClone and install dependencies with `pipenv install`.\n\n### Run\n\nTo generate code from an ONNX model, run the following command inside a pipenv shell:\n\n```sh\npython -m onnx2code 
--variations=im2col,loop-tiling mnist.onnx output_folder --checks=3\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmlomb%2Fonnx2code","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmlomb%2Fonnx2code","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmlomb%2Fonnx2code/lists"}