https://github.com/tum-ei-eda/tflm-offline-interpreter
This project runs the TFLM interpreter off-target to produce static code that runs inference on the processed model. This avoids interpretation and code-size overheads on targets that do not need to handle different models dynamically.
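To make the approach concrete, below is a minimal sketch of what such generated static inference code could look like. All names (`RunInference`, `g_arena`, the stand-in `FullyConnected` kernel, the arena offsets) are hypothetical illustrations of the technique, not the project's actual generated API: instead of parsing a `.tflite` flatbuffer and dispatching operators at runtime, an offline interpreter can emit a fixed sequence of kernel calls with pre-resolved tensor shapes, baked-in weights, and pre-computed arena offsets.

```cpp
// Hypothetical output of an offline TFLM interpreter run: the model's
// operator graph has been unrolled into a fixed call sequence, so no
// flatbuffer parsing or operator dispatch happens on the target.
#include <cstdint>
#include <cstring>

// Tensor arena sized at generation time; offsets below are pre-computed.
static uint8_t g_arena[4096];

// Kernels would normally come from the TFLM reference kernel library;
// a trivial fully-connected stand-in keeps this sketch self-contained.
static void FullyConnected(const int8_t* in, const int8_t* weights,
                           const int32_t* bias, int8_t* out,
                           int in_len, int out_len) {
  for (int o = 0; o < out_len; ++o) {
    int32_t acc = bias[o];
    for (int i = 0; i < in_len; ++i) acc += in[i] * weights[o * in_len + i];
    // Saturate to int8 range, mimicking quantized-kernel behavior.
    out[o] = static_cast<int8_t>(acc > 127 ? 127 : (acc < -128 ? -128 : acc));
  }
}

// Weights and biases emitted as constant data at generation time
// (dummy values here).
static const int8_t kWeights[10 * 4] = {1};
static const int32_t kBias[4] = {0};

// Generated entry point: fixed shapes, fixed arena layout, no interpreter.
int8_t* RunInference(const int8_t* input /* 10 elements */) {
  int8_t* act0 = reinterpret_cast<int8_t*>(g_arena + 0);   // input copy
  int8_t* act1 = reinterpret_cast<int8_t*>(g_arena + 16);  // layer output
  std::memcpy(act0, input, 10);
  FullyConnected(act0, kWeights, kBias, act1, /*in_len=*/10, /*out_len=*/4);
  return act1;  // 4 output elements
}
```

Because the call sequence and memory layout are fixed ahead of time, the flatbuffer parser, operator resolver, and dispatch machinery never ship in the target binary, which is the interpretation and code-size overhead the description refers to.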
- Host: GitHub
- URL: https://github.com/tum-ei-eda/tflm-offline-interpreter
- Owner: tum-ei-eda
- License: apache-2.0
- Created: 2020-05-11T12:29:18.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-06-02T11:31:54.000Z (over 5 years ago)
- Last Synced: 2023-03-05T23:04:26.666Z (over 2 years ago)
- Language: C++
- Size: 29.3 KB
- Stars: 4
- Watchers: 3
- Forks: 2
- Open Issues: 1