https://github.com/parlaynu/inference-tvm
Export ONNX models to Apache TVM and run inference in containerized environments.
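The repository itself ships Dockerfiles rather than documented scripts, so as orientation, here is a minimal sketch of the ONNX-to-TVM compile step using TVM's Relay frontend. The file name `model.onnx`, the input name `input`, and the shape `(1, 3, 224, 224)` are assumptions for illustration, not values taken from this repo.

```python
# Sketch: compile an ONNX model to a TVM shared library.
# "model.onnx", "input", and the shape below are hypothetical.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")

# Map input names to shapes so Relay can infer tensor types.
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# target="llvm" builds for the host CPU; "cuda" would target an
# NVIDIA GPU such as the Jetson Nano named in the repo topics.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Export an artifact the TVM runtime can load inside a container.
lib.export_library("model.so")
```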
- Host: GitHub
- URL: https://github.com/parlaynu/inference-tvm
- Owner: parlaynu
- License: MIT
- Created: 2023-11-30T05:18:47.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-12-01T08:30:09.000Z (about 2 years ago)
- Last Synced: 2024-01-26T03:40:29.414Z (about 2 years ago)
- Topics: apache-tvm, cuda, docker, jetson-nano, onnx, raspberrypi4, x86-64
- Language: Dockerfile
- Homepage:
- Size: 16.6 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
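For the inference half of the description, a companion sketch that loads the compiled artifact and runs it; it reuses the hypothetical `model.so` and `input` names from the compile sketch above and feeds dummy data in place of a real preprocessed image.

```python
# Sketch: load the exported TVM library and run one inference.
# "model.so" and "input" follow the hypothetical compile sketch.
import numpy as np
import tvm
from tvm.contrib import graph_executor

dev = tvm.cpu(0)  # tvm.cuda(0) on a CUDA-capable device
lib = tvm.runtime.load_module("model.so")
module = graph_executor.GraphModule(lib["default"](dev))

# Dummy input; a real deployment would pass preprocessed data.
data = np.random.rand(1, 3, 224, 224).astype("float32")
module.set_input("input", data)
module.run()
out = module.get_output(0).numpy()
print(out.shape)
```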