https://github.com/justinchuby/onnxruntime-easy
Simplified APIs for onnxruntime
- Host: GitHub
- URL: https://github.com/justinchuby/onnxruntime-easy
- Owner: justinchuby
- License: MIT
- Created: 2025-03-18T23:34:53.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-03-19T00:39:57.000Z (3 months ago)
- Last Synced: 2025-03-19T01:25:48.296Z (3 months ago)
- Topics: ai, machine-learning, onnx, onnxruntime
- Language: Python
- Size: 6.84 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# onnxruntime-easy
Simplified APIs for onnxruntime
## Usage
```py
import numpy as np
import ml_dtypes

import onnxruntime_easy as ort_easy

# Simple `load` method that handles setting up the ONNX Runtime inference session.
# All session options are discoverable in the load function.
# You can control the providers if the default is not what you need.
model = ort_easy.load("model.onnx", device="cpu")

# Supports all ONNX dtypes via ml_dtypes or dlpack.
# Works with any ndarray that implements the __array__ interface.
input = ort_easy.ort_value(np.random.rand(1, 3, 299, 299).astype(ml_dtypes.bfloat16))
output = model(input)

# Or automatically share data on device (like cuda) with dlpack
import torch

model = ort_easy.load("model.onnx", device="cuda")
input_tensor = ort_easy.ort_value(torch.rand(1, 3, 299, 299, device="cuda"))
output = model(input_tensor)

# Use a context manager to control the outputs you get
with model.set_outputs("output1"):
    output1 = model(input_tensor)
```
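
For context, here is a minimal sketch of what the same CPU inference looks like with the stock onnxruntime API. This comparison is not part of this project: the input name "x" is hypothetical (query `session.get_inputs()` for your model's actual names), and stock onnxruntime is fed a plain float32 array here rather than an ml_dtypes bfloat16 one.

```py
import numpy as np
import onnxruntime as ort

# Stock onnxruntime: set up the session and name the providers explicitly.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# "x" is a hypothetical input name; check session.get_inputs() for the real one.
x = np.random.rand(1, 3, 299, 299).astype(np.float32)
outputs = session.run(None, {"x": x})  # None means "return all model outputs"
```

This is roughly the boilerplate that `ort_easy.load` and `ort_easy.ort_value` are meant to fold away.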