https://github.com/ami-iit/onnx-cpp-benchmark
A simple tool to profile ONNX inference with C++ APIs.
- Host: GitHub
- URL: https://github.com/ami-iit/onnx-cpp-benchmark
- Owner: ami-iit
- License: BSD-3-Clause
- Created: 2023-06-29T12:29:53.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-11-15T12:51:17.000Z (almost 2 years ago)
- Last Synced: 2024-04-16T01:34:43.041Z (over 1 year ago)
- Topics: gpu, onnx, onnxruntime, onnxruntime-gpu, pixi
- Language: C++
- Size: 80.1 KB
- Stars: 1
- Watchers: 4
- Forks: 1
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
# onnx-cpp-benchmark
A simple tool to profile ONNX inference with C++ APIs.
## Installation
### With conda-forge dependencies
#### Linux/macOS
~~~
mamba create -n onnxcppbenchmark compilers cli11 onnxruntime=*=*cuda cmake ninja pkg-config cudnn cudatoolkit onnxruntime-cpp=*=*cuda
mamba activate onnxcppbenchmark
git clone https://github.com/ami-iit/onnx-cpp-benchmark
cd onnx-cpp-benchmark
mkdir build
cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX ..
ninja
~~~

#### Windows
~~~
mamba create -n onnxcppbenchmark compilers cli11 onnxruntime=*=*cuda cmake ninja pkg-config cudnn cudatoolkit onnxruntime-cpp=*=*cuda
mamba activate onnxcppbenchmark
git clone https://github.com/ami-iit/onnx-cpp-benchmark
cd onnx-cpp-benchmark
mkdir build
cd build
cmake -GNinja -DCMAKE_INSTALL_PREFIX=%CONDA_PREFIX%\Library ..
ninja
~~~

## Usage
Download a simple `.onnx` file and run the benchmark on it.
```shell
curl -L https://huggingface.co/ami-iit/mann/resolve/3a6fa8fe38d39deae540e4aca06063e9f2b53380/ergocubSN000_26j_49e.onnx -o ergocubSN000_26j_49e.onnx
# Use default options
onnx-cpp-benchmark ergocubSN000_26j_49e.onnx

# Specify custom options
onnx-cpp-benchmark ergocubSN000_26j_49e.onnx --iterations 100 --batch_size 5 --backend onnxruntimecpu
```

Current supported backends:
* `onnxruntimecpu` : [ONNX Runtime](https://onnxruntime.ai/) with CPU
* `onnxruntimecuda` : [ONNX Runtime](https://onnxruntime.ai/) with CUDA

## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
## License
[BSD 3-Clause](https://choosealicense.com/licenses/bsd-3-clause/)