https://github.com/princep/tensorrt-sample-on-threads
A tutorial for getting started on running TensorRT engines and Deep Learning Accelerator (DLA) models on threads
- Host: GitHub
- URL: https://github.com/princep/tensorrt-sample-on-threads
- Owner: PrinceP
- Created: 2024-09-14T12:00:35.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-09-14T12:07:10.000Z (10 months ago)
- Last Synced: 2025-02-22T13:44:01.925Z (4 months ago)
- Topics: cpp, deep-learning-accelerator, dla, mnist, nvcc, tensorrt, tensorrt-inference, threads
- Language: C++
- Homepage:
- Size: 2.93 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# TensorRT sample for running engines on different threads
Here is a sample that runs the GPU and the DLAs at the same time.
1. Prepare TensorRT engines for GPU and DLA with `trtexec`. For example:

For GPU:
```sh
trtexec --onnx=/usr/src/tensorrt/data/mnist/mnist.onnx --saveEngine=gpu.engine
```

For DLA:
```sh
trtexec --onnx=/usr/src/tensorrt/data/mnist/mnist.onnx --useDLACore=0 --allowGPUFallback --saveEngine=dla.engine
```

2. Compile the repo:
```sh
make
```

3. Test

Put the gpu.engine and dla.engine files generated in step 1 into the cloned repo.
The command runs like this:
```sh
./test ... # -1: GPU, 0: DLA0, 1: DLA1
```

Ex: Run GPU+DLA0+DLA1:
```sh
./test -1 0 1
```