Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/PrinceP/tensorrt-cpp-for-onnx
TensorRT codebase for inference in C++ of all major neural architectures using ONNX.
- Host: GitHub
- URL: https://github.com/PrinceP/tensorrt-cpp-for-onnx
- Owner: PrinceP
- License: gpl-3.0
- Created: 2024-03-08T18:57:48.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-08-04T10:29:42.000Z (7 months ago)
- Last Synced: 2024-08-07T05:05:06.410Z (7 months ago)
- Topics: rt-detr, yolo-world, yolo-world-v2, yolov10, yolov8, yolov8-classification, yolov8-detection, yolov8-obb, yolov8-pose, yolov8-seg, yolov9
- Language: C++
- Homepage:
- Size: 22.5 MB
- Stars: 4
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-yolo-object-detection - PrinceP/tensorrt-cpp-for-onnx : TensorRT codebase for inference in C++ of all major neural architectures using ONNX. (Lighter and Deployment Frameworks)
README
# TENSORRT CPP FOR ONNX

TensorRT codebase in C++ for inference of all major neural architectures using ONNX, with dynamic batching.
## NVIDIA Driver

```bash
wget https://developer.download.nvidia.com/compute/cuda/12.4.0/local_installers/cuda_12.4.0_550.54.14_linux.run
sudo sh cuda_12.4.0_550.54.14_linux.run
```
## Docker

```bash
sudo docker build -t trt_24.02_opencv .
sudo docker run --rm --network="host" -v $(pwd):/app -it --runtime nvidia trt_24.02_opencv bash
```

## Models

### YOLO-World

Model Conversion
url = https://github.com/AILab-CVC/YOLO-World

- Clone YOLO-World

```bash
git clone https://github.com/AILab-CVC/YOLO-World

# Follow the installation steps:
# https://github.com/AILab-CVC/YOLO-World?tab=readme-ov-file#1-installation

# Define custom classes (any names can be used)
echo '[["helmet"], ["head"], ["sunglasses"]]' > custom_class.json

PYTHONPATH=./ python3 deploy/export_onnx.py configs/pretrain/yolo_world_v2_s_vlpan_bn_2e-3_100e_4x8gpus_obj365v1_goldg_train_lvis_minival.py yolo_world_v2_s_obj365v1_goldg_pretrain-55b943ea.pth --custom-text custom_class.json --opset 12 --without-nms

# After the fix https://github.com/AILab-CVC/YOLO-World/pull/416
python3 deploy/onnx_demo.py ./work_dirs/yolo_world_v2_s_obj365v1_goldg_pretrain-55b943ea.onnx ~/disk1/uncanny/projects/tensorrt-cpp-for-onnx/data/ custom_class.json --onnx-nms

git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Adjust settings in ./examples/yolo-world/main.cpp:
# 0.019 /*score_threshold*/, 0.7 /*iou_threshold*/, 300 /*max_detections*/

# Move the .onnx file to 'examples/yolo-world'
cp ./work_dirs/.onnx /app/examples/yolo-world

mkdir build
cd build
cmake ..
make -j4

./yolo-world /app/examples/yolo-world/.onnx /app/data/

# Check the results folder
```
**Results [yolo_world_v2_s_obj365v1_goldg_pretrain, Batchsize = 1, Model size = 640x640]**
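The score/IoU/max-detection settings mentioned for main.cpp map onto a standard confidence filter followed by greedy NMS. A minimal Python sketch of that post-processing (an illustration of the technique, not the repo's actual C++ code; box format `[x1, y1, x2, y2]` is an assumption):

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, score_threshold=0.019, iou_threshold=0.7, max_detections=300):
    # Drop low scores, sort descending, greedily keep boxes that do not
    # overlap an already-kept box by more than iou_threshold.
    order = sorted((i for i, s in enumerate(scores) if s >= score_threshold),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
            if len(keep) == max_detections:
                break
    return keep
```

A very low score threshold such as 0.019 keeps almost every candidate and lets the IoU stage do the pruning.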
### RT-DETR

Model Conversion

url = https://github.com/lyuwenyu/RT-DETR.git

- Clone RT-DETR

```bash
git clone https://github.com/lyuwenyu/RT-DETR.git

# Follow the steps from either version (both work):
# Version 1: https://github.com/lyuwenyu/RT-DETR/blob/main/rtdetr_pytorch/README.md#todo
# Version 2: https://github.com/lyuwenyu/RT-DETR/tree/main/rtdetrv2_pytorch#quick-start

git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/rt-detr'
cp .onnx /app/examples/rt-detr

mkdir build
cd build
cmake ..
make -j4

./rt-detr /app/examples/rt-detr/.onnx /app/data/

# Check the results folder
```
**Results [RT-DETRv2-S, Batchsize = 2, Model size = 640x640]**
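Unlike the YOLO family, RT-DETR is end-to-end and needs no NMS, so decoding reduces to a confidence filter plus box-format conversion. A rough sketch, assuming normalized `[cx, cy, w, h]` boxes (the actual exported output layout may differ):

```python
def decode_rtdetr(boxes, scores, labels, img_w, img_h, conf_threshold=0.5):
    # Keep detections above the confidence threshold and convert
    # normalized center-size boxes to pixel-space corner boxes.
    dets = []
    for (cx, cy, w, h), s, c in zip(boxes, scores, labels):
        if s < conf_threshold:
            continue
        x1 = (cx - w / 2) * img_w
        y1 = (cy - h / 2) * img_h
        x2 = (cx + w / 2) * img_w
        y2 = (cy + h / 2) * img_h
        dets.append((c, s, (x1, y1, x2, y2)))
    return dets
```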
### YOLOV10

Model Conversion

url = https://github.com/THU-MIG/yolov10

- Clone yolov10

```bash
git clone https://github.com/THU-MIG/yolov10

yolo export model=yolov10n/s/m/b/l/x.pt format=onnx opset=13 simplify dynamic

git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/yolov10'
cp .onnx /app/examples/yolov10

mkdir build
cd build
cmake ..
make -j4

./yolov10 /app/examples/yolov10/.onnx /app/data/

# Check the results folder
```
**Results [YOLOv10m, Batchsize = 2, Model size = 640x640]**
### YOLOV9

Model Conversion

url = https://github.com/WongKinYiu/yolov9.git
commit 380284cb66817e9ffa30a80cad4c1b110897b2fb

- Clone yolov9

```bash
git clone https://github.com/WongKinYiu/yolov9

python3 export.py --weights .pt --include onnx_end2end

git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the -end2end.onnx file to 'examples/yolov9'
cp -end2end.onnx /app/examples/yolov9

mkdir build
cd build
cmake ..
make -j4

./yolov9 /app/examples/yolov9/-end2end.onnx /app/data/

# Check the results folder
```
**Results [YOLOv9-C, Batchsize = 2, Model size = 640x640]**
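The `onnx_end2end` export bakes NMS into the graph via the `EfficientNMS_TRT` plugin (the plugin named in the workspace error under NOTES), so the engine emits fixed-size tensors: a detection count, plus padded boxes/scores/classes. A sketch of reading them for one image, with output names assumed from the plugin's usual convention (your export's binding names may differ):

```python
def parse_end2end(num_dets, boxes, scores, classes):
    # Only the first num_dets entries of the padded arrays are valid.
    n = int(num_dets)
    return [(classes[i], scores[i], boxes[i]) for i in range(n)]
```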
### YOLOV8-Detect

Model Conversion

url = https://github.com/ultralytics/ultralytics
ultralytics==8.1.24

- Install the ultralytics package in Python

```python
from ultralytics import YOLO

model = YOLO('yolov8s.pt')
model.export(format='onnx', dynamic=True)
```

```bash
git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/yolov8'
cp .onnx /app/examples/yolov8

mkdir build
cd build
cmake ..
make -j4

./yolov8-detect /app/examples/yolov8/.onnx /app/data/

# Check the results folder
```
**Results [YOLOv8s, Batchsize = 2, Model size = 640x640]**
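YOLO-style models expect a fixed square input (640x640 here), which is usually produced by letterboxing: scale the image to fit while preserving aspect ratio, then pad the remainder. A sketch of the geometry (the repo's exact preprocessing may differ; names are illustrative):

```python
def letterbox_params(src_w, src_h, dst=640):
    # Uniform scale so the image fits inside dst x dst, then symmetric padding.
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x, pad_y = (dst - new_w) / 2, (dst - new_h) / 2
    return scale, new_w, new_h, pad_x, pad_y
```

The same `scale`, `pad_x`, and `pad_y` are reused after inference to map detections back to the original image: subtract the padding, then divide by the scale.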
### YOLOV8-Segment

Model Conversion

url = https://github.com/ultralytics/ultralytics
ultralytics==8.1.24

- Install the ultralytics package in Python

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n-seg.pt')

# Export the model
model.export(format='onnx', dynamic=True)
```

```bash
git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/yolov8'
cp .onnx /app/examples/yolov8

mkdir build
cd build
cmake ..
make -j4

./yolov8-segment /app/examples/yolov8/.onnx /app/data/

# Check the results folder
```
**Results [YOLOv8n, Batchsize = 2, Model size = 640x640]**
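For the segmentation head, each detection carries a vector of mask coefficients that is combined linearly with the model's prototype masks and passed through a sigmoid. A toy sketch of that assembly (real YOLOv8-seg uses 32 coefficients over a 160x160 prototype grid; sizes here are illustrative):

```python
import math

def assemble_mask(coeffs, protos):
    # protos: k prototype masks, each flattened to h*w values.
    # coeffs: k per-detection weights. Output: per-pixel probabilities.
    hw = len(protos[0])
    mask = [sum(c * p[i] for c, p in zip(coeffs, protos)) for i in range(hw)]
    return [1.0 / (1.0 + math.exp(-m)) for m in mask]
```

The resulting probability map is then thresholded (commonly at 0.5) and cropped to the detection's bounding box.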
### YOLOV8-Pose

Model Conversion

url = https://github.com/ultralytics/ultralytics
ultralytics==8.1.24

- Install the ultralytics package in Python

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n-pose.pt')

# Export the model
model.export(format='onnx', dynamic=True)
```

```bash
git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/yolov8'
cp .onnx /app/examples/yolov8

mkdir build
cd build
cmake ..
make -j4

./yolov8-pose /app/examples/yolov8/.onnx /app/data/

# Check the results folder
```
**Results [YOLOv8n, Batchsize = 2, Model size = 640x640]**
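In the pose model, each detection row typically ends with 17 COCO keypoints encoded as `(x, y, confidence)` triplets appended after the box and score fields. A sketch of slicing them out (the exact row layout depends on the export, so treat this as illustrative):

```python
def split_keypoints(row, num_kpts=17):
    # Take the trailing num_kpts * 3 values and group them into triplets.
    kpts = row[-num_kpts * 3:]
    return [(kpts[3 * i], kpts[3 * i + 1], kpts[3 * i + 2])
            for i in range(num_kpts)]
```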
### YOLOV8-OBB

Model Conversion

url = https://github.com/ultralytics/ultralytics
ultralytics==8.1.24

- Install the ultralytics package in Python

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n-obb.pt')

# Export the model
model.export(format='onnx', dynamic=True)
```

```bash
git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/yolov8'
cp .onnx /app/examples/yolov8

mkdir build
cd build
cmake ..
make -j4

./yolov8-obb /app/examples/yolov8/.onnx /app/data/obb/

# Check the results folder
```
**Results [YOLOv8n, Batchsize = 2, Model size = 640x640]**
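The OBB model predicts oriented boxes as `(cx, cy, w, h, angle)` rather than axis-aligned corners. Drawing one requires rotating the four corner points around the center; a sketch of that conversion (angle conventions vary between exports, so verify against your model):

```python
import math

def obb_corners(cx, cy, w, h, angle):
    # Rotate the four half-extent corners by `angle` (radians) around (cx, cy).
    c, s = math.cos(angle), math.sin(angle)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]
```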
### YOLOV8-Classify

Model Conversion

url = https://github.com/ultralytics/ultralytics
ultralytics==8.1.24

- Install the ultralytics package in Python

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n-cls.pt')

# Export the model
model.export(format='onnx', dynamic=True)
```

```bash
git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/yolov8'
cp .onnx /app/examples/yolov8

mkdir build
cd build
cmake ..
make -j4

./yolov8-classify /app/examples/yolov8/.onnx /app/data/classify/

# Check the results folder
```
**Results [YOLOv8n, Batchsize = 2, Model size = 224x224]**
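Classification post-processing is the simplest of the heads: softmax over the class scores, then take the top index. A minimal sketch (using a max-shift for numerical stability):

```python
import math

def top1(logits):
    # Numerically stable softmax, then argmax.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]
```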
### YOLOV5-Face

Model Conversion

url = https://github.com/deepcam-cn/yolov5-face

- Install onnx==1.16.2, tqdm, thop, seaborn, torch==1.9, torchvision==0.10
- Changes for dynamic shape

```diff
diff --git a/export.py b/export.py
index 1aa7dae..2502e23 100644
--- a/export.py
+++ b/export.py
@@ -75,14 +75,22 @@ if __name__ == '__main__':
print('\nStarting ONNX export with onnx %s...' % onnx.__version__)
f = opt.weights.replace('.pt', '.onnx') # filename
model.fuse() # only for ONNX
- input_names=['input']
- output_names=['output']
- torch.onnx.export(model, img, f, verbose=False, opset_version=12,
+
+ # Dynamic batching support
+ input_names = [ "input" ]
+ output_names = [ "output" ]
+ dynamic_axes={'input' : {0 : 'batch_size'}, 'output' : {0 : 'batch_size'}}
+ dynamic_axes['input'][2] = 'height'
+ dynamic_axes['input'][3] = 'width'
+
+ torch.onnx.export(model, img, f, verbose=True, opset_version=12,
input_names=input_names,
output_names=output_names,
+ # dynamic_axes = dynamic_axes
dynamic_axes = {'input': {0: 'batch'},
'output': {0: 'batch'}
} if opt.dynamic else None)
+ # )
# Checks
onnx_model = onnx.load(f) # load onnx model
```
- Convert PyTorch to ONNX
```bash
python3 export.py --weights ./yolov5s-face.pt --dynamic
```

```bash
git clone https://github.com/PrinceP/tensorrt-cpp-for-onnx

# Move the .onnx file to 'examples/yolov5-face'
cp .onnx /app/examples/yolov5-face

mkdir build
cd build
cmake ..
make -j4

./yolov5-face /app/examples/yolov5-face/.onnx /app/data/yolov5-face/

# Check the results folder
```
**Results [YOLOv5s-face, Batchsize = 2, Model size = 640x640]**
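The diff above boils down to constructing a `dynamic_axes` mapping for `torch.onnx.export`: batch is always marked dynamic, and height/width can optionally be marked dynamic too. A sketch that just builds the dict (the helper name is illustrative; in practice the result is passed as the `dynamic_axes=` argument):

```python
def make_dynamic_axes(dynamic_hw=False):
    # Axis 0 of both input and output is the batch dimension.
    axes = {'input': {0: 'batch'}, 'output': {0: 'batch'}}
    if dynamic_hw:
        # Axes 2 and 3 of an NCHW input are height and width.
        axes['input'][2] = 'height'
        axes['input'][3] = 'width'
    return axes
```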
### NOTES

Issues

- Dynamic batching is supported. The batch size and image sizes can be updated in the codebase.
- The dynamic batch issue was resolved for yolov10: https://github.com/THU-MIG/yolov10/issues/27
- Dynamic batching is not supported for YOLO-World.
- If a size issue happens while building the engine, increase the workspace size:
```bash
Internal error: plugin node /end2end/EfficientNMS_TRT requires XXX bytes of scratch space, but only XXX is available. Try increasing the workspace size with IBuilderConfig::setMemoryPoolLimit().
```
```cpp
config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1U << 26);
// 1U << 26 = 2^26 bytes (64 MiB); raise this limit if the error persists.
// On older TensorRT versions use the deprecated equivalent:
// config->setMaxWorkspaceSize(1U << 26);
```