An open API service indexing awesome lists of open source software.

Projects in Awesome Lists tagged with tensorrt-inference

A curated list of projects in awesome lists tagged with tensorrt-inference.

https://github.com/LCH1238/bevdet-tensorrt-cpp

BEVDet implemented in TensorRT and C++, achieving real-time performance on NVIDIA Orin

3ddetection bev tensorrt-inference

Last synced: 19 Mar 2025

https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference

NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU

ai deep-learning deeplearning dnn gpu jetson nvidia ros ros2 ros2-humble tao tensorrt tensorrt-inference triton triton-inference-server

Last synced: 18 Mar 2025

https://github.com/yuvraj108c/ComfyUI-Depth-Anything-Tensorrt

ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster)

comfyui comfyui-nodes depth-anything depth-map onnx stable-diffusion tensorrt tensorrt-inference

Last synced: 19 Dec 2024

https://github.com/emptysoal/TensorRT-YOLOv8

Based on TensorRT v8.0+, deploys YOLOv8 detection, pose, segmentation, and tracking with C++ and Python APIs.

bytetrack cpp deep-learning detection pose python3 segmentation tensorrt tensorrt-conversion tensorrt-inference tracking yolov8

Last synced: 15 Dec 2024

https://github.com/qengineering/yolov8-tensorrt-jetson_nano

A lightweight C++ implementation of YOLOv8 running on NVIDIA's TensorRT engine

jetson jetson-nano jetson-orin jetson-orin-nano nvidia tensorrt tensorrt-engine tensorrt-inference yolov5 yolov8 yolov8s

Last synced: 13 Apr 2025

https://github.com/lona-cn/vision-simple

A lightweight, cross-platform C++ vision inference library supporting YOLOv10, YOLOv11, PaddleOCR, and EasyOCR, using ONNX Runtime/TVM with multiple execution providers.

cuda directml easyocr ocr onnxruntime paddleocr tensorrt-inference tvm yolo

Last synced: 02 Feb 2025

https://github.com/parlaynu/inference-tensorrt

Convert ONNX models to TensorRT engines and run inference in containerized environments

docker jetson-nano nvidia-gpu onnx python pyzmq tensorrt-inference zeromq

Last synced: 09 May 2025
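Entries like the one above follow the common TensorRT workflow: export a model to ONNX, then build a serialized engine from it. A minimal sketch of that conversion step, phrased around NVIDIA's `trtexec` CLI (`--onnx` and `--saveEngine` are standard trtexec flags; the file paths and the helper function are illustrative assumptions, not taken from the repo):

```python
import shlex


def build_trtexec_cmd(onnx_path: str, engine_path: str, fp16: bool = True) -> list[str]:
    """Assemble a trtexec command that converts an ONNX model to a TensorRT engine.

    The paths are placeholders; on a machine with TensorRT installed you would
    pass this list to subprocess.run(cmd, check=True).
    """
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        cmd.append("--fp16")  # request a half-precision engine where supported
    return cmd


# Print the command rather than running it, since TensorRT may not be installed here.
print(shlex.join(build_trtexec_cmd("model.onnx", "model.engine")))
```

The resulting `.engine` file is hardware-specific, which is why containerized builds (as in the project above) help keep the build environment reproducible.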

https://github.com/maximedebarbat/dolphin

Dolphin is a Python toolkit that speeds up TensorRT inference by providing CUDA-accelerated processing.

cuda python tensorrt-inference

Last synced: 15 Apr 2025

https://github.com/cuixing158/yolo-tensorrt-cpp

Deployment and quantization library for PC and Jetson, with INT8 quantization; supports YOLOv3/v4/v5

tensorrt tensorrt-engine tensorrt-inference yolov3 yolov4 yolov5

Last synced: 28 Feb 2025

https://github.com/princep/tensorrt-sample-on-threads

A getting-started tutorial for running TensorRT engine and Deep Learning Accelerator (DLA) models on threads

cpp deep-learning-accelerator dla mnist nvcc tensorrt tensorrt-inference threads

Last synced: 23 Feb 2025

https://github.com/asitiaf/llm-getting-started

Practical, beginner-friendly LLM projects using Python, LangChain, and LangSmith. Modular, reusable, and easy to run.

ai chatgpt cloudnative course emacs genai jupyter-notebook obsidian-md productivity python self-hosted stt tensorrt-inference whatsapp-ai

Last synced: 30 Mar 2025

https://github.com/mohamedsamirx/yolov12-tensorrt-cpp

YOLOv12 inference using C++, TensorRT, and CUDA

cpp cuda tensorrt tensorrt-inference yolo yolov12

Last synced: 22 Feb 2025