{"id":13436023,"url":"https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference","last_synced_at":"2025-03-18T12:30:57.941Z","repository":{"id":41369675,"uuid":"416922955","full_name":"NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference","owner":"NVIDIA-ISAAC-ROS","description":"NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU","archived":false,"fork":false,"pushed_at":"2025-02-28T01:46:59.000Z","size":415,"stargazers_count":109,"open_issues_count":14,"forks_count":16,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-02-28T08:14:52.927Z","etag":null,"topics":["ai","deep-learning","deeplearning","dnn","gpu","jetson","nvidia","ros","ros2","ros2-humble","tao","tensorrt","tensorrt-inference","triton","triton-inference-server"],"latest_commit_sha":null,"homepage":"https://developer.nvidia.com/isaac-ros-gems","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/NVIDIA-ISAAC-ROS.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-10-13T23:06:48.000Z","updated_at":"2025-02-28T00:39:01.000Z","dependencies_parsed_at":"2024-05-31T10:26:42.128Z","dependency_job_id":"47e300fe-68d3-48df-8093-02de526afb89","html_url":"https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference","commit_stats":null,"previous_names":[],"tags_count":14,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA-ISAAC-ROS%2Fisaac_ros_dnn_inference","tags_url":"https://repos.eco
syste.ms/api/v1/hosts/GitHub/repositories/NVIDIA-ISAAC-ROS%2Fisaac_ros_dnn_inference/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA-ISAAC-ROS%2Fisaac_ros_dnn_inference/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA-ISAAC-ROS%2Fisaac_ros_dnn_inference/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/NVIDIA-ISAAC-ROS","download_url":"https://codeload.github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244222156,"owners_count":20418459,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","deep-learning","deeplearning","dnn","gpu","jetson","nvidia","ros","ros2","ros2-humble","tao","tensorrt","tensorrt-inference","triton","triton-inference-server"],"created_at":"2024-07-31T03:00:42.484Z","updated_at":"2025-03-18T12:30:57.934Z","avatar_url":"https://github.com/NVIDIA-ISAAC-ROS.png","language":"C++","readme":"# Isaac ROS DNN Inference\n\nNVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU.\n\n\u003cdiv align=\"center\"\u003e\u003cimg alt=\"bounding box for people detection\" src=\"https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_peoplenet.jpg/\" width=\"300px\"/\u003e\n\u003cimg alt=\"segmentation mask for people detection\" 
src=\"https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_inference_peoplesemsegnet.jpg/\" width=\"300px\"/\u003e\u003c/div\u003e\n\n## Webinar Available\n\nLearn how to use this package by watching our on-demand webinar:\n[Accelerate YOLOv5 and Custom AI Models in ROS with NVIDIA Isaac](https://gateway.on24.com/wcc/experience/elitenvidiabrill/1407606/3998202/isaac-ros-webinar-series)\n\n---\n\n## Overview\n\nIsaac ROS DNN Inference contains ROS 2 packages for performing DNN\ninference, providing AI-based perception for robotics applications. DNN\ninference uses a pre-trained DNN model to ingest an input Tensor and\noutput a prediction to an output Tensor.\n\n\u003cdiv align=\"center\"\u003e\u003ca class=\"reference internal image-reference\" href=\"https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_inference_nodegraph.png/\"\u003e\u003cimg alt=\"image\" src=\"https://media.githubusercontent.com/media/NVIDIA-ISAAC-ROS/.github/main/resources/isaac_ros_docs/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_inference_nodegraph.png/\" width=\"800px\"/\u003e\u003c/a\u003e\u003c/div\u003e\n\nAbove is a typical graph of nodes for DNN inference on image data. The\ninput image is resized to match the input resolution of the DNN; the\nimage resolution may be reduced to improve DNN inference performance,\nas inference cost typically scales directly with the number of pixels\nin the image.\nDNN inference requires input Tensors, so a DNN encoder node is used to\nconvert from an input image to Tensors, including any data\npre-processing that is required for the DNN model. 
Once DNN inference is\nperformed, the DNN decoder node is used to convert the output Tensors to\nresults that can be used by the application.\n\nTensorRT and Triton are two separate ROS nodes to perform DNN inference.\nThe TensorRT node uses\n[TensorRT](https://developer.nvidia.com/tensorrt) to provide\nhigh-performance deep learning inference. TensorRT optimizes the DNN\nmodel for inference on the target hardware, including Jetson and\ndiscrete GPUs. It also supports specific operations that are commonly\nused by DNN models. For newer or bespoke DNN models, TensorRT may not\nsupport inference on the model. For these models, use the Triton node.\n\nThe Triton node uses the [Triton Inference\nServer](https://developer.nvidia.com/nvidia-triton-inference-server),\nwhich provides a compatible frontend supporting a combination of\ndifferent inference backends (e.g. ONNX Runtime, TensorRT Engine Plan,\nTensorFlow, PyTorch). In-house benchmark results measure little\ndifference between using TensorRT directly or configuring Triton to use\nTensorRT as a backend.\n\nSome DNN models may require custom DNN encoders to convert the input\ndata to the Tensor format needed for the model, and custom DNN decoders\nto convert from output Tensors into results that can be used in the\napplication. Leverage the DNN encoder and DNN decoder node(s) for image\nbounding box detection and image segmentation, or your own custom\nnode(s).\n\n\u003e [!Note]\n\u003e DNN inference can be performed on different types of input\n\u003e data, including audio, video, text, and various sensor data, such as\n\u003e LIDAR, camera, and RADAR. This package provides implementations for\n\u003e DNN encode and DNN decode functions for images, which are commonly\n\u003e used for perception in robotics. 
The DNNs operate on Tensors for\n\u003e their input, output, and internal transformations, so the input image\n\u003e needs to be converted to a Tensor for DNN inferencing.\n\n## Isaac ROS NITROS Acceleration\n\nThis package is powered by [NVIDIA Isaac Transport for ROS (NITROS)](https://developer.nvidia.com/blog/improve-perception-performance-for-ros-2-applications-with-nvidia-isaac-transport-for-ros/), which leverages type adaptation and negotiation to optimize message formats and dramatically accelerate communication between participating nodes.\n\n## Performance\n\n| Sample Graph\u003cbr/\u003e\u003cbr/\u003e                                                                                                                                                                                       | Input Size\u003cbr/\u003e\u003cbr/\u003e     | AGX Orin\u003cbr/\u003e\u003cbr/\u003e                                                                                                                                                       | Orin NX\u003cbr/\u003e\u003cbr/\u003e                                                                                                                                                       | Orin Nano Super 8GB\u003cbr/\u003e\u003cbr/\u003e                                                                                                                                           | x86_64 w/ RTX 4090\u003cbr/\u003e\u003cbr/\u003e                                                                                                                                              
|\n|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [TensorRT Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/benchmarks/isaac_ros_tensor_rt_benchmark/scripts/isaac_ros_tensor_rt_dope_node.py)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003eDOPE\u003cbr/\u003e\u003cbr/\u003e            | VGA\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e  | [30.8 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-agx_orin.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e37 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e   | [15.5 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-orin_nx.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e55 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e   | [20.8 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-orin_nano.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e51 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e | [298 
fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-x86-4090.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e5.3 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e    |\n| [Triton Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/benchmarks/isaac_ros_triton_benchmark/scripts/isaac_ros_triton_dope_node.py)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003eDOPE\u003cbr/\u003e\u003cbr/\u003e                    | VGA\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e  | [31.2 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-agx_orin.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e340 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e     | [15.5 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-orin_nx.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e55 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e      | [22.2 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-orin_nano.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e490 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e   | [277 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-x86-4090.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e4.7 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e       |\n| [TensorRT Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/benchmarks/isaac_ros_tensor_rt_benchmark/scripts/isaac_ros_tensor_rt_ps_node.py)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003ePeopleSemSegNet\u003cbr/\u003e\u003cbr/\u003e   | 544p\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e | [489 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-agx_orin.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e4.6 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e     | [258 
fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-orin_nx.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e7.1 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e     | [269 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-orin_nano.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e6.2 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e   | [619 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-x86-4090.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e2.2 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e      |\n| [Triton Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/benchmarks/isaac_ros_triton_benchmark/scripts/isaac_ros_triton_ps_node.py)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003ePeopleSemSegNet\u003cbr/\u003e\u003cbr/\u003e           | 544p\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e | [216 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-agx_orin.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e5.5 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e        | [143 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-orin_nx.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e8.2 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e        | –\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e                                                                                                                                                   | [585 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-x86-4090.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e2.5 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e         |\n| [DNN Image Encoder 
Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/benchmarks/isaac_ros_dnn_image_encoder_benchmark/scripts/isaac_ros_dnn_image_encoder_node.py)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e | VGA\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e  | [339 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-agx_orin.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e13 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e | [375 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-orin_nx.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e12 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e | –\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e                                                                                                                                                   | [480 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-x86-4090.json)\u003cbr/\u003e\u003cbr/\u003e\u003cbr/\u003e6.0 ms @ 30Hz\u003cbr/\u003e\u003cbr/\u003e |\n\n---\n\n## Documentation\n\nPlease visit the [Isaac ROS Documentation](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/index.html) to learn how to use this repository.\n\n---\n\n## Packages\n\n* [`isaac_ros_dnn_image_encoder`](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_image_encoder/index.html)\n  * [Migration Guide](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_image_encoder/index.html#migration-guide)\n  * [API](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_dnn_image_encoder/index.html#api)\n* 
[`isaac_ros_tensor_proc`](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_tensor_proc/index.html)\n  * [API](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_tensor_proc/index.html#api)\n* [`isaac_ros_tensor_rt`](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_tensor_rt/index.html)\n  * [Quickstart](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_tensor_rt/index.html#quickstart)\n  * [Troubleshooting](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_tensor_rt/index.html#troubleshooting)\n  * [API](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_tensor_rt/index.html#api)\n* [`isaac_ros_triton`](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_triton/index.html)\n  * [Quickstart](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_triton/index.html#quickstart)\n  * [Troubleshooting](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_triton/index.html#troubleshooting)\n  * [API](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_dnn_inference/isaac_ros_triton/index.html#api)\n\n## Latest\n\nUpdate 2024-12-10: Update to be compatible with JetPack 6.1\n","funding_links":[],"categories":["C++"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FNVIDIA-ISAAC-ROS%2Fisaac_ros_dnn_inference","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FNVIDIA-ISAAC-ROS%2Fisaac_ros_dnn_inference","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FNVIDIA-ISAAC-ROS%2Fisaac_ros_dnn_inference/lists"}