{"id":14958702,"url":"https://github.com/tensorflow/tensorrt","last_synced_at":"2025-10-24T16:31:35.104Z","repository":{"id":38896761,"uuid":"157606202","full_name":"tensorflow/tensorrt","owner":"tensorflow","description":"TensorFlow/TensorRT integration","archived":false,"fork":false,"pushed_at":"2023-11-30T17:40:09.000Z","size":3540,"stargazers_count":740,"open_issues_count":108,"forks_count":226,"subscribers_count":33,"default_branch":"master","last_synced_at":"2025-02-02T17:54:05.821Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorflow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-11-14T20:22:24.000Z","updated_at":"2025-01-27T23:01:13.000Z","dependencies_parsed_at":"2024-09-24T13:28:41.803Z","dependency_job_id":"a476387d-00cb-4bdc-a9b0-c8b556ba89b5","html_url":"https://github.com/tensorflow/tensorrt","commit_stats":{"total_commits":324,"total_committers":43,"mean_commits":7.534883720930233,"dds":0.6697530864197531,"last_synced_commit":"13b14ef7384f1eb497ba4787b55e27597507a329"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftensorrt","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftensorrt/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftensorrt/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/Git
Hub/repositories/tensorflow%2Ftensorrt/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorflow","download_url":"https://codeload.github.com/tensorflow/tensorrt/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":238004414,"owners_count":19400549,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-09-24T13:17:52.721Z","updated_at":"2025-10-24T16:31:33.527Z","avatar_url":"https://github.com/tensorflow.png","language":"Jupyter Notebook","readme":"# Documentation for TensorRT in TensorFlow (TF-TRT)\n\nTensorFlow-TensorRT (TF-TRT) is an integration of TensorFlow and TensorRT that leverages inference optimization on NVIDIA GPUs within the TensorFlow ecosystem. It provides a simple API that delivers substantial performance gains on NVIDIA GPUs with minimal effort. The documentation on how to accelerate inference in TensorFlow with TensorRT (TF-TRT) is here: https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html\n\nCheck out this [gentle introduction](https://www.youtube.com/watch?v=w7871kMiAs8) to TensorFlow TensorRT or watch this [quick walkthrough](https://www.youtube.com/watch?v=O-_K42EAlP0) example for more!\n# Examples for TensorRT in TensorFlow (TF-TRT)\n\nThis repository contains a number of different examples\nthat show how to use TF-TRT.\nTF-TRT is a part of TensorFlow\nthat optimizes TensorFlow graphs using\n[TensorRT](https://developer.nvidia.com/tensorrt).\nWe have used these examples to verify the accuracy and\nperformance of TF-TRT. 
For more information see\n[Verified Models](https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html#verified-models).\n\n## Examples\n\n* [Image Classification](tftrt/benchmarking-python/image_classification)\n* [Object Detection](tftrt/benchmarking-python/object_detection)\n\n\n# Using TensorRT in TensorFlow (TF-TRT)\n\nThis module provides the necessary bindings and introduces the\n`TRTEngineOp` operator, which wraps a subgraph in TensorRT.\nThis module is under active development.\n\n\n## Installing TF-TRT\n\nCurrently, TensorFlow nightly builds include TF-TRT by default,\nwhich means you don't need to install TF-TRT separately.\nYou can pull the latest TF containers from Docker Hub or\ninstall the latest TF pip package to get access to the latest TF-TRT.\n\nIf you want to use TF-TRT on the NVIDIA Jetson platform, you can find\nthe download links for the relevant TensorFlow pip packages here:\nhttps://docs.nvidia.com/deeplearning/dgx/index.html#installing-frameworks-for-jetson\n\nYou can also use [NVIDIA's TensorFlow container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorflow) (tested and published monthly).\n\n## Installing TensorRT\n\nIn order to make use of TF-TRT, you will need a local installation\nof TensorRT from the\n[NVIDIA Developer website](https://developer.nvidia.com/tensorrt).\nInstallation instructions for compatibility with TensorFlow are provided in the\n[TensorFlow GPU support](https://www.tensorflow.org/install/gpu) guide.\n\n\n## Documentation\n\nThe [TF-TRT documentation](https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html)\ngives an overview of the supported functionality, provides tutorials\nand verified models, and explains best practices along with troubleshooting guides.\n\n\n## Tests\n\nTF-TRT includes both Python tests and C++ unit tests.\nMost of the Python tests are located in the test directory\nand can be executed using `bazel test` or directly\nwith the Python command. 
Most of the C++ unit tests are\nused to test the conversion functions that convert each TF op to\na number of TensorRT layers.\n\n\n## Compilation\n\nIn order to compile the module, you need a local TensorRT installation\n(libnvinfer.so and the respective include files). During the configuration step,\nTensorRT should be enabled and the installation path should be set. If TensorRT was installed\nthrough a package manager (deb, rpm), the configure script should find the necessary\ncomponents on the system automatically. If it was installed from a tar package, the user\nhas to set the path to the location where the library is installed during configuration.\n\n```shell\nbazel build --config=cuda --config=opt //tensorflow/tools/pip_package:build_pip_package\nbazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/\n```\n\n\n## License\n\n[Apache License 2.0](LICENSE)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Ftensorrt","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorflow%2Ftensorrt","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Ftensorrt/lists"}