{"id":15157729,"url":"https://github.com/kanglang123/tensorrt-deploy","last_synced_at":"2026-03-07T03:31:34.601Z","repository":{"id":254602416,"uuid":"847011724","full_name":"kanglang123/tensorRT-deploy","owner":"kanglang123","description":"使用TensorRT部署模型的demo","archived":false,"fork":false,"pushed_at":"2024-08-25T02:10:09.000Z","size":1787,"stargazers_count":2,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-01-31T01:58:47.856Z","etag":null,"topics":["deep-learning","deployment","pytorch"],"latest_commit_sha":null,"homepage":"https://www.zhihu.com/column/c_1497987564452114432","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/kanglang123.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-08-24T15:42:00.000Z","updated_at":"2024-12-17T02:02:23.000Z","dependencies_parsed_at":"2024-09-22T06:01:24.591Z","dependency_job_id":"26e3f151-243a-4870-9bb8-59505e4b653d","html_url":"https://github.com/kanglang123/tensorRT-deploy","commit_stats":{"total_commits":4,"total_committers":2,"mean_commits":2.0,"dds":0.25,"last_synced_commit":"a4aec07bfaaeab4f3721e673f7cfa26183327ee0"},"previous_names":["kanglang123/tensorrt-deploy"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kanglang123%2FtensorRT-deploy","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kanglang123%2FtensorRT-deploy/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kanglang
123%2FtensorRT-deploy/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kanglang123%2FtensorRT-deploy/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/kanglang123","download_url":"https://codeload.github.com/kanglang123/tensorRT-deploy/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":237982048,"owners_count":19397184,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","deployment","pytorch"],"created_at":"2024-09-26T20:02:08.772Z","updated_at":"2025-10-24T14:30:42.684Z","avatar_url":"https://github.com/kanglang123.png","language":"Python","readme":"# tensorRT-deploy\nModel format conversion: pth -> onnx -> engine.\n\nA demo of deploying models with TensorRT.\n\nMany of the functions used in the original code have since been deprecated upstream, so it no longer runs as-is. Following the latest official documentation, I updated those functions so the code runs again. It is posted here for reference only.\n\n## Installation\n1. Install the TensorRT library:\n\n    Download page: [NVIDIA TensorRT 10.x Download](https://developer.nvidia.com/tensorrt/download/10x); pick the package that matches your system.\n    My machine runs Ubuntu 20.04, so I downloaded: [TensorRT 10.0 EA for Linux x86_64 and CUDA 12.0 to 12.4 TAR Package](https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.0.0/tars/TensorRT-10.0.0.6.Linux.x86_64-gnu.cuda-12.4.tar.gz)\n\n    ```\n    tar -zxvf TensorRT-10.0.0.6.Linux.x86_64-gnu.cuda-12.4.tar.gz\n    ```\n    Add the environment variables to .bashrc:\n    ```\n    export LD_LIBRARY_PATH=/home/wyk/TensorRT-10.0.0.6/lib:$LD_LIBRARY_PATH\n    export LIBRARY_PATH=/home/wyk/TensorRT-10.0.0.6/lib:$LIBRARY_PATH\n    ```\n    Install the wheel files:\n    ```\n    pip install onnx_graphsurgeon-0.5.0-py2.py3-none-any.whl\n    pip install tensorrt-10.0.0b6-cp310-none-linux_x86_64.whl # pick the wheel matching your Python version; mine is 3.10.0\n    ```\n\n2. Clone the repository:\n    ```\n    git clone https://github.com/kanglang123/tensorRT-deploy\n    ```\n## Usage\n1. pth2onnx.py\n   \n   Downloads a pretrained model and a test image, runs inference, and exports the pth model to ONNX format.\n\n2. check_onnx.py\n\n    Loads the saved ONNX model and verifies the export by inspecting the image produced by its inference output.\n\n3. onnx2tensorrt.py\n\n    Defines a model in PyTorch and exports it to ONNX format.\n    Loads the saved ONNX model and uses TensorRT to convert it into an engine-format model.\n\n4. infer_tensorrt.py\n\n    Runs inference with the saved engine-format model.\n\n## References\n\n1. [Zhihu: Notes on Model Deployment](https://zhuanlan.zhihu.com/p/547624036)\n\n2. [NVIDIA TensorRT API Documentation](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/pyCore.html)\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkanglang123%2Ftensorrt-deploy","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fkanglang123%2Ftensorrt-deploy","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkanglang123%2Ftensorrt-deploy/lists"}