{"id":31365758,"url":"https://github.com/developer0hye/yolov8-tensorrt-inference-docker-image","last_synced_at":"2025-10-14T19:09:40.417Z","repository":{"id":219967346,"uuid":"749383324","full_name":"developer0hye/yolov8-tensorrt-inference-docker-image","owner":"developer0hye","description":"Simply run your YOLOv8 faster by using TensorRT on a docker container","archived":false,"fork":false,"pushed_at":"2024-01-30T15:30:40.000Z","size":10402,"stargazers_count":4,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-09-27T09:59:24.610Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/developer0hye.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2024-01-28T12:32:54.000Z","updated_at":"2025-05-12T01:37:41.000Z","dependencies_parsed_at":"2024-01-30T17:09:54.979Z","dependency_job_id":null,"html_url":"https://github.com/developer0hye/yolov8-tensorrt-inference-docker-image","commit_stats":null,"previous_names":["developer0hye/yolov8-tensorrt-inference-docker-image"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/developer0hye/yolov8-tensorrt-inference-docker-image","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer0hye%2Fyolov8-tensorrt-inference-docker-image","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer0hye%2Fyolov8-tensorrt-inference-docker-image/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer0
hye%2Fyolov8-tensorrt-inference-docker-image/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer0hye%2Fyolov8-tensorrt-inference-docker-image/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/developer0hye","download_url":"https://codeload.github.com/developer0hye/yolov8-tensorrt-inference-docker-image/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/developer0hye%2Fyolov8-tensorrt-inference-docker-image/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279020646,"owners_count":26086895,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-14T02:00:06.444Z","response_time":60,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-09-27T09:59:18.371Z","updated_at":"2025-10-14T19:09:40.412Z","avatar_url":"https://github.com/developer0hye.png","language":"Python","readme":"# yolov8-tensorrt-python-docker-image\nRun your YOLOv8 model faster by using TensorRT in a Docker container\n\n# Setup\n\n## 1. Build an image and run a container\n```bash\ndocker build . -t yolov8trt:1.0\ndocker run --name yolov8trt-container -it --runtime=nvidia --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 yolov8trt:1.0\n```\n\nIf you want to access your dataset inside the container, mount a volume using the `-v` flag.\n\n## 2. 
Upload your model and data to the container\n```bash\ndocker cp {your_model.pt} yolov8trt-container:/app/\ndocker cp {your_data} yolov8trt-container:/app/\n```\n\n## 3. Export your model with TensorRT\n```bash\nyolo export model={your model}.pt format=engine device=0 half=True dynamic=True\n```\n\nEnter `yolov8x` in {your model} for a simple demo\n\n## 4. Inference with TensorRT engine\n```bash\npython app.py --source {dataset path} --model {model path}\n```\n\nEnter `sample.mp4` in {dataset path} for a simple demo\n\nEnter `yolov8x.engine` in {model path} for a simple demo\n\n### Output Example\n\n{the name of input video or image}.txt\n```\nframe_idx (starts from 1), class index, confidence score, top left x, top left y, bottom right x, bottom right y\n...\n```\n![output_example](output_example.png)\n\n## Experiment\n\n### Hardware\n\n| Component  | Specific Model |\n| ------------- | ------------- |\n| CPU  |  Intel i5-10400 |\n| GPU  |  NVIDIA RTX 3070 |\n| RAM  |  32GB |\n\n### Total Processing Time\n\n- Input: sample.mp4\n    - Resolution: 1080 x 1920\n    - Frame Rate: 30 FPS\n    - Total Frames: 682\n    - Duration: 22.73 sec\n\n| Model  | Inference Engine |  Total Processing Time (sec) |\n| ------------- | ------------- | ------------- |\n| YOLOv8x  | PyTorch CUDA | 26.46 |\n| YOLOv8x  | TensorRT | 20.35 |\n| YOLOv8l  | PyTorch CUDA | 26.42 |\n| YOLOv8l  | TensorRT | 19.17 |\n| YOLOv8m  | PyTorch CUDA | 24.88 |\n| YOLOv8m  | TensorRT | 20.18 |\n| YOLOv8s  | PyTorch CUDA | 23.53 |\n| YOLOv8s  | TensorRT | 17.58 |\n| YOLOv8n  | PyTorch CUDA | 23.92 |\n| YOLOv8n  | TensorRT | 19.88 
|\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeveloper0hye%2Fyolov8-tensorrt-inference-docker-image","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeveloper0hye%2Fyolov8-tensorrt-inference-docker-image","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeveloper0hye%2Fyolov8-tensorrt-inference-docker-image/lists"}