{"id":15029887,"url":"https://github.com/sharpiless/yolov5-deepsort-inference","last_synced_at":"2025-04-14T19:47:00.870Z","repository":{"id":37652906,"uuid":"325680583","full_name":"Sharpiless/Yolov5-deepsort-inference","owner":"Sharpiless","description":"Yolov5 deepsort inference: vehicle and pedestrian tracking and counting with YOLOv5 + Deepsort. The code is wrapped in a Detector class, making it easy to embed in your own projects","archived":false,"fork":false,"pushed_at":"2024-12-26T08:04:15.000Z","size":81868,"stargazers_count":1330,"open_issues_count":22,"forks_count":285,"subscribers_count":11,"default_branch":"master","last_synced_at":"2025-04-07T17:01:49.169Z","etag":null,
"topics":["deepsort","mot","object-detection","tracking","yolov5","yolov5-deepsort-inference"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Sharpiless.png",
"metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},
"created_at":"2020-12-31T01:00:28.000Z","updated_at":"2025-04-07T14:31:57.000Z","dependencies_parsed_at":"2025-01-05T18:10:27.344Z","dependency_job_id":null,"html_url":"https://github.com/Sharpiless/Yolov5-deepsort-inference","commit_stats":{"total_commits":15,"total_committers":2,"mean_commits":7.5,"dds":"0.19999999999999996","last_synced_commit":"dca6f6a662a6141ff6614de2cb3d33e8b8105c26"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,
"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FYolov5-deepsort-inference","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FYolov5-deepsort-inference/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FYolov5-deepsort-inference/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Sharpiless%2FYolov5-deepsort-inference/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Sharpiless","download_url":"https://codeload.github.com/Sharpiless/Yolov5-deepsort-inference/tar.gz/refs/heads/master",
"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248950987,"owners_count":21188377,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},
"keywords":["deepsort","mot","object-detection","tracking","yolov5","yolov5-deepsort-inference"],"created_at":"2024-09-24T20:11:55.006Z","updated_at":"2025-04-14T19:47:00.847Z","avatar_url":"https://github.com/Sharpiless.png","language":"Python",
"readme":"\n# **YOLOv5 + DeepSort for Object Tracking and Counting**  \n🚗🚶‍♂️ **Real-time vehicle and pedestrian tracking and counting with YOLOv5 and DeepSort**\n\n[![GitHub stars](https://img.shields.io/github/stars/Sharpiless/Yolov5-deepsort-inference?style=social)](https://github.com/Sharpiless/Yolov5-deepsort-inference)  [![GitHub forks](https://img.shields.io/github/forks/Sharpiless/Yolov5-deepsort-inference?style=social)](https://github.com/Sharpiless/Yolov5-deepsort-inference)  [![License](https://img.shields.io/github/license/Sharpiless/Yolov5-deepsort-inference)](https://github.com/Sharpiless/Yolov5-deepsort-inference/blob/main/LICENSE)\n\nLatest version: [https://github.com/Sharpiless/YOLOv11-DeepSort](https://github.com/Sharpiless/YOLOv11-DeepSort)\n\n---\n\n## **📌 Project Overview**\n\nThis project combines **YOLOv5** with **DeepSort** to track and count objects in real time. It provides an encapsulated `Detector` class that makes the functionality easy to embed in your own projects.\n\n🔗 **Full blog post (Chinese)**: [【小白CV教程】YOLOv5+Deepsort实现车辆行人的检测、追踪和计数](https://blog.csdn.net/weixin_44936889/article/details/112002152)\n\n---\n\n## **🚀 Key Features**\n\n- **Object tracking**: track vehicles and pedestrians in real time.\n- **Counting**: count the vehicles or pedestrians in a video stream.\n- **Encapsulated interface**: the `Detector` class wraps the detection and tracking logic for easy integration.\n- **Customizable**: train your own YOLOv5 model and plug it into the framework seamlessly.\n\n---\n\n## **🔧 Usage**\n\n### **Install dependencies**\n```bash\npip install -r requirements.txt\n```\n\nMake sure every dependency listed in `requirements.txt` is installed.\n### **Run the demo**\n```bash\npython demo.py\n```\n---\n\n## **🛠️ Development Notes**\n\n### **YOLOv5 detector**\n\n```python\nclass Detector(baseDet):\n\n    def __init__(self):\n        super(Detector, self).__init__()\n        self.init_model()\n        self.build_config()\n\n    def init_model(self):\n        # Load the YOLOv5 weights on GPU 0 if available, otherwise on CPU\n        self.weights = 'weights/yolov5m.pt'\n        self.device = '0' if torch.cuda.is_available() else 'cpu'\n        self.device = select_device(self.device)\n        model = attempt_load(self.weights, map_location=self.device)\n        model.to(self.device).eval()\n        model.half()  # FP16 inference\n        self.m = model\n        self.names = model.module.names if hasattr(\n            model, 'module') else model.names\n\n    def preprocess(self, img):\n        img0 = img.copy()\n        img = letterbox(img, new_shape=self.img_size)[0]\n        img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, HWC to CHW\n        img = np.ascontiguousarray(img)\n        img = torch.from_numpy(img).to(self.device)\n        img = img.half()  # half precision\n        img /= 255.0  # normalize pixel values to [0, 1]\n        if img.ndimension() == 3:\n            img = img.unsqueeze(0)  # add batch dimension\n\n        return img0, img\n\n    def detect(self, im):\n        im0, img = self.preprocess(im)\n\n        pred = self.m(img, augment=False)[0]\n        pred = pred.float()\n        pred = non_max_suppression(pred, self.threshold, 0.4)\n\n        pred_boxes = []\n        for det in pred:\n            if det is not None and len(det):\n                # Rescale boxes from the letterboxed size back to the original image\n                det[:, :4] = scale_coords(\n                    img.shape[2:], det[:, :4], im0.shape).round()\n\n                for *x, conf, cls_id in det:\n                    lbl = self.names[int(cls_id)]\n                    if lbl not in ['person', 'car', 'truck']:\n                        continue\n                    x1, y1 = int(x[0]), int(x[1])\n                    x2, y2 = int(x[2]), int(x[3])\n                    pred_boxes.append(\n                        (x1, y1, x2, y2, lbl, conf))\n\n        return im, pred_boxes\n```\n- Call `self.detect()` to get back the image and the predicted boxes.\n### **DeepSort tracker**\n\n```python\ndeepsort = DeepSort(cfg.DEEPSORT.REID_CKPT,\n                    max_dist=cfg.DEEPSORT.MAX_DIST, min_confidence=cfg.DEEPSORT.MIN_CONFIDENCE,\n                    nms_max_overlap=cfg.DEEPSORT.NMS_MAX_OVERLAP, max_iou_distance=cfg.DEEPSORT.MAX_IOU_DISTANCE,\n                    max_age=cfg.DEEPSORT.MAX_AGE, n_init=cfg.DEEPSORT.N_INIT, nn_budget=cfg.DEEPSORT.NN_BUDGET,\n                    use_cuda=True)\n```\n- Call `self.update()` to update the tracking results.\n---\n\n## **📊 Training Your Own Model**\n\nTo train a custom YOLOv5 model, see this tutorial (Chinese):  \n[【小白CV】手把手教你用YOLOv5训练自己的数据集（从Windows环境配置到模型部署）](https://blog.csdn.net/weixin_44936889/article/details/110661862)\n\nAfter training, place the model weights file in the `weights` folder.\n\n---\n\n## **📦 API Usage**\n\n### **Initialize the detector**\n```python\nfrom AIDetector_pytorch import Detector\n\ndet = Detector()\n```\n\n### **Call the detection interface**\n```python\nfunc_status = {}\nfunc_status['headpose'] = None\n\nresult = det.feedCap(im, func_status)\n```\n\n- `im`: input BGR image.\n- `result['frame']`: visualization of the detection results.\n\n---\n\n## **✨ Visualization**\n\n![Demo](https://img-blog.csdnimg.cn/20201231090541223.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L3dlaXhpbl80NDkzNjg4OQ==,size_16,color_FFFFFF,t_70)\n\n---\n\n## **📚 Contact the Author**\n  - Bilibili: [https://space.bilibili.com/470550823](https://space.bilibili.com/470550823)  \n  - CSDN: [https://blog.csdn.net/weixin_44936889](https://blog.csdn.net/weixin_44936889)  \n  - AI Studio: [https://aistudio.baidu.com/aistudio/personalcenter/thirdview/67156](https://aistudio.baidu.com/aistudio/personalcenter/thirdview/67156)  \n  - GitHub: [https://github.com/Sharpiless](https://github.com/Sharpiless)  \n\n---\n\n\u003cpicture\u003e\n  \u003csource\n    media=\"(prefers-color-scheme: dark)\"\n    srcset=\"\n      https://api.star-history.com/svg?repos=Sharpiless/Yolov5-deepsort-inference\u0026type=Date\u0026theme=dark\n    \"\n  /\u003e\n  \u003csource\n    media=\"(prefers-color-scheme: light)\"\n    srcset=\"\n      https://api.star-history.com/svg?repos=Sharpiless/Yolov5-deepsort-inference\u0026type=Date\n    \"\n  /\u003e\n  \u003cimg\n    alt=\"Star History Chart\"\n    src=\"https://api.star-history.com/svg?repos=Sharpiless/Yolov5-deepsort-inference\u0026type=Date\"\n  /\u003e\n\u003c/picture\u003e\n\n## **💡 License**\n\nThis project is released under the **GNU General Public License v3.0**.  \nThe object-detection component is based on [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsharpiless%2Fyolov5-deepsort-inference","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsharpiless%2Fyolov5-deepsort-inference","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsharpiless%2Fyolov5-deepsort-inference/lists"}