# uestc_vhm
<div align="center">

  [![Cuda](https://img.shields.io/badge/CUDA-11.8-%2376B900?logo=nvidia)](https://developer.nvidia.com/cuda-toolkit-archive)
  [![](https://img.shields.io/badge/TensorRT-8.6.1.6-%2376B900.svg?style=flat&logo=tensorrt)](https://developer.nvidia.com/nvidia-tensorrt-8x-download)
  [![](https://img.shields.io/badge/windows-11-blue.svg?style=flat&logo=windows)](https://www.microsoft.com/)
  [![](https://img.shields.io/badge/ubuntu-22.04-orange.svg?style=flat&logo=ubuntu)](https://releases.ubuntu.com/22.04/)
  [![GitHub stars](https://img.shields.io/github/stars/dancing-ui/uestc_vhm.svg?style=flat-square&logo=github&label=Stars&logoColor=white)](https://github.com/dancing-ui/uestc_vhm)
<br>
</div>

# Project Overview
- The goal of this project is to quickly deploy object-detection, feature-extraction, and related models and to stream their output to a front-end page for display.
- Development follows the principle of high cohesion and low coupling, so that detection, tracking, streaming, and other algorithms and components can be extended with little effort.

# Demo
<div align=center>
	<img src="doc/image/app_test.gif"/>
</div>

# Technical Approach
- The project uses YOLO-family models for object detection, then fast-reid to extract a feature embedding for each detection box, and finally the DeepSORT algorithm for object tracking.
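The detection, feature-extraction, and tracking flow above can be sketched in plain C++ with each stage stubbed out. All names below (`Detect`, `Embed`, `Tracker`) are hypothetical stand-ins, not this project's actual interfaces; in the real pipeline each stage runs a TensorRT engine.

```cpp
#include <vector>

// Hypothetical stand-ins for the real modules (YOLOv8 via TensorRT,
// fast-reid via TensorRT, DeepSORT). Each stage is stubbed so the
// data flow is visible without any GPU dependency.
struct Frame { int width = 0, height = 0; /* pixel data omitted */ };
struct Box { float x, y, w, h, score; };
struct Track { int id; Box box; };

std::vector<Box> Detect(const Frame &) {               // YOLOv8 stand-in
    return {{10, 20, 50, 100, 0.9f}, {200, 40, 60, 120, 0.8f}};
}
std::vector<float> Embed(const Frame &, const Box &) { // fast-reid stand-in
    return std::vector<float>(512, 0.1f);              // 512-d appearance vector
}
struct Tracker {                                       // DeepSORT stand-in
    int next_id = 0;
    std::vector<Track> Update(const std::vector<Box> &boxes,
                              const std::vector<std::vector<float>> &) {
        std::vector<Track> tracks;
        for (const Box &b : boxes) tracks.push_back({next_id++, b});
        return tracks;
    }
};

// One frame of the pipeline: detect boxes, attach an appearance
// embedding to each box, then let the tracker assign stable IDs.
std::vector<Track> ProcessFrame(Tracker &tracker, const Frame &frame) {
    std::vector<Box> boxes = Detect(frame);
    std::vector<std::vector<float>> embs;
    for (const Box &b : boxes) embs.push_back(Embed(frame, b));
    return tracker.Update(boxes, embs);
}
```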
# Supported Features
- Streaming: RTMP push/pull for cameras and video files; RTSP support for live monitoring is planned.
- Feature extraction: Fast-Reid extracts a feature embedding for each detection box.
- Object detection: detection with the YOLOv8n model.
- Object tracking: tracking with the DeepSORT algorithm; ByteTrack support is planned.
## Person Re-Identification Service
![person reid service](doc/image/person_reid_service.png)
After a pedestrian is detected:
- First, search the database for a sufficiently similar person.
  - This step uses a vector database: if a match is found, return its ID; otherwise the ID is invalid.
- Then send an HTTP message to the system backend with the following JSON body:
  ```json
  {
    "PersonId": 1,
    "Picture": "xxx",
    "CameraIP": "192.169.0.0",
    "TimeStamp": "xxx"
  }
  ```
  where `PersonId` is the pedestrian ID, `Picture` is the base64-encoded pedestrian crop, `CameraIP` is the camera address, and `TimeStamp` is the timestamp.
- Decide whether the same pedestrian reappears within the next few seconds of the video, subject to two rules:
  - While a person remains in view, the HTTP message is sent only once.
  - A person with a high similarity score keeps the same ID no matter how many times they leave and re-enter the frame.
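The ID-stability rules above come down to a nearest-neighbour search over stored embeddings. Below is a minimal illustration of that decision using cosine similarity and a linear scan; the threshold value is made up, and the real project delegates this search to Faiss rather than scanning by hand.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Cosine similarity between two embeddings of the same dimension.
float CosineSimilarity(const std::vector<float> &a, const std::vector<float> &b) {
    float dot = 0, na = 0, nb = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12f);
}

constexpr int kInvalidId = -1;
constexpr float kMatchThreshold = 0.6f;  // hypothetical threshold

// Return the gallery index of the most similar stored embedding, or
// kInvalidId when nothing clears the threshold. A vector database
// like Faiss replaces this linear scan with an indexed search.
int LookupPerson(const std::vector<std::vector<float>> &gallery,
                 const std::vector<float> &query) {
    int best = kInvalidId;
    float best_sim = kMatchThreshold;
    for (std::size_t i = 0; i < gallery.size(); ++i) {
        float sim = CosineSimilarity(gallery[i], query);
        if (sim > best_sim) { best_sim = sim; best = static_cast<int>(i); }
    }
    return best;
}
```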
# Development Environment
- CPU: 13th Gen Intel(R) Core(TM) i5-13500HX, 2.50 GHz
- GPU: NVIDIA GeForce RTX 4060 Laptop GPU
- RAM: 32 GB
- OS platform: WSL 2.0 (Ubuntu 22.04.3 LTS)
- Docker development environment: Ubuntu 20.04
- Architecture: x86_64
- Editor: VSCode
- Compiler: Clang 18.1.8
- TensorRT: 8.6.1.6
- FFmpeg: 4.2.7
# Quick Start
## Prerequisites
1. Install the NVIDIA driver ([installation guide](https://io.web3miner.io/worker-guides/install_with_windows/windows-an-zhuang-nvidia-qu-dong-cheng-xu))
2. Install WSL 2.0 and Ubuntu 22.04.3 LTS inside it ([installation guide](https://learn.microsoft.com/zh-cn/windows/wsl/install-manual))
3. Install Docker Desktop
4. Verify the WSL installation with the following command in PowerShell; `Ubuntu-22.04-UESTC` in the screenshot below is a name I chose myself
<div align=center>
	<img src="doc/image/wsl.png"/>
</div>

## Environment Checks
- Verify the Docker environment.
  - This project manages its development environment with a Dockerfile; the image already contains all required dependencies (TensorRT, CUDA, cuDNN, OpenCV, FFmpeg, etc.). In principle no extra packages are needed: just install Docker Desktop and build the image from the Dockerfile in this repository.
  - Because WSL uses Docker Desktop, there is no need to install Docker inside Ubuntu 22.04.3 LTS; just make sure Docker Desktop is running in the background.
  - Inside the installed Ubuntu 22.04.3 LTS, use the commands below to check whether Docker Desktop can access the GPU. If it cannot, try updating Docker Desktop to v4.34.2 or later, consult the [troubleshooting document](https://arvas2ztsq.feishu.cn/docx/KdMLdL3oyozvksxS7jgcOOi2nUs?from=from_copylink), or contact me via the details in the Postscript and I will do my best to help.
```bash
# Check the NVIDIA driver; the CUDA Version shown in the top-right corner must be >= 11.3
nvidia-smi
# Check whether docker can access the GPU
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
```
  - Normal output includes your GPU model, e.g. RTX 4060 Laptop GPU in the screenshot below.
<div align=center>
	<img src="doc/image/gpu_test.png"/>
</div>

## Getting the Code
- Git references:
  - Installation and configuration: https://zhuanlan.zhihu.com/p/512099806
  - Common commands: https://zhuanlan.zhihu.com/p/678347984
```bash
# https
git clone https://github.com/dancing-ui/uestc_vhm.git
# or ssh
git clone git@github.com:dancing-ui/uestc_vhm.git
```
## Environment Setup
- Build the image with the commands below. The image takes roughly 15 GB of disk space, so make sure enough space is available.
```bash
# Set up the docker development environment (run the build from the directory containing the Dockerfile)
cd /home/xy/work/uestc_vhm/docker
docker build -t uestc_vhm:v0 .
docker run --gpus all -it --name uestc_vhm -v /home/xy/work/uestc_vhm:/workspace -d uestc_vhm:v0
```
- Open the container: use VSCode's Docker extension to develop inside the container; see [this guide](https://zhuanlan.zhihu.com/p/715594507) for the VSCode setup (only the extension-installation part is needed).
- zsh setup: the first time you enter the container, the zsh configuration screen appears; there is no cause for alarm, just follow the on-screen prompts.

- Verify that TensorRT works:
```bash
cd /usr/local/TensorRT-8.6.1.6/samples/sampleOnnxMNIST
make -j20
cd ../../bin
./sample_onnx_mnist
```
  - A successful run prints `PASSED TensorRT.sample_onnx_mnist`, as in the screenshot below.
<div align=center>
	<img src="doc/image/tensorrt_test.png"/>
</div>
## Model Configuration
- All models used in this project were built locally from their official repositories, and the repository uses git lfs to manage model files, test videos, and third-party library files. So initially there is no need to download models or test videos from anywhere else; use the ones shipped in the repository. (If you change the TensorRT version, the models must be regenerated; see the [model generation and shared-library build guide](model_file/how_to_generate_model.md).)
  - Model files live in the model_file directory; test videos live in the test_video directory.
- Streaming and model settings can be customized by editing the paths in the src/etc/uestc_vhm_cfg.json configuration file.
## Building the Third-Party Libraries
- The project uses the fast-reid and cnpy third-party libraries. If the prebuilt library files in the repository do not work, rebuild them inside the container and replace the originals.
  - fast-reid and cnpy local build guide: [link](https://github.com/JDAI-CV/fast-reid/blob/master/projects/FastRT/README.md)
## Building the Project
- Two build options are provided: the .vscode configuration files for VSCode, or the build script.
```bash
# Build with the script
cd /workspace
chmod +x ./local_build.sh
./local_build.sh
```
## Running the Program
- Start the program with:
```bash
cd /workspace/build/src
./uestc_vhm --config=/workspace/src/etc/uestc_vhm_cfg.json
```
## Exiting the Program
There are two ways to exit:
1. Press Ctrl+C in the uestc_vhm terminal.
2. Run `kill -2 <uestc_vhm_pid>`, for example:
```bash
ps -e | grep uestc_vhm | awk '{print $1}' | xargs kill -2
```
## Runtime Resource Usage
<div align=center>
	<img src="doc/image/runtime.png"/>
</div>

# References
- [Theoretical background](https://blog.csdn.net/LuohenYJ/article/details/122491044)
- This project builds on the following open-source projects:
  - [fast-reid](https://github.com/JDAI-CV/fast-reid)
  - [deepsort](https://github.com/linghu8812/yolov5_fastreid_deepsort_tensorrt)
  - [yolo](https://github.com/FeiYull/TensorRT-Alpha)
- [Error log (with a detailed development walkthrough)](https://arvas2ztsq.feishu.cn/drive/folder/ErYgf1ynRl0ZsNdICxzc45eVnWe?from=from_copylink)
# Postscript
- Feel free to contact me with any questions; issues and PRs are welcome.
- Contact:
  - Email: fengx02@163.com
  - QQ: 2779856074
- If this project helps you, please star it in the top-right corner. Thank you for your support :)
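Both exit paths above (Ctrl+C and `kill -2`) deliver SIGINT, signal number 2, to the process. The usual pattern behind such a graceful shutdown is a handler that only sets a flag, which the main loop polls before cleaning up. The sketch below is illustrative and not this project's actual code; the `std::raise` call simply simulates pressing Ctrl+C so the demo loop terminates.

```cpp
#include <atomic>
#include <csignal>

// Async-signal-safe flag: the handler only flips it; the main loop
// notices on the next iteration and exits cleanly, so streams, CUDA
// contexts, and other resources can be released in order.
static std::atomic<bool> g_stop{false};

extern "C" void HandleSigint(int) { g_stop.store(true); }

int run() {
    std::signal(SIGINT, HandleSigint);        // catches Ctrl+C and `kill -2`
    int frames = 0;
    while (!g_stop.load()) {
        ++frames;                             // process one frame per iteration
        if (frames >= 3) std::raise(SIGINT);  // simulate Ctrl+C for this demo
    }
    return frames;                            // real cleanup would happen here
}
```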