# yolop.lite.ai.toolkit

Some examples of running YOLOP with the Lite.AI.ToolKit 🚀🚀🌟 C++ toolbox (https://github.com/DefTruth/lite.ai.toolkit), including ONNXRuntime C++, MNN, NCNN, and TNN versions.

<div align='center'>
  <img src='resources/yolop1.png' height="100px" width="160px">
  <img src='resources/yolop1.gif' height="100px" width="160px">
  <img src='resources/yolop2.png' height="100px" width="160px">
  <img src='resources/yolop2.gif' height="100px" width="160px">

</div>

If you find this useful, ❤️ please consider giving it a ⭐️🌟~ 🙃🤪🍀

## 2. C++ Source Code

The YOLOP C++ sources come in four versions, ONNXRuntime, MNN, NCNN, and TNN, all of which can be found in the [lite.ai.toolkit](https://github.com/DefTruth/lite.ai.toolkit) toolbox. This project shows how to use YOLOP directly for panoptic driving perception (detection plus segmentation) on top of [lite.ai.toolkit](https://github.com/DefTruth/lite.ai.toolkit). Note that this project links against [liblite.ai.toolkit.v0.1.0.dylib](https://github.com/DefTruth/yolop.lite.ai.toolkit/blob/main/lite.ai.toolkit/lib), compiled on macOS. macOS users can directly use the bundled *liblite.ai.toolkit.v0.1.0* dynamic library and the other dependency libraries shipped with this project; users on other systems need to download the sources from [lite.ai.toolkit](https://github.com/DefTruth/lite.ai.toolkit) and build the library themselves. The [lite.ai.toolkit](https://github.com/DefTruth/lite.ai.toolkit) C++ toolbox currently contains 70+ popular open-source models.

* [yolop.cpp](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ort/cv/yolop.cpp)
* [yolop.h](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ort/cv/yolop.h)
* [mnn_yolop.cpp](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/mnn/cv/mnn_yolop.cpp)
* [mnn_yolop.h](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/mnn/cv/mnn_yolop.h)
* [tnn_yolop.cpp](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/tnn/cv/tnn_yolop.cpp)
* [tnn_yolop.h](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/tnn/cv/tnn_yolop.h)
* [ncnn_yolop.cpp](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ncnn/cv/ncnn_yolop.cpp)
* [ncnn_yolop.h](https://github.com/DefTruth/lite.ai.toolkit/blob/main/lite/ncnn/cv/ncnn_yolop.h)

The ONNXRuntime C++, MNN, NCNN, and TNN inference implementations have all been tested and work; feel free to use them~
## 3. Model Files

### 3.1 ONNX Model Files
They can be downloaded from the link I provide ([Baidu Drive](https://pan.baidu.com/s/1elUGcx7CZkkjEoYhTMwTRQ) code: 8gin).

|                 Class                 |      Pretrained ONNX Files      |              Rename or Converted From (Repo)              | Size  |
| :-----------------------------------: | :-----------------------------: | :-------------------------------------------------------: | :---: |
| *lite::cv::detection::YOLOP* |    yolop-1280-1280.onnx     |       [YOLOP](https://github.com/hustvl/YOLOP)       | 30Mb  |
| *lite::cv::detection::YOLOP* |    yolop-640-640.onnx     |       [YOLOP](https://github.com/hustvl/YOLOP)       | 30Mb  |
| *lite::cv::detection::YOLOP* |    yolop-320-320.onnx     |       [YOLOP](https://github.com/hustvl/YOLOP)       | 30Mb  |

### 3.2 MNN Model Files
MNN model files can be downloaded from ([Baidu Drive](https://pan.baidu.com/s/1KyO-bCYUv6qPq2M8BH_Okg) code: 9v63).

|                 Class                 |      Pretrained MNN Files      |              Rename or Converted From (Repo)              | Size  |
| :-----------------------------------: | :-----------------------------: | :-------------------------------------------------------: | :---: |
|     *lite::mnn::cv::detection::YOLOP*      |       yolop-320-320.mnn           |  [YOLOP](https://github.com/hustvl/YOLOP)   | 30Mb  |
|     *lite::mnn::cv::detection::YOLOP*      |       yolop-640-640.mnn         |  [YOLOP](https://github.com/hustvl/YOLOP)   | 30Mb  |
|     *lite::mnn::cv::detection::YOLOP*      |       yolop-1280-1280.mnn         |  [YOLOP](https://github.com/hustvl/YOLOP)   | 30Mb |

### 3.3 TNN Model Files
TNN model files can be downloaded from ([Baidu Drive](https://pan.baidu.com/s/1lvM2YKyUbEc5HKVtqITpcw) code: 6o6k).

|                 Class                 |      Pretrained TNN Files      |              Rename or Converted From (Repo)              | Size  |
| :-----------------------------------: | :-----------------------------: | :-------------------------------------------------------: | :---: |
|     *lite::tnn::cv::detection::YOLOP*      |          yolop-320-320.opt.tnnproto&tnnmodel           |  [YOLOP](https://github.com/hustvl/YOLOP)   | 30Mb |
|     *lite::tnn::cv::detection::YOLOP*      |          yolop-640-640.opt.tnnproto&tnnmodel           |  [YOLOP](https://github.com/hustvl/YOLOP)   | 30Mb |
|     *lite::tnn::cv::detection::YOLOP*      |          yolop-1280-1280.opt.tnnproto&tnnmodel           |  [YOLOP](https://github.com/hustvl/YOLOP)   | 30Mb  |

### 3.4 NCNN Model Files
NCNN model files can be downloaded from ([Baidu Drive](https://pan.baidu.com/s/1hlnqyNsFbMseGFWscgVhgQ) code: sc7f).

|                 Class                 |      Pretrained NCNN Files      |              Rename or Converted From (Repo)              | Size  |
| :-----------------------------------: | :-----------------------------: | :-------------------------------------------------------: | :---: |
|     *lite::ncnn::cv::detection::YOLOP*      |          yolop-640-640.opt.param&bin           |  [YOLOP](https://github.com/hustvl/YOLOP)   | 30Mb |
## 4. Interface Documentation

In [lite.ai.toolkit](https://github.com/DefTruth/lite.ai.toolkit), YOLOP is implemented by the following classes:

```c++
class LITE_EXPORTS lite::cv::detection::YOLOP;
class LITE_EXPORTS lite::mnn::cv::detection::YOLOP;
class LITE_EXPORTS lite::tnn::cv::detection::YOLOP;
class LITE_EXPORTS lite::ncnn::cv::detection::YOLOP;
```

Each class currently exposes a single public method, `detect`, which performs detection and segmentation.

```c++
public:
    void detect(const cv::Mat &mat,
                std::vector<types::Boxf> &detected_boxes,
                types::SegmentContent &da_seg_content,
                types::SegmentContent &ll_seg_content,
                float score_threshold = 0.25f, float iou_threshold = 0.45f,
                unsigned int topk = 100, unsigned int nms_type = NMS::OFFSET);
```

Parameters of `detect`:
* mat: the input image as a cv::Mat in BGR format.
* detected_boxes: vector of Boxf holding the detected vehicle boxes; Boxf contains members such as x1, y1, x2, y2, label, and score.
* da_seg_content: SegmentContent holding the predicted drivable-area segmentation.
* ll_seg_content: SegmentContent holding the predicted lane-line segmentation.
* score_threshold: classification (quality) score threshold, default 0.25; boxes scoring below it are discarded.
* iou_threshold: IoU threshold used in NMS, default 0.45.
* topk: default 100; only the top-k detections are kept.
* nms_type: the NMS variant; by default, NMS is performed per class.
## 5. Usage Examples

### 5.1 ONNXRuntime Version
```c++
#include "lite/lite.h"

static void test_default()
{
    std::string onnx_path = "..//hub/onnx/cv/yolop-640-640.onnx";
    std::string test_img_path = "../resources/1.jpg";
    std::string save_det_path = "../logs/1_det.jpg";
    std::string save_da_path = "../logs/1_da.jpg";
    std::string save_ll_path = "../logs/1_ll.jpg";
    std::string save_merge_path = "../logs/1_merge.jpg";

    auto *yolop = new lite::cv::detection::YOLOP(onnx_path, 16); // 16 threads

    lite::types::SegmentContent da_seg_content;
    lite::types::SegmentContent ll_seg_content;
    std::vector<lite::types::Boxf> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolop->detect(img_bgr, detected_boxes, da_seg_content, ll_seg_content);

    if (!detected_boxes.empty() && da_seg_content.flag && ll_seg_content.flag)
    {
        // boxes.
        cv::Mat img_det = img_bgr.clone();
        lite::utils::draw_boxes_inplace(img_det, detected_boxes);
        cv::imwrite(save_det_path, img_det);
        std::cout << "Saved " << save_det_path << " done!" << "\n";
        // da && ll seg
        cv::imwrite(save_da_path, da_seg_content.class_mat);
        cv::imwrite(save_ll_path, ll_seg_content.class_mat);
        std::cout << "Saved " << save_da_path << " done!" << "\n";
        std::cout << "Saved " << save_ll_path << " done!" << "\n";
        // merge
        cv::Mat img_merge = img_bgr.clone();
        cv::Mat color_seg = da_seg_content.color_mat + ll_seg_content.color_mat;

        cv::addWeighted(img_merge, 0.5, color_seg, 0.5, 0., img_merge);
        lite::utils::draw_boxes_inplace(img_merge, detected_boxes);
        cv::imwrite(save_merge_path, img_merge);
        std::cout << "Saved " << save_merge_path << " done!" << "\n";

        // label
        if (!da_seg_content.names_map.empty() && !ll_seg_content.names_map.empty())
        {
            for (auto it = da_seg_content.names_map.begin(); it != da_seg_content.names_map.end(); ++it)
            {
                std::cout << "Default Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }

            for (auto it = ll_seg_content.names_map.begin(); it != ll_seg_content.names_map.end(); ++it)
            {
                std::cout << "Default Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }
        }
    }

    delete yolop;
}
```

### 5.2 MNN Version
```c++
#include "lite/lite.h"

static void test_mnn()
{
#ifdef ENABLE_MNN
    std::string mnn_path = "../hub/mnn/cv/yolop-640-640.mnn";
    std::string test_img_path = "../resources/1.jpg";
    std::string save_det_path = "../logs/1_det_mnn.jpg";
    std::string save_da_path = "../logs/1_da_mnn.jpg";
    std::string save_ll_path = "../logs/1_ll_mnn.jpg";
    std::string save_merge_path = "../logs/1_merge_mnn.jpg";

    auto *yolop = new lite::mnn::cv::detection::YOLOP(mnn_path, 16); // 16 threads

    lite::types::SegmentContent da_seg_content;
    lite::types::SegmentContent ll_seg_content;
    std::vector<lite::types::Boxf> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolop->detect(img_bgr, detected_boxes, da_seg_content, ll_seg_content);

    if (!detected_boxes.empty() && da_seg_content.flag && ll_seg_content.flag)
    {
        // boxes.
        cv::Mat img_det = img_bgr.clone();
        lite::utils::draw_boxes_inplace(img_det, detected_boxes);
        cv::imwrite(save_det_path, img_det);
        std::cout << "Saved " << save_det_path << " done!" << "\n";
        // da && ll seg
        cv::imwrite(save_da_path, da_seg_content.class_mat);
        cv::imwrite(save_ll_path, ll_seg_content.class_mat);
        std::cout << "Saved " << save_da_path << " done!" << "\n";
        std::cout << "Saved " << save_ll_path << " done!" << "\n";
        // merge
        cv::Mat img_merge = img_bgr.clone();
        cv::Mat color_seg = da_seg_content.color_mat + ll_seg_content.color_mat;

        cv::addWeighted(img_merge, 0.5, color_seg, 0.5, 0., img_merge);
        lite::utils::draw_boxes_inplace(img_merge, detected_boxes);
        cv::imwrite(save_merge_path, img_merge);
        std::cout << "Saved " << save_merge_path << " done!" << "\n";

        // label
        if (!da_seg_content.names_map.empty() && !ll_seg_content.names_map.empty())
        {
            for (auto it = da_seg_content.names_map.begin(); it != da_seg_content.names_map.end(); ++it)
            {
                std::cout << "MNN Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }

            for (auto it = ll_seg_content.names_map.begin(); it != ll_seg_content.names_map.end(); ++it)
            {
                std::cout << "MNN Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }
        }
    }

    delete yolop;
#endif
}
```

### 5.3 TNN Version
```c++
#include "lite/lite.h"

static void test_tnn()
{
#ifdef ENABLE_TNN
    std::string proto_path = "../hub/tnn/cv/yolop-640-640.opt.tnnproto";
    std::string model_path = "../hub/tnn/cv/yolop-640-640.opt.tnnmodel";
    std::string test_img_path = "../resources/1.jpg";
    std::string save_det_path = "../logs/1_det_tnn.jpg";
    std::string save_da_path = "../logs/1_da_tnn.jpg";
    std::string save_ll_path = "../logs/1_ll_tnn.jpg";
    std::string save_merge_path = "../logs/1_merge_tnn.jpg";

    auto *yolop = new lite::tnn::cv::detection::YOLOP(proto_path, model_path, 16); // 16 threads

    lite::types::SegmentContent da_seg_content;
    lite::types::SegmentContent ll_seg_content;
    std::vector<lite::types::Boxf> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolop->detect(img_bgr, detected_boxes, da_seg_content, ll_seg_content);

    if (!detected_boxes.empty() && da_seg_content.flag && ll_seg_content.flag)
    {
        // boxes.
        cv::Mat img_det = img_bgr.clone();
        lite::utils::draw_boxes_inplace(img_det, detected_boxes);
        cv::imwrite(save_det_path, img_det);
        std::cout << "Saved " << save_det_path << " done!" << "\n";
        // da && ll seg
        cv::imwrite(save_da_path, da_seg_content.class_mat);
        cv::imwrite(save_ll_path, ll_seg_content.class_mat);
        std::cout << "Saved " << save_da_path << " done!" << "\n";
        std::cout << "Saved " << save_ll_path << " done!" << "\n";
        // merge
        cv::Mat img_merge = img_bgr.clone();
        cv::Mat color_seg = da_seg_content.color_mat + ll_seg_content.color_mat;

        cv::addWeighted(img_merge, 0.5, color_seg, 0.5, 0., img_merge);
        lite::utils::draw_boxes_inplace(img_merge, detected_boxes);
        cv::imwrite(save_merge_path, img_merge);
        std::cout << "Saved " << save_merge_path << " done!" << "\n";

        // label
        if (!da_seg_content.names_map.empty() && !ll_seg_content.names_map.empty())
        {
            for (auto it = da_seg_content.names_map.begin(); it != da_seg_content.names_map.end(); ++it)
            {
                std::cout << "TNN Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }

            for (auto it = ll_seg_content.names_map.begin(); it != ll_seg_content.names_map.end(); ++it)
            {
                std::cout << "TNN Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }
        }
    }

    delete yolop;
#endif
}
```

### 5.4 NCNN Version
```c++
#include "lite/lite.h"

static void test_ncnn()
{
#ifdef ENABLE_NCNN
    std::string param_path = "../hub/ncnn/cv/yolop-640-640.opt.param";
    std::string bin_path = "../hub/ncnn/cv/yolop-640-640.opt.bin";
    std::string test_img_path = "../resources/1.jpg";
    std::string save_det_path = "../logs/1_det_ncnn.jpg";
    std::string save_da_path = "../logs/1_da_ncnn.jpg";
    std::string save_ll_path = "../logs/1_ll_ncnn.jpg";
    std::string save_merge_path = "../logs/1_merge_ncnn.jpg";

    auto *yolop = new lite::ncnn::cv::detection::YOLOP(param_path, bin_path, 16); // 16 threads

    lite::types::SegmentContent da_seg_content;
    lite::types::SegmentContent ll_seg_content;
    std::vector<lite::types::Boxf> detected_boxes;
    cv::Mat img_bgr = cv::imread(test_img_path);
    yolop->detect(img_bgr, detected_boxes, da_seg_content, ll_seg_content);

    if (!detected_boxes.empty() && da_seg_content.flag && ll_seg_content.flag)
    {
        // boxes.
        cv::Mat img_det = img_bgr.clone();
        lite::utils::draw_boxes_inplace(img_det, detected_boxes);
        cv::imwrite(save_det_path, img_det);
        std::cout << "Saved " << save_det_path << " done!" << "\n";
        // da && ll seg
        cv::imwrite(save_da_path, da_seg_content.class_mat);
        cv::imwrite(save_ll_path, ll_seg_content.class_mat);
        std::cout << "Saved " << save_da_path << " done!" << "\n";
        std::cout << "Saved " << save_ll_path << " done!" << "\n";
        // merge
        cv::Mat img_merge = img_bgr.clone();
        cv::Mat color_seg = da_seg_content.color_mat + ll_seg_content.color_mat;

        cv::addWeighted(img_merge, 0.5, color_seg, 0.5, 0., img_merge);
        lite::utils::draw_boxes_inplace(img_merge, detected_boxes);
        cv::imwrite(save_merge_path, img_merge);
        std::cout << "Saved " << save_merge_path << " done!" << "\n";

        // label
        if (!da_seg_content.names_map.empty() && !ll_seg_content.names_map.empty())
        {
            for (auto it = da_seg_content.names_map.begin(); it != da_seg_content.names_map.end(); ++it)
            {
                std::cout << "NCNN Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }

            for (auto it = ll_seg_content.names_map.begin(); it != ll_seg_content.names_map.end(); ++it)
            {
                std::cout << "NCNN Version Detected Label: "
                << it->first << " Name: " << it->second << std::endl;
            }
        }
    }

    delete yolop;
#endif
}
```

* Output:

<div align='center'>
  <img src='resources/yolop1.png' height="100px" width="160px">
  <img src='resources/yolop1.gif' height="100px" width="160px">
  <img src='resources/yolop2.png' height="100px" width="160px">
  <img src='resources/yolop2.gif' height="100px" width="160px">

</div>

## 6. Build and Run

On macOS you can build and run this project directly, without downloading any other dependencies. On other systems you first need to download the sources from [lite.ai.toolkit](https://github.com/DefTruth/lite.ai.toolkit) and build the *lite.ai.toolkit.v0.1.0* dynamic library.

```shell
git clone --depth=1 https://github.com/DefTruth/yolop.lite.ai.toolkit.git
cd yolop.lite.ai.toolkit
sh ./build.sh
```

* CMakeLists.txt settings

```cmake
cmake_minimum_required(VERSION 3.17)
project(yolop.lite.ai.toolkit)

set(CMAKE_CXX_STANDARD 11)

# set up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})

set(OpenCV_LIBS
        opencv_highgui
        opencv_core
        opencv_imgcodecs
        opencv_imgproc
        opencv_video
        opencv_videoio
        )
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)

add_executable(lite_yolop examples/test_lite_yolop.cpp)
target_link_libraries(lite_yolop
        lite.ai.toolkit
        onnxruntime
        MNN  # needed if lite.ai.toolkit was built with ENABLE_MNN=ON,  default OFF
        ncnn # needed if lite.ai.toolkit was built with ENABLE_NCNN=ON, default OFF
        TNN  # needed if lite.ai.toolkit was built with ENABLE_TNN=ON,  default OFF
        ${OpenCV_LIBS})  # link lite.ai.toolkit & other libs.
```

* Build and test output:

```shell
[ 50%] Building CXX object CMakeFiles/lite_yolop.dir/examples/test_lite_yolop.cpp.o
[100%] Linking CXX executable lite_yolop
[100%] Built target lite_yolop
Testing Start ...
LITEORT_DEBUG LogId: ..//hub/onnx/cv/yolop-640-640.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 640
input_node_dims: 640
=============== Output-Dims ==============
Output: 0 Name: det_out Dim: 0 :1
Output: 0 Name: det_out Dim: 1 :25200
Output: 0 Name: det_out Dim: 2 :6
Output: 1 Name: drive_area_seg Dim: 0 :1
Output: 1 Name: drive_area_seg Dim: 1 :2
Output: 1 Name: drive_area_seg Dim: 2 :640
Output: 1 Name: drive_area_seg Dim: 3 :640
Output: 2 Name: lane_line_seg Dim: 0 :1
Output: 2 Name: lane_line_seg Dim: 1 :2
Output: 2 Name: lane_line_seg Dim: 2 :640
Output: 2 Name: lane_line_seg Dim: 3 :640
========================================
detected num_anchors: 25200
generate_bboxes num: 62
Saved ../logs/1_det.jpg done!
Saved ../logs/1_da.jpg done!
Saved ../logs/1_ll.jpg done!
Saved ../logs/1_merge.jpg done!
Default Version Detected Label: 255 Name: drivable area
Default Version Detected Label: 255 Name: lane line
LITEORT_DEBUG LogId: ..//hub/onnx/cv/yolop-640-640.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 640
input_node_dims: 640
=============== Output-Dims ==============
Output: 0 Name: det_out Dim: 0 :1
Output: 0 Name: det_out Dim: 1 :25200
Output: 0 Name: det_out Dim: 2 :6
Output: 1 Name: drive_area_seg Dim: 0 :1
Output: 1 Name: drive_area_seg Dim: 1 :2
Output: 1 Name: drive_area_seg Dim: 2 :640
Output: 1 Name: drive_area_seg Dim: 3 :640
Output: 2 Name: lane_line_seg Dim: 0 :1
Output: 2 Name: lane_line_seg Dim: 1 :2
Output: 2 Name: lane_line_seg Dim: 2 :640
Output: 2 Name: lane_line_seg Dim: 3 :640
========================================
detected num_anchors: 25200
generate_bboxes num: 62
Saved ../logs/1_det_onnx.jpg done!
Saved ../logs/1_da_onnx.jpg done!
Saved ../logs/1_ll_onnx.jpg done!
Saved ../logs/1_merge_onnx.jpg done!
ONNXRuntime Version Detected Label: 255 Name: drivable area
ONNXRuntime Version Detected Label: 255 Name: lane line
LITEMNN_DEBUG LogId: ../hub/mnn/cv/yolop-640-640.mnn
=============== Input-Dims ==============
        **Tensor shape**: 1, 3, 640, 640,
Dimension Type: (CAFFE/PyTorch/ONNX)NCHW
=============== Output-Dims ==============
getSessionOutputAll done!
Output: det_out:        **Tensor shape**: 1, 25200, 6,
Output: drive_area_seg:         **Tensor shape**: 1, 2, 640, 640,
Output: lane_line_seg:  **Tensor shape**: 1, 2, 640, 640,
========================================
detected num_anchors: 25200
generate_bboxes num: 62
Saved ../logs/1_det_mnn.jpg done!
Saved ../logs/1_da_mnn.jpg done!
Saved ../logs/1_ll_mnn.jpg done!
Saved ../logs/1_merge_mnn.jpg done!
MNN Version Detected Label: 255 Name: drivable area
MNN Version Detected Label: 255 Name: lane line
LITENCNN_DEBUG LogId: ../hub/ncnn/cv/yolop-640-640.opt.param
=============== Input-Dims ==============
Input: images: shape: c=0 h=0 w=0
=============== Output-Dims ==============
Output: det_stride_8: shape: c=0 h=0 w=0
Output: det_stride_16: shape: c=0 h=0 w=0
Output: det_stride_32: shape: c=0 h=0 w=0
Output: drive_area_seg: shape: c=0 h=0 w=0
Output: lane_line_seg: shape: c=0 h=0 w=0
========================================
generate_bboxes num: 62
Saved ../logs/1_det_ncnn.jpg done!
Saved ../logs/1_da_ncnn.jpg done!
Saved ../logs/1_ll_ncnn.jpg done!
Saved ../logs/1_merge_ncnn.jpg done!
NCNN Version Detected Label: 255 Name: drivable area
NCNN Version Detected Label: 255 Name: lane line
LITETNN_DEBUG LogId: ../hub/tnn/cv/yolop-640-640.opt.tnnproto
=============== Input-Dims ==============
images: [1 3 640 640 ]
Input Data Format: NCHW
=============== Output-Dims ==============
det_out: [1 25200 6 ]
drive_area_seg: [1 2 640 640 ]
lane_line_seg: [1 2 640 640 ]
========================================
detected num_anchors: 25200
generate_bboxes num: 62
Saved ../logs/1_det_tnn.jpg done!
Saved ../logs/1_da_tnn.jpg done!
Saved ../logs/1_ll_tnn.jpg done!
Saved ../logs/1_merge_tnn.jpg done!
TNN Version Detected Label: 255 Name: drivable area
TNN Version Detected Label: 255 Name: lane line
Testing Successful !
```

<div align='center'>
  <img src='resources/1_da.jpg' height="100px" width="160px">
  <img src='resources/1_det.jpg' height="100px" width="160px">
  <img src='resources/1_ll.jpg' height="100px" width="160px">
  <img src='resources/1_merge.jpg' height="100px" width="160px">

</div>