{"id":13438619,"url":"https://github.com/PaddlePaddle/Paddle-Lite","last_synced_at":"2025-03-20T06:30:56.055Z","repository":{"id":37359513,"uuid":"104208128","full_name":"PaddlePaddle/Paddle-Lite","owner":"PaddlePaddle","description":"PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge (飞桨高性能深度学习端侧推理引擎）","archived":false,"fork":false,"pushed_at":"2024-12-04T02:46:01.000Z","size":329711,"stargazers_count":6995,"open_issues_count":196,"forks_count":1610,"subscribers_count":337,"default_branch":"develop","last_synced_at":"2024-12-23T16:44:32.056Z","etag":null,"topics":["arm","baidu","deep-learning","embedded","fpga","mali","mdl","mobile","mobile-deep-learning","neural-network"],"latest_commit_sha":null,"homepage":"https://www.paddlepaddle.org.cn/lite","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/PaddlePaddle.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-09-20T11:41:42.000Z","updated_at":"2024-12-23T07:07:29.000Z","dependencies_parsed_at":"2022-07-12T16:17:37.142Z","dependency_job_id":"7e8d1915-156f-410e-92e1-96cb8b609a81","html_url":"https://github.com/PaddlePaddle/Paddle-Lite","commit_stats":null,"previous_names":["paddlepaddle/paddle-mobile"],"tags_count":39,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddle-Lite","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddle-Lite/tags","releases_url":"https://repos.ecosyste.ms/api/v1/h
osts/GitHub/repositories/PaddlePaddle%2FPaddle-Lite/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddle-Lite/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/PaddlePaddle","download_url":"https://codeload.github.com/PaddlePaddle/Paddle-Lite/tar.gz/refs/heads/develop","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243652694,"owners_count":20325609,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["arm","baidu","deep-learning","embedded","fpga","mali","mdl","mobile","mobile-deep-learning","neural-network"],"created_at":"2024-07-31T03:01:06.895Z","updated_at":"2025-03-20T06:30:56.025Z","avatar_url":"https://github.com/PaddlePaddle.png","language":"C++","readme":"# Paddle Lite\n\n[English](README_en.md) | 简体中文\n\n  [![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](https://www.paddlepaddle.org.cn/lite)  [![Release](https://img.shields.io/github/release/PaddlePaddle/Paddle-Lite.svg)](https://github.com/PaddlePaddle/Paddle-Lite/releases)  [![License](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)\n\nPaddle Lite is a high-performance, lightweight, flexible, and easily extensible deep learning inference framework, targeting a wide range of hardware platforms including mobile, embedded, and edge devices.\n\nToday Paddle Lite is not only applied across Baidu's internal business, but also successfully powers production workloads for many external users and companies.\n\n## Quick Start\n\nWith Paddle Lite, deploying a model to a variety of end devices and running high-performance inference takes just a few simple steps:\n\n**1. 
Prepare the model**\n\nPaddle Lite directly supports the model format produced by the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) deep learning framework. PaddlePaddle models intended for inference are saved with the [save_inference_model](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/static/save_inference_model_cn.html#save-inference-model) API.\nIf your model was produced by a framework such as Caffe, TensorFlow, or PyTorch, you can convert it to the PaddlePaddle format with the [X2Paddle](https://github.com/PaddlePaddle/X2Paddle) tool.\n\n**2. Optimize the model**\n\nPaddle Lite ships with effective acceleration and optimization strategies, including quantization, subgraph fusion, and kernel selection. An optimized model is lighter, consumes fewer resources, and runs faster.\nThese optimizations are applied with the opt tool provided by Paddle Lite. The opt tool can also collect and print a model's operator information and report whether Paddle Lite supports the model on different hardware platforms. Once you have a model in PaddlePaddle format, you will generally want to optimize it with opt. For downloading and using the opt tool, see [Model optimization](https://www.paddlepaddle.org.cn/lite/develop/user_guides/model_optimize_tool.html).\n\n**3. Download or build**\n\nPaddle Lite provides official release libraries for the Android, iOS, x86, and macOS platforms. We recommend downloading the [prebuilt Paddle Lite libraries](https://www.paddlepaddle.org.cn/lite/develop/quick_start/release_lib.html) directly, or getting the latest [prebuilt libraries](https://github.com/PaddlePaddle/Paddle-Lite/releases) from the release notes.\n\nPaddle Lite can also be built from source in a variety of environments. To avoid a complex, tedious environment setup, we recommend building inside the unified [Docker build environment](https://www.paddlepaddle.org.cn/lite/develop/source_compile/docker_env.html). Alternatively, depending on the CPU architecture and operating system of your host machine and target device, you can find the corresponding setup and build guide in [Build from source](https://www.paddlepaddle.org.cn/lite/develop/source_compile/compile_env.html) and set up the build environment yourself.\n\n**4. 
Run inference demos**\n\nPaddle Lite provides C++, Java, and Python APIs, along with complete usage examples for each:\n\n- [C++ demo](https://www.paddlepaddle.org.cn/lite/develop/user_guides/cpp_demo.html)\n- [Java demo](https://www.paddlepaddle.org.cn/lite/develop/user_guides/java_demo.html)\n- [Python demo](https://www.paddlepaddle.org.cn/lite/develop/user_guides/python_demo.html)\n\nYou can follow the instructions in the examples to get up to speed quickly and integrate Paddle Lite into your own project.\n\nPaddle Lite also provides complete examples for each supported hardware platform:\n\n- [Android apps](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/android_app_demo.html) [[image classification]](https://paddlelite-demo.bj.bcebos.com/apps/android/mobilenet_classification_demo.apk)  [[object detection]](https://paddlelite-demo.bj.bcebos.com/apps/android/yolo_detection_demo.apk) [[mask detection]](https://paddlelite-demo.bj.bcebos.com/apps/android/mask_detection_demo.apk)  [[face keypoints]](https://paddlelite-demo.bj.bcebos.com/apps/android/face_keypoints_detection_demo.apk) [[human segmentation]](https://paddlelite-demo.bj.bcebos.com/apps/android/human_segmentation_demo.apk)\n- [iOS apps](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/ios_app_demo.html)\n- [Linux apps](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/linux_arm_demo.html)\n- [Arm](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/arm_cpu.html)\n- [x86](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/x86.html)\n- [OpenCL](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/opencl.html)\n- [Metal](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/metal.html)\n- [Huawei Kirin NPU](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/huawei_kirin_npu.html)\n- [Huawei Ascend NPU](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/huawei_ascend_npu.html)\n- [Kunlunxin XPU](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/kunlunxin_xpu.html)\n- [Kunlunxin XTCL](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/kunlunxin_xtcl.html)\n- [Qualcomm QNN](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/qualcomm_qnn.html)\n- [Cambricon 
MLU](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/cambricon_mlu.html)\n- [(Rockchip/Amlogic/NXP) VeriSilicon TIM-VX](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/verisilicon_timvx.html)\n- [Android NNAPI](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/android_nnapi.html)\n- [MediaTek APU](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/mediatek_apu.html)\n- [Imagination NNA](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/imagination_nna.html)\n- [Intel OpenVINO](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/intel_openvino.html)\n- [Eeasytech NPU](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/eeasytech_npu.html)\n\n## Key Features\n\n- Multi-platform support: covers Android, iOS, embedded Linux devices, Windows, macOS, and Linux hosts\n- Multi-language support: Java, Python, and C++ APIs\n- Lightweight and high-performance: optimized for machine learning on mobile devices, with compact models and binaries, efficient inference, and low memory usage\n\n## Continuous Integration\n\n| System | x86 Linux | ARM Linux | Android (GCC/Clang) | iOS |\n|:-:|:-:|:-:|:-:|:-:|\n| CPU(32bit) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) |\n| CPU(64bit) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) |\n| OpenCL | - | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - |\n| Metal | - | - | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) |\n| Huawei Kirin NPU | - | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - |\n| Huawei Ascend NPU | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build 
Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - | - |\n| Kunlunxin XPU | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - | - |\n| Kunlunxin XTCL | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - | - |\n| Qualcomm QNN | - | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - |\n| Cambricon MLU | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - | - | - |\n| (Rockchip/Amlogic/NXP) VeriSilicon TIM-VX | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - |\n| Android NNAPI | - | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - |\n| MediaTek APU | - | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - |\n| Imagination NNA | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - | - |\n| Intel OpenVINO | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - | - | - |\n| Eeasytech NPU | - | ![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg) | - | - |\n\n## Architecture Design\n\nThe architecture of Paddle Lite was designed with multi-hardware, multi-platform support in mind. It strengthens the ability to mix multiple hardware backends when executing a single model, applies performance optimizations at several levels, and keeps on-device deployments lightweight.\n\n\u003cp align=\"center\"\u003e\u003cimg width=\"500\" src=\"https://paddlelite-demo.bj.bcebos.com/devices/generic/paddle_lite_with_nnadapter.jpg\"/\u003e\u003c/p\u003e\n\nThe Analysis Phase contains the MIR (Machine IR) modules, which apply a variety of optimizations, including operator fusion and computation pruning, to the model's computation graph for a given list of hardware targets. The Execution Phase involves only kernel execution and can be deployed on its own, enabling an extremely lightweight deployment.\n\n## Learn More About Paddle Lite\n\nIf you would like to learn more about Paddle Lite, the following resources cover further study and use:\n### Documentation and examples\n- Full documentation: [Paddle Lite documentation](https://www.paddlepaddle.org.cn/lite)\n- API documentation:\n\t- [C++ API 
documentation](https://www.paddlepaddle.org.cn/lite/develop/api_reference/cxx_api_doc.html)\n\t- [Java API documentation](https://www.paddlepaddle.org.cn/lite/develop/api_reference/java_api_doc.html)\n\t- [Python API documentation](https://www.paddlepaddle.org.cn/lite/develop/api_reference/python_api_doc.html)\n\t- [CV image processing API documentation](https://www.paddlepaddle.org.cn/lite/develop/api_reference/cv.html)\n- Paddle Lite sample projects: [Paddle-Lite-Demo](https://github.com/PaddlePaddle/Paddle-Lite-Demo)\n### Key techniques\n- Model quantization:\n\t- [Post-training static quantization](https://www.paddlepaddle.org.cn/lite/develop/user_guides/quant/quant_post_static.html)\n\t- [Post-training dynamic quantization](https://www.paddlepaddle.org.cn/lite/develop/user_guides/quant/quant_post_dynamic.html)\n- Debugging and analysis: [debugging and profiling tools](https://www.paddlepaddle.org.cn/lite/develop/user_guides/profiler.html)\n- On-device model training: [learn more](https://www.paddlepaddle.org.cn/lite/develop/demo_guides/cpp_train_demo.html)\n- PaddlePaddle pretrained model zoo: browse and download PaddlePaddle's pretrained models on [PaddleHub](https://www.paddlepaddle.org.cn/hublist?filter=hot\u0026value=1)\n- NNAdapter, PaddlePaddle's unified AI hardware adaptation framework for inference: [learn more](https://www.paddlepaddle.org.cn/lite/develop/develop_guides/nnadapter.html)\n### FAQ\n- For frequently asked questions, see the [FAQ](https://www.paddlepaddle.org.cn/lite/develop/quick_start/faq.html), search the Issues, or reach us through the contact channels at the bottom of this page\n### Contributing\n- If you would like to take part in developing Paddle Lite and contribute code, see the [developer guide](https://www.paddlepaddle.org.cn/lite/develop/develop_guides/for-developer.html)\n\n## Communication and Feedback\n* AIStudio course series on on-device deployment: https://aistudio.baidu.com/aistudio/course/introduce/22690\n* Feel free to submit questions, reports, and suggestions via [GitHub Issues](https://github.com/PaddlePaddle/Paddle-Lite/issues)\n* WeChat technical group: add WeChat ID baidupaddle or scan the QR code below, then reply “端侧” to the assistant to be invited automatically; QQ technical groups: group 1, 696965088 (full); group 2, 959308808\n\n\u003cp align=\"center\"\u003e\u003cimg width=\"200\" height=\"200\"  src=\"https://user-images.githubusercontent.com/63448337/162189409-6c0ef74f-82fd-48c9-9fa7-fc3473428a63.png\"/\u003e\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u003cimg width=\"200\" height=\"200\" margin=\"500\" 
src=\"https://github.com/PaddlePaddle/Paddle-Lite/blob/develop/docs/images/qq-group-chat.png\"/\u003e\u003c/p\u003e\n\u003cp align=\"center\"\u003e\u0026#8194;\u0026#8194;\u0026#8194;WeChat official account\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;\u0026#8194;official QQ technical group\u003c/p\u003e\n\n* If you are interested in our work, you are also welcome to [join us](https://github.com/PaddlePaddle/Paddle-Lite/issues/6091)!\n\n## Copyright and License\nPaddle Lite is provided under the [Apache-2.0 license](LICENSE).\n","funding_links":[],"categories":["C++","Deep Learning Framework","其他_机器学习与深度学习","C++ (70)","Networks","Libraries","\u003ca id=\"1d9dec1320a5d774dc8e0e7604edfcd3\"\u003e\u003c/a\u003e工具-新添加的"],"sub_categories":["High-Level DL APIs","Inference Framework","\u003ca id=\"8f1b9c5c2737493524809684b934d49a\"\u003e\u003c/a\u003e文章\u0026\u0026视频"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPaddlePaddle%2FPaddle-Lite","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FPaddlePaddle%2FPaddle-Lite","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPaddlePaddle%2FPaddle-Lite/lists"}