{"id":13436625,"url":"https://github.com/PaddlePaddle/PaddleNLP","last_synced_at":"2025-03-18T21:30:46.065Z","repository":{"id":36956365,"uuid":"336274588","full_name":"PaddlePaddle/PaddleNLP","owner":"PaddlePaddle","description":"👑 Easy-to-use and powerful NLP and LLM library with 🤗 Awesome model zoo, supporting wide-range of NLP tasks from research to industrial applications, including 🗂Text Classification,  🔍 Neural Search, ❓ Question Answering, ℹ️ Information Extraction, 📄 Document Intelligence, 💌 Sentiment Analysis etc. ","archived":false,"fork":false,"pushed_at":"2025-03-17T12:26:26.000Z","size":108630,"stargazers_count":12435,"open_issues_count":657,"forks_count":3006,"subscribers_count":103,"default_branch":"develop","last_synced_at":"2025-03-17T22:11:27.365Z","etag":null,"topics":["bert","compression","distributed-training","document-intelligence","embedding","ernie","information-extraction","llama","llm","neural-search","nlp","paddlenlp","pretrained-models","question-answering","search-engine","semantic-analysis","sentiment-analysis","transformers","uie"],"latest_commit_sha":null,"homepage":"https://paddlenlp.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/PaddlePaddle.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":".github/CONTRIBUTING_en.md","funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-02-05T13:07:42.000Z","updated_at":"2025-03-17T13:13:03.000Z","dependencies_parsed_at":"2024-01-17T10:42:27.130Z","dependency_job_id":"e9636967-6795-4ebd-997a-bd21b615888a","html_url":"https://github.com/PaddlePaddle
/PaddleNLP","commit_stats":{"total_commits":5025,"total_committers":320,"mean_commits":15.703125,"dds":0.942089552238806,"last_synced_commit":"cfcd079ca16693757a00af250e3e0d76616d7643"},"previous_names":[],"tags_count":53,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleNLP","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleNLP/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleNLP/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPaddleNLP/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/PaddlePaddle","download_url":"https://codeload.github.com/PaddlePaddle/PaddleNLP/tar.gz/refs/heads/develop","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244124195,"owners_count":20401685,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bert","compression","distributed-training","document-intelligence","embedding","ernie","information-extraction","llama","llm","neural-search","nlp","paddlenlp","pretrained-models","question-answering","search-engine","semantic-analysis","sentiment-analysis","transformers","uie"],"created_at":"2024-07-31T03:00:50.789Z","updated_at":"2025-03-18T21:30:46.059Z","avatar_url":"https://github.com/PaddlePaddle.png","language":"Python","readme":"**简体中文**🀄 | [English🌎](./README_en.md)\n\n\u003cp align=\"center\"\u003e\n  \u003cimg 
src=\"https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png\" align=\"middle\"  width=\"500\" /\u003e\n\u003c/p\u003e\n\n------------------------------------------------------------------------------------------\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://paddlenlp.readthedocs.io/en/latest/?badge=latest\"\u003e\u003cimg src=\"https://readthedocs.org/projects/paddlenlp/badge/?version=latest\"\u003e\n    \u003ca href=\"https://github.com/PaddlePaddle/PaddleNLP/releases\"\u003e\u003cimg src=\"https://img.shields.io/github/v/release/PaddlePaddle/PaddleNLP?color=ffa\"\u003e\u003c/a\u003e\n    \u003ca href=\"\"\u003e\u003cimg src=\"https://img.shields.io/badge/python-3.7+-aff.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"\"\u003e\u003cimg src=\"https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-pink.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/PaddlePaddle/PaddleNLP/graphs/contributors\"\u003e\u003cimg src=\"https://img.shields.io/github/contributors/PaddlePaddle/PaddleNLP?color=9ea\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/PaddlePaddle/PaddleNLP/commits\"\u003e\u003cimg src=\"https://img.shields.io/github/commit-activity/m/PaddlePaddle/PaddleNLP?color=3af\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://pypi.org/project/paddlenlp/\"\u003e\u003cimg src=\"https://img.shields.io/pypi/dm/paddlenlp?color=9cf\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/PaddlePaddle/PaddleNLP/issues\"\u003e\u003cimg src=\"https://img.shields.io/github/issues/PaddlePaddle/PaddleNLP?color=9cc\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/PaddlePaddle/PaddleNLP/stargazers\"\u003e\u003cimg src=\"https://img.shields.io/github/stars/PaddlePaddle/PaddleNLP?color=ccf\"\u003e\u003c/a\u003e\n    \u003ca href=\"./LICENSE\"\u003e\u003cimg src=\"https://img.shields.io/badge/license-Apache%202-dfd.svg\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\u003ch4 
align=\"center\"\u003e\n  \u003ca href=#特性\u003e 特性 \u003c/a\u003e |\n  \u003ca href=#模型支持\u003e 模型支持 \u003c/a\u003e |\n  \u003ca href=#安装\u003e 安装 \u003c/a\u003e |\n  \u003ca href=#快速开始\u003e 快速开始 \u003c/a\u003e |\n  \u003ca href=#社区交流\u003e 社区交流 \u003c/a\u003e\n\u003c/h4\u003e\n\n**PaddleNLP**是一款基于飞桨深度学习框架的大语言模型(LLM)开发套件，支持在多种硬件上进行高效的大模型训练、无损压缩以及高性能推理。PaddleNLP 具备**简单易用**和**性能极致**的特点，致力于助力开发者实现高效的大模型产业级应用。\n\n\u003ca href=\"https://trendshift.io/repositories/2246\" target=\"_blank\"\u003e\u003cimg src=\"https://trendshift.io/api/badge/repositories/2246\" alt=\"PaddlePaddle%2FPaddleNLP | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"/\u003e\u003c/a\u003e\n\n## News 📢\n\n* **2025.03.17 《DeepSeek-R1满血版单机部署实测》** 🔥🔥🔥 飞桨框架3.0大模型推理部署全面升级，支持多款主流大模型，DeepSeek-R1满血版实现单机部署，吞吐提升一倍！欢迎广大用户开箱体验～现已开启有奖活动：完成DeepSeek-R1-MTP 单机部署任务、提交高质量测评blog，即可实时赢取奖金！💰💰💰\n报名[地址](https://www.wjx.top/vm/OlzzmbG.aspx#)， 活动详情：https://github.com/PaddlePaddle/PaddleNLP/issues/10166 ， 参考文档：https://github.com/PaddlePaddle/PaddleNLP/issues/10157 。\n\n* **2025.03.12 [PaddleNLP v3.0 Beta4](https://github.com/PaddlePaddle/PaddleNLP/releases/tag/v3.0.0-beta4)**：全面支持 DeepSeek V3/R1/R1-Distill, 及QwQ-32B等热门思考模型。**DeepSeek V3/R1完整版支持FP8、INT8、4-bit量化推理，MTP投机解码**。单机FP8推理输出超**1000 tokens/s**; 4-bit推理输出超**2100 tokens/s**! 
发布新版推理部署镜像，热门模型[一键部署](https://paddlenlp.readthedocs.io/zh/latest/llm/server/docs/general_model_inference.html)。推理部署[使用文档](https://paddlenlp.readthedocs.io/zh/latest/llm/docs/predict/index.html)全面更新，体验全面提升！自研下一代通用信息抽取模型 PP-UIE [全新发布](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/application/information_extraction)，支持8K长度信息抽取。新增大模型 Embedding 训练，支持INF-CL超大batch size训练。新增[MergeKit](https://paddlenlp.readthedocs.io/zh/latest/llm/docs/mergekit.html)模型融合工具，缓解对齐代价。低资源训练全面优化，16G小显存可以流畅训练。\n\n* **2025.03.06 PaddleNLP 现已支持 Qwen/QwQ-32B 模型**: 其模型参数仅有 32B，但其数学推理、编程能力和通用能力可与具备 671B 参数（其中 37B 被激活）的 DeepSeek-R1 媲美。借助 PaddleNLP 3.0套件，现可实现多种并行策略[微调训练](./llm/README.md)、[高性能推理、低比特量化](./llm/docs/predict/qwen.md)和[服务化部署](./llm/server/README.md)。\n\n* **2025.02.10 PaddleNLP 现已支持 DeepSeek-R1系列模型，[在线使用](https://aistudio.baidu.com/projectdetail/8775758)**：依托全新的 PaddleNLP 3.0套件，DeepSeek-R1系列模型现已全面支持。凭借数据并行、数据分组切分并行、模型并行、流水线并行以及专家并行等一系列先进的分布式训练能力，结合 Paddle 框架独有的列稀疏注意力掩码表示技术——FlashMask 方法，DeepSeek-R1系列模型在训练过程中显著降低了显存消耗，同时取得了卓越的训练性能提升。\n\n\u003cdetails\u003e\u003csummary\u003e \u003cb\u003e点击展开\u003c/b\u003e \u003c/summary\u003e\u003cdiv\u003e\n\n* **2025.02.20 🔥🔥《PP-UIE 信息抽取智能引擎全新升级》** 强化零样本学习能力，支持极少甚至零标注数据实现高效冷启动与迁移学习，显著降低数据标注成本；具备处理长文本能力，支持 8192 个 Token 长度文档信息抽取，实现跨段落识别关键信息，形成完整理解；提供完整可定制化的训练和推理全流程，训练效率相较于 LLama-Factory 实现了1.8倍的提升。\n2月26日（周三）19：00为您深度解析全新 PP-UIE 技术方案及在部署方面的功能、优势与技巧。报名链接：https://www.wjx.top/vm/mBKC6pb.aspx?udsid=606418\n\n* **2024.12.16 [PaddleNLP v3.0 Beta3](https://github.com/PaddlePaddle/PaddleNLP/releases/tag/v3.0.0-beta3)**：大模型功能全新升级，新增了 Llama-3.2、DeepSeekV2模型，升级了 TokenizerFast，快速分词，重构了 SFTTrainer，一键开启 SFT 训练。此外，PaddleNLP 还支持了优化器状态的卸载和重载功能，实现了精细化的重新计算，训练性能提升7%。在 Unified Checkpoint 方面，进一步优化了异步保存逻辑，新增 Checkpoint 压缩功能，可节省78.5%存储空间。\n最后，在大模型推理方面，升级 Append Attention，支持了 FP8量化，支持投机解码。\n\n* **2024.12.13 📚《飞桨大模型套件 Unified Checkpoint 技术》**，加速模型存储95%，节省空间78%。支持全分布式策略调整自适应转换，提升模型训练的灵活性与可扩展性。训练-压缩-推理统一存储协议，无需手动转换提升全流程体验。Checkpoint 
无损压缩结合异步保存，实现秒级存储并降低模型存储成本。适用于智能制造、指挥交通、医疗健康、金融服务等产业实际场景。12月24日（周二）19：00直播为您详细解读该技术如何优化大模型训练流程。报名链接：https://www.wjx.top/vm/huZkHn9.aspx?udsid=787976\n\n* **2024.11.28 📚《FlashRAG-Paddle | 基于 PaddleNLP 的高效开发与评测 RAG 框架》**，为文本更快更好构建准确嵌入表示、加速推理生成速度。PaddleNLP 支持超大 Batch 嵌入表示学习与多硬件高性能推理，涵盖 INT8/INT4量化技术及多种高效注意力机制优化与 TensorCore 深度优化。内置全环节算子融合技术，使得 FlashRAG 推理性能相比 transformers 动态图提升70%以上，结合检索增强知识输出结果更加准确，带来敏捷高效的使用体验。直播时间：12月3日（周二）19：00。报名链接：https://www.wjx.top/vm/eaBa1vA.aspx?udsid=682361\n\n* **2024.08.08 📚《飞桨产业级大语言模型开发利器 PaddleNLP 3.0 重磅发布》**，训压推全流程贯通，主流模型全覆盖。大模型自动并行，千亿模型训推全流程开箱即用。提供产业级高性能精调与对齐解决方案，压缩推理领先，多硬件适配。覆盖产业级智能助手、内容创作、知识问答、关键信息抽取等应用场景。直播时间：8月22日（周四）19：00。报名链接：https://www.wjx.top/vm/Y2f7FFY.aspx?udsid=143844\n\n* **2024.06.27 [PaddleNLP v3.0 Beta](https://github.com/PaddlePaddle/PaddleNLP/releases/tag/v3.0.0-beta0)**：拥抱大模型，体验全升级。统一大模型套件，实现国产计算芯片全流程接入；全面支持飞桨4D 并行配置、高效精调策略、高效对齐算法、高性能推理等大模型产业级应用流程；自研极致收敛的 RsLoRA+算法、自动扩缩容存储机制 Unified Checkpoint 和通用化支持的 FastFFN、FusedQKV 助力大模型训推；主流模型持续支持更新，提供高效解决方案。\n\n* **2024.04.24 [PaddleNLP v2.8](https://github.com/PaddlePaddle/PaddleNLP/releases/tag/v2.8.0)**：自研极致收敛的 RsLoRA+算法，大幅提升 PEFT 训练收敛速度以及训练效果；引入高性能生成加速到 RLHF PPO 算法，打破 PPO 训练中生成速度瓶颈，PPO 训练性能大幅领先。通用化支持 FastFFN、FusedQKV 等多个大模型训练性能优化方式，大模型训练更快、更稳定。\n\u003c/div\u003e\u003c/details\u003e\n\n## 特性\n\n### \u003ca href=#多硬件训推一体\u003e 🔧 多硬件训推一体 \u003c/a\u003e\n\n支持英伟达 GPU、昆仑 XPU、昇腾 NPU、燧原 GCU 和海光 DCU 等多个硬件的大模型和自然语言理解模型训练和推理，套件接口支持硬件快速切换，大幅降低硬件切换研发成本。\n当前支持的自然语言理解模型：[多硬件自然语言理解模型列表](./docs/model_zoo/model_list_multy_device.md)\n\n### \u003ca href=#高效易用的预训练\u003e 🚀 高效易用的预训练 \u003c/a\u003e\n\n支持纯数据并行策略、分组参数切片的数据并行策略、张量模型并行策略和流水线模型并行策略的4D 高性能训练，Trainer 支持分布式策略配置化，降低复杂分布式组合带来的使用成本；\n[Unified Checkpoint 大模型存储工具](./llm/docs/unified_checkpoint.md)可以使得训练断点支持机器资源动态扩缩容恢复。此外，异步保存，模型存储可加速95%，Checkpoint 压缩，可节省78.5%存储空间。\n\n### \u003ca href=#高效精调\u003e 🤗 高效精调 \u003c/a\u003e\n\n精调算法深度结合零填充数据流和 [FlashMask](./llm/docs/flashmask.md) 高性能算子，降低训练无效数据填充和计算，大幅提升精调训练吞吐。\n\n### \u003ca href=#无损压缩和高性能推理\u003e 🎛️ 
无损压缩和高性能推理 \u003c/a\u003e\n\n大模型套件高性能推理模块内置动态插入和全环节算子融合策略，极大加快并行推理速度。底层实现细节封装化，实现开箱即用的高性能并行推理能力。\n\n## 文档\n更多详细文档, 请访问 [PaddleNLP Documentation](https://paddlenlp.readthedocs.io/).\n\n------------------------------------------------------------------------------------------\n\n## 模型支持\n\n* 模型参数已支持 LLaMA 系列、Baichuan 系列、Bloom 系列、ChatGLM 系列、Gemma 系列、Mistral 系列、OPT 系列和 Qwen 系列，详细列表👉【LLM】模型参数支持列表如下：\n\n|                                                模型系列                                                 | 模型名称                                                                                                                                                                                                                                                                                                                                                                                      |\n|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [PP-UIE](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/application/information_extraction) | paddlenlp/PP-UIE-0.5B, paddlenlp/PP-UIE-1.5B, paddlenlp/PP-UIE-7B, paddlenlp/PP-UIE-14B                                                                                                                                                                                                                                                                                                       |\n|            [LLaMA](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/llama)             | facebook/llama-7b, 
facebook/llama-13b, facebook/llama-30b, facebook/llama-65b                                                                                                                                                                                                                                                                                                                 |\n|            [Llama2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/llama)            | meta-llama/Llama-2-7b, meta-llama/Llama-2-7b-chat, meta-llama/Llama-2-13b, meta-llama/Llama-2-13b-chat, meta-llama/Llama-2-70b, meta-llama/Llama-2-70b-chat                                                                                                                                                                                                                                   |\n|            [Llama3](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/llama)            | meta-llama/Meta-Llama-3-8B, meta-llama/Meta-Llama-3-8B-Instruct, meta-llama/Meta-Llama-3-70B, meta-llama/Meta-Llama-3-70B-Instruct                                                                                                                                                                                                                                                            |\n|           [Llama3.1](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/llama)           | meta-llama/Meta-Llama-3.1-8B, meta-llama/Meta-Llama-3.1-8B-Instruct, meta-llama/Meta-Llama-3.1-70B, meta-llama/Meta-Llama-3.1-70B-Instruct, meta-llama/Meta-Llama-3.1-405B, meta-llama/Meta-Llama-3.1-405B-Instruct, meta-llama/Llama-Guard-3-8B                                                                                                                                              |\n|           [Llama3.2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/llama)           | meta-llama/Llama-3.2-1B, 
meta-llama/Llama-3.2-1B-Instruct, meta-llama/Llama-3.2-3B, meta-llama/Llama-3.2-3B-Instruct, meta-llama/Llama-Guard-3-1B                                                                                                                                                                                                                                             |\n|           [Llama3.3](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/llama)           | meta-llama/Llama-3.3-70B-Instruct                                                                                                                                                                                                                                                                                                                                                             |\n|         [Baichuan](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/baichuan)          | baichuan-inc/Baichuan-7B, baichuan-inc/Baichuan-13B-Base, baichuan-inc/Baichuan-13B-Chat                                                                                                                                                                                                                                                                                                      |\n|         [Baichuan2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/baichuan)         | baichuan-inc/Baichuan2-7B-Base, baichuan-inc/Baichuan2-7B-Chat, baichuan-inc/Baichuan2-13B-Base, baichuan-inc/Baichuan2-13B-Chat                                                                                                                                                                                                                                                              |\n|            [Bloom](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/bloom)             | bigscience/bloom-560m, bigscience/bloom-560m-bf16, 
bigscience/bloom-1b1, bigscience/bloom-3b, bigscience/bloom-7b1, bigscience/bloomz-560m, bigscience/bloomz-1b1, bigscience/bloomz-3b, bigscience/bloomz-7b1-mt, bigscience/bloomz-7b1-p3, bigscience/bloomz-7b1, bellegroup/belle-7b-2m                                                                                                    |\n|          [ChatGLM](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/chatglm/)          | THUDM/chatglm-6b, THUDM/chatglm-6b-v1.1                                                                                                                                                                                                                                                                                                                                                       |\n|         [ChatGLM2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/chatglm2)          | THUDM/chatglm2-6b                                                                                                                                                                                                                                                                                                                                                                             |\n|         [ChatGLM3](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/chatglm2)          | THUDM/chatglm3-6b                                                                                                                                                                                                                                                                                                                                                                             |\n|       [DeepSeekV2](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/llm/config/deepseek-v2)       | deepseek-ai/DeepSeek-V2, deepseek-ai/DeepSeek-V2-Chat, 
deepseek-ai/DeepSeek-V2-Lite, deepseek-ai/DeepSeek-V2-Lite-Chat, deepseek-ai/DeepSeek-Coder-V2-Base, deepseek-ai/DeepSeek-Coder-V2-Instruct, deepseek-ai/DeepSeek-Coder-V2-Lite-Base, deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct                                                                                                      |\n|       [DeepSeekV3](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/llm/config/deepseek-v2)       | deepseek-ai/DeepSeek-V3, deepseek-ai/DeepSeek-V3-Base                                                                                                                                                                                                                                                                                                                                         |\n|      [DeepSeek-R1](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/llm/config/deepseek-v2)       | deepseek-ai/DeepSeek-R1, deepseek-ai/DeepSeek-R1-Zero, deepseek-ai/DeepSeek-R1-Distill-Llama-70B, deepseek-ai/DeepSeek-R1-Distill-Llama-8B, deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B, deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, deepseek-ai/DeepSeek-R1-Distill-Qwen-7B                                                                            |\n|            [Gemma](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/gemma)             | google/gemma-7b, google/gemma-7b-it, google/gemma-2b, google/gemma-2b-it                                                                                                                                                                                                                                                                                                                      |\n|          [Mistral](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/mistral)           | mistralai/Mistral-7B-Instruct-v0.3, mistralai/Mistral-7B-v0.1                      
                                                                                                                                                                                                                                                                                                           |\n|          [Mixtral](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/mixtral)           | mistralai/Mixtral-8x7B-Instruct-v0.1                                                                                                                                                                                                                                                                                                                                                          |\n|              [OPT](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/opt)               | facebook/opt-125m, facebook/opt-350m, facebook/opt-1.3b, facebook/opt-2.7b, facebook/opt-6.7b, facebook/opt-13b, facebook/opt-30b, facebook/opt-66b, facebook/opt-iml-1.3b, opt-iml-max-1.3b                                                                                                                                                                                                  |\n|             [Qwen](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)             | qwen/qwen-7b, qwen/qwen-7b-chat, qwen/qwen-14b, qwen/qwen-14b-chat, qwen/qwen-72b, qwen/qwen-72b-chat,                                                                                                                                                                                                                                                                                        |\n|           [Qwen1.5](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)            | Qwen/Qwen1.5-0.5B, Qwen/Qwen1.5-0.5B-Chat, Qwen/Qwen1.5-1.8B, Qwen/Qwen1.5-1.8B-Chat, Qwen/Qwen1.5-4B, 
Qwen/Qwen1.5-4B-Chat, Qwen/Qwen1.5-7B, Qwen/Qwen1.5-7B-Chat, Qwen/Qwen1.5-14B, Qwen/Qwen1.5-14B-Chat, Qwen/Qwen1.5-32B, Qwen/Qwen1.5-32B-Chat, Qwen/Qwen1.5-72B, Qwen/Qwen1.5-72B-Chat, Qwen/Qwen1.5-110B, Qwen/Qwen1.5-110B-Chat, Qwen/Qwen1.5-MoE-A2.7B, Qwen/Qwen1.5-MoE-A2.7B-Chat |\n|            [Qwen2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)             | Qwen/Qwen2-0.5B, Qwen/Qwen2-0.5B-Instruct, Qwen/Qwen2-1.5B, Qwen/Qwen2-1.5B-Instruct, Qwen/Qwen2-7B, Qwen/Qwen2-7B-Instruct, Qwen/Qwen2-72B, Qwen/Qwen2-72B-Instruct, Qwen/Qwen2-57B-A14B, Qwen/Qwen2-57B-A14B-Instruct                                                                                                                                                                       |\n|          [Qwen2-Math](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)          | Qwen/Qwen2-Math-1.5B, Qwen/Qwen2-Math-1.5B-Instruct, Qwen/Qwen2-Math-7B, Qwen/Qwen2-Math-7B-Instruct, Qwen/Qwen2-Math-72B, Qwen/Qwen2-Math-72B-Instruct, Qwen/Qwen2-Math-RM-72B                                                                                                                                                                                                               |\n|           [Qwen2.5](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)            | Qwen/Qwen2.5-0.5B, Qwen/Qwen2.5-0.5B-Instruct, Qwen/Qwen2.5-1.5B, Qwen/Qwen2.5-1.5B-Instruct, Qwen/Qwen2.5-3B, Qwen/Qwen2.5-3B-Instruct, Qwen/Qwen2.5-7B, Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-7B-Instruct-1M, Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct, Qwen/Qwen2.5-14B-Instruct-1M, Qwen/Qwen2.5-32B, Qwen/Qwen2.5-32B-Instruct, Qwen/Qwen2.5-72B, Qwen/Qwen2.5-72B-Instruct          |\n|         [Qwen2.5-Math](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)         | Qwen/Qwen2.5-Math-1.5B, Qwen/Qwen2.5-Math-1.5B-Instruct, Qwen/Qwen2.5-Math-7B, Qwen/Qwen2.5-Math-7B-Instruct, 
Qwen/Qwen2.5-Math-72B, Qwen/Qwen2.5-Math-72B-Instruct, Qwen/Qwen2.5-Math-RM-72B                                                                                                                                                                                                 |\n|        [Qwen2.5-Coder](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)         | Qwen/Qwen2.5-Coder-1.5B, Qwen/Qwen2.5-Coder-1.5B-Instruct, Qwen/Qwen2.5-Coder-7B, Qwen/Qwen2.5-Coder-7B-Instruct                                                                                                                                                                                                                                                                              |\n|             [QwQ](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/qwen/)              | Qwen/QwQ-32B, Qwen/QwQ-32B-Preview                                                                                                                                                                                                                                                                                                                                                            |\n|            [Yuan2](https://github.com/PaddlePaddle/PaddleNLP/tree/develop/llm/config/yuan/)             | IEITYuan/Yuan2-2B, IEITYuan/Yuan2-51B, IEITYuan/Yuan2-102B                                                                                                                                                                                                                                                                                                                                    |\n\n* 4D 并行和算子优化已支持 LLaMA 系列、Baichuan 系列、Bloom 系列、ChatGLM 系列、Gemma 系列、Mistral 系列、OPT 系列和 Qwen 系列，【LLM】模型4D 并行和算子支持列表如下：\n\n| 模型名称/并行能力支持 | 数据并行 | 张量模型并行 |          | 参数分片并行 |        |        | 流水线并行 
|\n|:---------------------:|:--------:|:------------:|:--------:|:------------:|:------:|:------:|:----------:|\n|                       |          |   基础能力   | 序列并行 |    stage1    | stage2 | stage3 |            |\n|         Llama         |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|         Qwen          |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|        Qwen1.5        |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|         Qwen2         |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|     Mixtral(moe)      |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     🚧     |\n|        Mistral        |    ✅     |      ✅       |    🚧    |      ✅       |   ✅    |   ✅    |     🚧     |\n|       Baichuan        |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|       Baichuan2       |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|        ChatGLM        |    ✅     |      ✅       |    🚧    |      ✅       |   ✅    |   ✅    |     🚧     |\n|       ChatGLM2        |    ✅     |      🚧      |    🚧    |      ✅       |   ✅    |   ✅    |     🚧     |\n|       ChatGLM3        |    ✅     |      🚧      |    🚧    |      ✅       |   ✅    |   ✅    |     🚧     |\n|         Bloom         |    ✅     |      ✅       |    🚧    |      ✅       |   ✅    |   ✅    |     🚧     |\n|      GPT-2/GPT-3      |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|          OPT          |    ✅     |      ✅       |    🚧    |      ✅       |   ✅    |   ✅    |     🚧     |\n|         Gemma         |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     ✅      |\n|         Yuan2         |    ✅     |      ✅       |    ✅     |      ✅       |   ✅    |   ✅    |     🚧     |\n\n* 大模型预训练、精调（包含 SFT、PEFT 技术）、对齐、量化已支持 
LLaMA, Baichuan, Bloom, ChatGLM, Mistral, OPT, and Qwen model families are covered. The LLM support matrix for pretraining, fine-tuning, alignment, and quantization is as follows:

| Model                                      | Pretrain | SFT | LoRA | FlashMask | Prefix Tuning | DPO/SimPO/ORPO/KTO | RLHF | Mergekit | Quantization |
|--------------------------------------------|:--------:|:---:|:----:|:---------:|:-------------:|:------------------:|:----:|:--------:|:------------:|
| [Llama](./llm/config/llama)                |    ✅     |  ✅  |  ✅   |     ✅     |       ✅       |         ✅          |  ✅   |    ✅     |      ✅       |
| [Qwen](./llm/config/qwen)                  |    ✅     |  ✅  |  ✅   |     ✅     |       ✅       |         ✅          |  🚧  |    ✅     |      🚧      |
| [Mixtral](./llm/config/mixtral)            |    ✅     |  ✅  |  ✅   |    🚧     |      🚧       |         ✅          |  🚧  |    ✅     |      🚧      |
| [Mistral](./llm/config/mistral)            |    ✅     |  ✅  |  ✅   |    🚧     |       ✅       |         ✅          |  🚧  |    ✅     |      🚧      |
| [Baichuan/Baichuan2](./llm/config/llama)   |    ✅     |  ✅  |  ✅   |     ✅     |       ✅       |         ✅          |  🚧  |    ✅     |      ✅       |
| [ChatGLM-6B](./llm/config/chatglm)         |    ✅     |  ✅  |  ✅   |    🚧     |       ✅       |         🚧         |  🚧  |    ✅     |      ✅       |
| [ChatGLM2/ChatGLM3](./llm/config/chatglm2) |    ✅     |  ✅  |  ✅   |    🚧     |       ✅       |         ✅          |  🚧  |    ✅     |      ✅       |
| [Bloom](./llm/config/bloom)                |    ✅     |  ✅  |  ✅   |    🚧     |       ✅       |         🚧         |  🚧  |    ✅     |      ✅       |
| [GPT-3](./llm/config/gpt-3)                |    ✅     |  ✅  |  🚧  |    🚧     |      🚧       |         🚧         |  🚧  |    ✅     |      🚧      |
| [OPT](./llm/config/opt)                    |    ✅     |  ✅  |  ✅   |    🚧     |      🚧       |         🚧         |  🚧  |    ✅     |      🚧      |
| [Gemma](./llm/config/gemma)                |    ✅     |  ✅  |  ✅   |    🚧     |      🚧       |         ✅          |  🚧  |    ✅     |      🚧      |
| [Yuan](./llm/config/yuan)                  |    ✅     |  ✅  |  ✅   |    🚧     |      🚧       |         ✅          |  🚧  |    ✅     |      🚧      |

* [LLM inference](./llm/docs/predict/inference.md) already supports the LLaMA, Qwen, DeepSeek, Mistral, ChatGLM, Bloom, and Baichuan families. It supports weight-only INT8 and INT4 inference, as well as INT8 and FP8 quantized inference over WAC (weights, activations, and KV cache). The LLM inference support matrix is as follows:

| Model / quantization type                  | FP16/BF16 | WINT8 | WINT4 | INT8-A8W8 | FP8-A8W8 | INT8-A8W8C8 |
|:------------------------------------------:|:---------:|:-----:|:-----:|:---------:|:--------:|:-----------:|
|    [LLaMA](./llm/docs/predict/llama.md)    |     ✅     |   ✅   |   ✅   |     ✅     |    ✅     |      ✅      |
|     [Qwen](./llm/docs/predict/qwen.md)     |     ✅     |   ✅   |   ✅   |     ✅     |    ✅     |      ✅      |
| [DeepSeek](./llm/docs/predict/deepseek.md) |     ✅     |   ✅   |   ✅   |    🚧     |    ✅     |     🚧      |
|   [Qwen-Moe](./llm/docs/predict/qwen.md)   |     ✅     |   ✅   |   ✅   |    🚧     |    🚧    |     🚧      |
|  [Mixtral](./llm/docs/predict/mixtral.md)  |     ✅     |   ✅   |   ✅   |    🚧     |    🚧    |     🚧      |
|                  ChatGLM                   |     ✅     |   ✅   |   ✅   |    🚧     |    🚧    |     🚧      |
|                   Bloom                    |     ✅     |   ✅   |   ✅   |    🚧     |    🚧    |     🚧      |
|                  BaiChuan                  |     ✅     |   ✅   |   ✅   |     ✅     |    ✅     |     🚧      |

## Installation

### Requirements

* python >= 3.8
* paddlepaddle >= 3.0.0rc1

If you have not yet installed PaddlePaddle, please refer to the [PaddlePaddle website](https://www.paddlepaddle.org.cn/) for installation instructions.

### Install via pip

```shell
pip install --upgrade paddlenlp==3.0.0b4
```

Alternatively, install the latest code from the develop branch:

```shell
pip install --pre --upgrade paddlenlp -f https://www.paddlepaddle.org.cn/whl/paddlenlp.html
```

For more detailed tutorials on installing PaddlePaddle and PaddleNLP, see [Installation](./docs/get_started/installation.rst).

------------------------------------------------------------------------------------------

## Quick Start

### LLM Text Generation

PaddleNLP provides a convenient, easy-to-use Auto API that quickly loads models and tokenizers. Here is an example of text generation with the `Qwen/Qwen2-0.5B` model:

```python
>>> from paddlenlp.transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
>>> model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B", dtype="float16")
>>> input_features = tokenizer("你好！请自我介绍一下。", return_tensors="pd")
>>> outputs = model.generate(**input_features, max_length=128)
>>> print(tokenizer.batch_decode(outputs[0], skip_special_tokens=True))
['我是一个AI语言模型，我可以回答各种问题，包括但不限于：天气、新闻、历史、文化、科学、教育、娱乐等。请问您有什么需要了解的吗？']
```

### LLM Pretraining

```shell
git clone https://github.com/PaddlePaddle/PaddleNLP.git && cd PaddleNLP # skip if PaddleNLP is already cloned or downloaded
mkdir -p llm/data && cd llm/data
wget https://bj.bcebos.com/paddlenlp/models/transformers/llama/data/llama_openwebtext_100k.bin
wget https://bj.bcebos.com/paddlenlp/models/transformers/llama/data/llama_openwebtext_100k.idx
cd .. # change folder to PaddleNLP/llm
# to use use_fused_rms_norm=true, first install fused_ln from slm/model_zoo/gpt-3/external_ops
python -u run_pretrain.py ./config/qwen/pretrain_argument_0p5b.json
```

### LLM SFT Fine-tuning

```shell
git clone https://github.com/PaddlePaddle/PaddleNLP.git && cd PaddleNLP # skip if PaddleNLP is already cloned or downloaded
mkdir -p llm/data && cd llm/data
wget https://bj.bcebos.com/paddlenlp/datasets/examples/AdvertiseGen.tar.gz && tar -zxvf AdvertiseGen.tar.gz
cd .. # change folder to PaddleNLP/llm
python -u run_finetune.py ./config/qwen/sft_argument_0p5b.json
```

For the full LLM workflow, see the [PaddlePaddle LLM toolkit](./llm) introduction.
We also provide a quick fine-tuning path that requires no source checkout:

```python
from paddlenlp.trl import SFTConfig, SFTTrainer
from datasets import load_dataset

dataset = load_dataset("ZHUI/alpaca_demo", split="train")

training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", device="gpu")
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B-Instruct",
    train_dataset=dataset,
)
trainer.train()
```

More PaddleNLP resources:

* [Model zoo](./slm/model_zoo): end-to-end usage of high-quality pretrained models.
* [Examples](./slm/examples): learn how to solve a wide range of NLP problems with PaddleNLP, covering core techniques, system applications, and extended applications.
* [Interactive tutorials](https://aistudio.baidu.com/aistudio/personalcenter/thirdview/574995): learn PaddleNLP quickly on AI Studio, the 🆓 free-compute platform.

------------------------------------------------------------------------------------------

## Community

* Scan the QR code with WeChat and fill in the questionnaire to join the discussion group and connect with community developers and the official team.

<div align="center">
    <img src="https://github.com/user-attachments/assets/3a58cc9f-69c7-4ccb-b6f5-73e966b8051a" width="150" height="150" />
</div>

## Citation

If PaddleNLP helps your research, please cite it:

```bibtex
@misc{=paddlenlp,
    title={PaddleNLP: An Easy-to-use and High Performance NLP Library},
    author={PaddleNLP Contributors},
    howpublished = {\url{https://github.com/PaddlePaddle/PaddleNLP}},
    year={2021}
}
```

## Acknowledge

We have borrowed the excellent design for pretrained model usage from Hugging Face's [Transformers](https://github.com/huggingface/transformers) 🤗, and we would like to thank the Hugging Face authors and their open-source community.

## License

PaddleNLP is released under the [Apache-2.0 License](./LICENSE).