{"id":13585142,"url":"https://github.com/fastnlp/fastNLP","last_synced_at":"2025-04-07T06:32:42.580Z","repository":{"id":37742913,"uuid":"124240220","full_name":"fastnlp/fastNLP","owner":"fastnlp","description":"fastNLP: A Modularized and Extensible NLP Framework. Currently still in incubation.","archived":false,"fork":false,"pushed_at":"2023-06-05T03:00:37.000Z","size":36844,"stargazers_count":3122,"open_issues_count":70,"forks_count":450,"subscribers_count":79,"default_branch":"master","last_synced_at":"2025-04-06T13:07:48.723Z","etag":null,"topics":["chinese-nlp","deep-learning","natural-language-processing","nlp-library","nlp-parsing","text-classification","text-processing"],"latest_commit_sha":null,"homepage":"https://gitee.com/fastnlp/fastNLP","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/fastnlp.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2018-03-07T13:30:20.000Z","updated_at":"2025-04-04T17:55:57.000Z","dependencies_parsed_at":"2022-07-14T04:10:29.262Z","dependency_job_id":"4b827e1b-bde1-4f52-96cb-29ef904e5aa7","html_url":"https://github.com/fastnlp/fastNLP","commit_stats":null,"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastnlp%2FfastNLP","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastnlp%2FfastNLP/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastnlp%2FfastNLP/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fastnlp%2FfastNLP/manif
ests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/fastnlp","download_url":"https://codeload.github.com/fastnlp/fastNLP/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247607556,"owners_count":20965942,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["chinese-nlp","deep-learning","natural-language-processing","nlp-library","nlp-parsing","text-classification","text-processing"],"created_at":"2024-08-01T15:04:45.736Z","updated_at":"2025-04-07T06:32:42.569Z","avatar_url":"https://github.com/fastnlp.png","language":"Python","readme":"# fastNLP\n\n\n[//]: # ([![Build Status]\u0026#40;https://travis-ci.org/fastnlp/fastNLP.svg?branch=master\u0026#41;]\u0026#40;https://travis-ci.org/fastnlp/fastNLP\u0026#41;)\n\n[//]: # ([![codecov]\u0026#40;https://codecov.io/gh/fastnlp/fastNLP/branch/master/graph/badge.svg\u0026#41;]\u0026#40;https://codecov.io/gh/fastnlp/fastNLP\u0026#41;)\n\n[//]: # ([![Pypi]\u0026#40;https://img.shields.io/pypi/v/fastNLP.svg\u0026#41;]\u0026#40;https://pypi.org/project/fastNLP\u0026#41;)\n\n[//]: # (![Hex.pm]\u0026#40;https://img.shields.io/hexpm/l/plug.svg\u0026#41;)\n\n[//]: # ([![Documentation Status]\u0026#40;https://readthedocs.org/projects/fastnlp/badge/?version=latest\u0026#41;]\u0026#40;http://fastnlp.readthedocs.io/?badge=latest\u0026#41;)\n\n\nfastNLP is a lightweight natural language processing (NLP) toolkit that aims to reduce the engineering boilerplate in user projects, such as data-processing loops, training loops, and multi-GPU execution.\n\nfastNLP has the following features:\n\n- Convenient. Data processing can avoid explicit loops via the apply functions and be sped up with multiple processes; the training loop is easy to customize.\n- Efficient. Enable fp16, multi-GPU training, ZeRO optimization, etc. without changing your code.\n- Compatible. fastNLP supports multiple deep learning frameworks as backends.\n\n\u003e :warning: 
**To support different deep learning frameworks, versions of fastNLP after 1.0.0 have a redesigned architecture and are therefore not fully compatible with earlier fastNLP releases;\n\u003e code based on earlier fastNLP versions needs some adjustment**: \n\n## fastNLP Documentation\n[Chinese documentation](http://www.fastnlp.top/docs/fastNLP/master/index.html)\n\n## Installation\nfastNLP can be installed with the following command:\n```shell\npip install fastNLP\u003e=1.0.0alpha\n```\nTo install an earlier version of fastNLP, specify the version number, for example:\n```shell\npip install fastNLP==0.7.1\n```\nIn addition, install the deep learning framework you intend to use as the backend.\n\n\u003cdetails\u003e\n\u003csummary\u003ePytorch\u003c/summary\u003e\nBelow is a text classification example using pytorch. It requires torch\u003e=1.6.0.\n\n```python\nfrom fastNLP.io import ChnSentiCorpLoader\nfrom functools import partial\nfrom fastNLP import cache_results\nfrom fastNLP.transformers.torch import BertTokenizer\n\n# The cache_results decorator caches the return value of prepare_data to caches/cache.pkl; on later runs,\n#  if that file still exists, the cache is loaded automatically instead of re-running the preprocessing code.\n@cache_results('caches/cache.pkl')\ndef prepare_data():\n    # Downloads the data automatically; per the documentation, the returned dataset contains the \"raw_chars\" and \"target\" fields\n    data_bundle = ChnSentiCorpLoader().load()\n    # Tokenize the data with the tokenizer\n    tokenizer = BertTokenizer.from_pretrained('hfl/chinese-bert-wwm')\n    tokenize = partial(tokenizer, max_length=256)  # cap the maximum sequence length\n    data_bundle.apply_field_more(tokenize, field_name='raw_chars', num_proc=4)  # adds \"input_ids\", \"attention_mask\", etc. as new fields in the dataset\n    data_bundle.apply_field(int, field_name='target', new_field_name='labels')  # apply int to each target and store the result in a new labels field\n    return data_bundle\ndata_bundle = prepare_data()\nprint(data_bundle.get_dataset('train')[:4])\n\n# initialize the model and optimizer\nfrom fastNLP.transformers.torch import BertForSequenceClassification\nfrom torch import optim\nmodel = BertForSequenceClassification.from_pretrained('hfl/chinese-bert-wwm')\noptimizer = optim.AdamW(model.parameters(), lr=2e-5)\n\n# prepare dataloaders\nfrom fastNLP import prepare_dataloader\ndls = prepare_dataloader(data_bundle, batch_size=32)\n\n# prepare training\nfrom fastNLP import Trainer, Accuracy, LoadBestModelCallback, TorchWarmupCallback, Event\ncallbacks = [\n    TorchWarmupCallback(warmup=0.1, 
schedule='linear'),   # adjust the learning rate during training.\n    LoadBestModelCallback()  # after training ends, load the best-performing model\n]\n# Hook custom operations into specific training events; different events expose different parameters, which are documented in the Trainer.on function\n@Trainer.on(Event.on_before_backward())\ndef print_loss(trainer, outputs):\n    if trainer.global_forward_batches % 10 == 0:  # print the loss every 10 batches.\n        print(outputs.loss.item())\n\ntrainer = Trainer(model=model, train_dataloader=dls['train'], optimizers=optimizer,\n                  device=0, evaluate_dataloaders=dls['dev'], metrics={'acc': Accuracy()},\n                  callbacks=callbacks, monitor='acc#acc', n_epochs=5,\n                  # Accuracy's update() function needs the pred and target parameters, which correspond to the fields below.\n                  evaluate_input_mapping={'labels': 'target'},  # during evaluation, rename the labels field fed from the dataloader to the model to target\n                  evaluate_output_mapping={'logits': 'pred'}  # during evaluation, rename logits in the model output to pred\n                  )\ntrainer.run()\n\n# evaluate on the test set\nfrom fastNLP import Evaluator\nevaluator = Evaluator(model=model, dataloaders=dls['test'], metrics={'acc': Accuracy()},\n                      # Accuracy's update() function needs the pred and target parameters, which correspond to the fields below.\n                      output_mapping={'logits': 'pred'},\n                      input_mapping={'labels': 'target'})\nevaluator.run()\n```\n\nSee the links below for more:\n### Quick Start\n\n- [0. Get started with fastNLP torch in 10 minutes](http://www.fastnlp.top/docs/fastNLP/master/tutorials/torch/fastnlp_torch_tutorial.html)\n\n### Detailed Tutorials\n\n- [1. Basic usage of Trainer and Evaluator](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_0.html)\n- [2. Basic usage of DataSet and Vocabulary](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_1.html)\n- [3. Basic usage of DataBundle and Tokenizer](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_2.html)\n- [4. Internals and basic usage of TorchDataloader](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_3.html)\n- [5. 
Predefined models in fastNLP](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_4.html)\n- [6. In-depth introduction to Trainer and Evaluator](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_4.html)\n- [7. Using fastNLP with paddle or jittor](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_5.html)\n- [8. SST-2 classification with Bert + fine-tuning](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_e1.html)\n- [9. SST-2 classification with Bert + prompt](http://www.fastnlp.top/docs/fastNLP/master/tutorials/basic/fastnlp_tutorial_e2.html)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003ePaddle\u003c/summary\u003e\nBelow is a text classification example using paddle. It requires paddle\u003e=2.2.0 and paddlenlp\u003e=2.3.3.\n\n```python\nfrom fastNLP.io import ChnSentiCorpLoader\nfrom functools import partial\n\n# Downloads the data automatically; per the documentation, the returned dataset contains the \"raw_chars\" and \"target\" fields\ndata_bundle = ChnSentiCorpLoader().load()\n\n# Tokenize the data with the tokenizer\nfrom paddlenlp.transformers import BertTokenizer\ntokenizer = BertTokenizer.from_pretrained('hfl/chinese-bert-wwm')\ntokenize = partial(tokenizer, max_length=256)  # cap the maximum sequence length\ndata_bundle.apply_field_more(tokenize, field_name='raw_chars', num_proc=4)  # adds \"input_ids\", \"attention_mask\", etc. as new fields in the dataset\ndata_bundle.apply_field(int, field_name='target', new_field_name='labels')  # apply int to each target and store the result in a new labels field\nprint(data_bundle.get_dataset('train')[:4])\n\n# initialize the model \nfrom paddlenlp.transformers import BertForSequenceClassification, LinearDecayWithWarmup\nfrom paddle import optimizer, nn\nclass SeqClsModel(nn.Layer):\n    def __init__(self, model_checkpoint, num_labels):\n        super(SeqClsModel, self).__init__()\n        self.num_labels = num_labels\n        self.bert = BertForSequenceClassification.from_pretrained(model_checkpoint)\n\n    def forward(self, input_ids, token_type_ids=None, position_ids=None, attention_mask=None):\n        logits = self.bert(input_ids, 
token_type_ids, position_ids, attention_mask)\n        return logits\n\n    def train_step(self, input_ids, labels, token_type_ids=None, position_ids=None, attention_mask=None):\n        logits = self(input_ids, token_type_ids, position_ids, attention_mask)\n        loss_fct = nn.CrossEntropyLoss()\n        loss = loss_fct(logits.reshape((-1, self.num_labels)), labels.reshape((-1, )))\n        return {\n            \"logits\": logits,\n            \"loss\": loss,\n        }\n    \n    def evaluate_step(self, input_ids, token_type_ids=None, position_ids=None, attention_mask=None):\n        logits = self(input_ids, token_type_ids, position_ids, attention_mask)\n        return {\n            \"logits\": logits,\n        }\n\nmodel = SeqClsModel('hfl/chinese-bert-wwm', num_labels=2)\n\n# prepare dataloaders\nfrom fastNLP import prepare_dataloader\ndls = prepare_dataloader(data_bundle, batch_size=16)\n\n# adjust the learning rate during training.\nscheduler = LinearDecayWithWarmup(2e-5, total_steps=20 * len(dls['train']), warmup=0.1)\noptimizer = optimizer.AdamW(parameters=model.parameters(), learning_rate=scheduler)\n\n# prepare training\nfrom fastNLP import Trainer, Accuracy, LoadBestModelCallback, Event\ncallbacks = [\n    LoadBestModelCallback()  # after training ends, load the best-performing model\n]\n# Hook custom operations into specific training events; different events expose different parameters, which are documented in the Trainer.on function\n@Trainer.on(Event.on_before_backward())\ndef print_loss(trainer, outputs):\n    if trainer.global_forward_batches % 10 == 0:  # print the loss every 10 batches.\n        print(outputs[\"loss\"].item())\n\ntrainer = Trainer(model=model, train_dataloader=dls['train'], optimizers=optimizer,\n                  device=0, evaluate_dataloaders=dls['dev'], metrics={'acc': Accuracy()},\n                  callbacks=callbacks, monitor='acc#acc',\n                  # Accuracy's update() function needs the pred and target parameters, which correspond to the fields below.\n                  evaluate_output_mapping={'logits': 'pred'},\n                  evaluate_input_mapping={'labels': 'target'}\n                  )\ntrainer.run()\n\n# evaluate on the test set\nfrom 
fastNLP import Evaluator\nevaluator = Evaluator(model=model, dataloaders=dls['test'], metrics={'acc': Accuracy()},\n                      # Accuracy's update() function needs the pred and target parameters, which correspond to the fields below.\n                      output_mapping={'logits': 'pred'},\n                      input_mapping={'labels': 'target'})\nevaluator.run()\n```\n\nSee the links below for more:\n### Quick Start\n\n- [0. Get started with fastNLP paddle in 10 minutes](http://www.fastnlp.top/docs/fastNLP/master/tutorials/torch/fastnlp_torch_tutorial.html)\n\n### Detailed Tutorials\n\n- [1. Chinese sentiment analysis with paddlenlp and fastNLP](http://www.fastnlp.top/docs/fastNLP/master/tutorials/paddle/fastnlp_tutorial_paddle_e1.html)\n- [2. Training a Chinese reading comprehension task with paddlenlp and fastNLP](http://www.fastnlp.top/docs/fastNLP/master/tutorials/paddle/fastnlp_tutorial_paddle_e2.html)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eoneflow\u003c/summary\u003e\n\u003c/details\u003e\n\n\n\n\u003cdetails\u003e\n\u003csummary\u003ejittor\u003c/summary\u003e\n\u003c/details\u003e\n\n\n## Project Structure\n\nThe project structure of fastNLP is as follows:\n\n\u003ctable\u003e\n\u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003e fastNLP \u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e an open-source natural language processing library \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003e fastNLP.core \u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e implements the core functionality, including data-processing components, the trainer, the evaluator, etc. \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003e fastNLP.models \u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e implements several complete neural network models \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003e fastNLP.modules \u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e implements many components for building neural network models \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003e fastNLP.embeddings \u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e implements conversion of index sequences into vector sequences, including loading pretrained embeddings \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n    \u003ctd\u003e\u003cb\u003e fastNLP.io \u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e 
implements I/O functionality, including data loading and preprocessing, model saving and loading, and automatic download of data and models \u003c/td\u003e\n\u003c/tr\u003e\n\u003c/table\u003e\n\n\u003chr\u003e\n\n","funding_links":[],"categories":["Python","文本数据和NLP","Natural Language Processing","Chinese NLP Toolkits 中文NLP工具"],"sub_categories":["General Purpose NLP","Toolkits 综合NLP工具包"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffastnlp%2FfastNLP","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffastnlp%2FfastNLP","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffastnlp%2FfastNLP/lists"}