# pkuseg: A Multi-Domain Chinese Word Segmentation Toolkit [**(English Version)**](readme/readme_english.md)

pkuseg is a toolkit based on the paper [[Luo et al., 2019](#citation)]. It is easy to use, supports fine-grained domain-specific segmentation, and measurably improves segmentation accuracy.



## Contents

* [Highlights](#highlights)
* [Installation](#installation)
* [Performance Comparison](#performance-comparison)
* [Usage](#usage)
* [Citation](#citation)
* [Authors](#authors)
* [FAQ](#faq)



## Highlights

pkuseg has the following features:

1. Multi-domain segmentation. Unlike previous general-purpose Chinese segmentation tools, this toolkit also provides pretrained models tailored to data from different domains, and users can freely choose a model according to the domain of the text to be segmented. We currently ship pretrained segmentation models for the news, web, medicine, and tourism domains, as well as a mixed-domain model. If you know the domain of your text, load the corresponding model; if you cannot determine the domain, we recommend the general-purpose model trained on mixed-domain data. Sample segmentations for each domain can be found in [**example.txt**](https://github.com/lancopku/pkuseg-python/blob/master/example.txt).
2. Higher segmentation accuracy. Compared with other segmentation toolkits, pkuseg achieves higher accuracy when the same training and test data are used.
3. Support for user-trained models. Users can train models on their own annotated data.
4. Support for part-of-speech (POS) tagging.


## Installation

- Currently **Python 3 only**.
- **For the best accuracy and speed, we strongly recommend updating to the latest version via pip install.**

1. Install from PyPI (model files included):
	```
	pip3 install pkuseg
	```
   Then reference it with `import pkuseg`.
   **Updating to the latest version is recommended** for the best out-of-the-box experience:
	```
	pip3 install -U pkuseg
	```
2. If downloads from the official PyPI index are slow, use a mirror, for example:
   First install:
	```
	pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple pkuseg
	```
   Update:
	```
	pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple -U pkuseg
	```
3. To install from a GitHub download instead of pip, build locally with:
	```
	python setup.py build_ext -i
	```
   The code on GitHub does not include the pretrained models, so you must download or train a model yourself; pretrained models are available under [release](https://github.com/lancopku/pkuseg-python/releases). Set `model_name` to the model file when using them.

Note: **installation methods 1 and 2 currently support Python 3 on 64-bit Linux (Ubuntu), macOS, and Windows only.** On other systems, please use method 3 to compile and install locally.


## Performance Comparison

We compare pkuseg against representative Chinese segmentation toolkits such as jieba and THULAC; see the [experimental setup](readme/environment.md) for detailed settings.



#### Fine-grained domain training and test results

Results on different datasets:

| MSRA   | Precision | Recall |   F-score |
| :----- | --------: | -----: | --------: |
| jieba  |     87.01 |  89.88 |     88.42 |
| THULAC |     95.60 |  95.91 |     95.71 |
| pkuseg |     96.94 |  96.81 | **96.88** |


| WEIBO  | Precision | Recall |   F-score |
| :----- | --------: | -----: | --------: |
| jieba  |     87.79 |  87.54 |     87.66 |
| THULAC |     93.40 |  92.40 |     92.87 |
| pkuseg |     93.78 |  94.65 | **94.21** |




#### Default models tested across domains

Since most users first try a segmentation toolkit with its bundled model, we also compare the "out-of-the-box" performance of each toolkit's default model across domains. Note that this comparison only illustrates default behavior and is not necessarily fair.

| Default | MSRA  | CTB8  | PKU   | WEIBO | All Average |
| ------- | :---: | :---: | :---: | :---: | :---------: |
| jieba   | 81.45 | 79.58 | 81.83 | 83.56 | 81.61       |
| THULAC  | 85.55 | 87.84 | 92.29 | 86.65 | 88.08       |
| pkuseg  | 87.29 | 91.77 | 92.68 | 93.43 | **91.29**   |

`All Average` is the mean F-score over all test sets.

For more detailed comparisons, see [comparison with existing toolkits](readme/comparison.md).

## Usage

#### Code examples

The following examples are written for the Python interactive shell.

Example 1: segment with the default configuration (**recommended when you cannot determine the domain of your text**)
```python
import pkuseg

seg = pkuseg.pkuseg()           # load the model with the default configuration
text = seg.cut('我爱北京天安门')  # segment the text
print(text)
```

Example 2: fine-grained domain segmentation (**recommended when you know the domain of your text**)
```python
import pkuseg

seg = pkuseg.pkuseg(model_name='medicine')  # the corresponding domain model is downloaded automatically
text = seg.cut('我爱北京天安门')              # segment the text
print(text)
```
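A user dictionary can also be supplied alongside a model. The expected file format is one word per line, optionally followed by a tab and a POS tag. The sketch below only writes and parses such a file to illustrate the format; the path and entries are hypothetical, and the actual pkuseg call (which requires the package and models to be installed) is shown as a comment:

```python
import os
import tempfile

# Hypothetical user dictionary: one word per line, optionally "word<TAB>POS".
entries = ['北京大学\tn', '天安门']
path = os.path.join(tempfile.mkdtemp(), 'my_dict.txt')
with open(path, 'w', encoding='utf-8') as f:
    f.write('\n'.join(entries) + '\n')

# With pkuseg installed, the dictionary would be passed as:
# seg = pkuseg.pkuseg(user_dict=path)

# Parse the file back to show how entries are structured.
with open(path, encoding='utf-8') as f:
    for line in f:
        word, _, tag = line.rstrip('\n').partition('\t')
        print(word, tag or '(no tag)')
```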
Example 3: segment and POS-tag at the same time; see [tags.txt](https://github.com/lancopku/pkuseg-python/blob/master/tags.txt) for the meaning of each POS tag
```python
import pkuseg

seg = pkuseg.pkuseg(postag=True)  # enable POS tagging
text = seg.cut('我爱北京天安门')    # segment and POS-tag the text
print(text)
```


Example 4: segment a file
```python
import pkuseg

# segment input.txt and write the output to output.txt,
# using 20 worker processes
pkuseg.test('input.txt', 'output.txt', nthread=20)
```

For further examples, see the [detailed code examples](readme/interface.md).



#### Parameters

Model configuration
```
pkuseg.pkuseg(model_name="default", user_dict="default", postag=False)
	model_name	Model path.
			"default" (default): use our pretrained mixed-domain model (pip installs only).
			"news": use the news-domain model.
			"web": use the web-domain model.
			"medicine": use the medicine-domain model.
			"tourism": use the tourism-domain model.
			model_path: load a model from a user-specified path.
	user_dict	User dictionary.
			"default" (default): use the dictionary we provide.
			None: do not use a dictionary.
			dict_path: use a user-defined dictionary at this path in addition to the default one. The format is one word per line; if POS tagging is enabled and the word's tag is known, write the word and its tag on one line separated by a tab character.
	postag		Whether to perform POS tagging.
			False (default): segment only, without POS tagging.
			True: POS-tag while segmenting.
```

Segmenting a file
```
pkuseg.test(readFile, outputFile, model_name="default", user_dict="default", postag=False, nthread=10)
	readFile	Input file path.
	outputFile	Output file path.
	model_name	Model path; same as in pkuseg.pkuseg.
	user_dict	User dictionary; same as in pkuseg.pkuseg.
	postag		Whether to enable POS tagging; same as in pkuseg.pkuseg.
	nthread		Number of processes to use.
```

Model training
```
pkuseg.train(trainFile, testFile, savedir, train_iter=20, init_model=None)
	trainFile	Training file path.
	testFile	Test file path.
	savedir		Directory in which to save the trained model.
	train_iter	Number of training epochs.
	init_model	Initial model. The default None uses the default initialization; you can instead supply the path of a model to initialize from, e.g. init_model='./models/'.
```



#### Multiprocess segmentation

When running the code examples above from a script file, any code that uses multiprocessing must be guarded by `if __name__ == '__main__'`; see [multiprocess segmentation](readme/multiprocess.md) for details.
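The required script structure can be sketched as follows. This is a minimal illustration of the `if __name__ == '__main__'` guard using only the standard `multiprocessing` module, with a whitespace-splitting stand-in for the segmenter (in a real script the worker would call `seg.cut` on a pkuseg model); the point here is the script shape, not pkuseg itself:

```python
from multiprocessing import Pool

def segment_line(line):
    # Stand-in worker: splits on whitespace. A real script would call
    # seg.cut(line) on a loaded pkuseg model instead.
    return line.split()

if __name__ == '__main__':
    # Everything that spawns processes must live under this guard.
    # Without it, each child process re-imports the module, re-runs the
    # top-level code, and tries to spawn children of its own, which
    # raises RuntimeError/BrokenPipeError on platforms that use spawn.
    lines = ['我 爱 北京', '天安门 广场']
    with Pool(2) as pool:
        results = pool.map(segment_line, lines)
    print(results)  # → [['我', '爱', '北京'], ['天安门', '广场']]
```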
## Pretrained models

Users who installed via pip only need to set the `model_name` field to the desired domain; the corresponding fine-grained domain model is downloaded automatically.

Users who downloaded the code from GitHub must download a pretrained model themselves and set `model_name` to its path. Pretrained models are available under [release](https://github.com/lancopku/pkuseg-python/releases). The available models are:

- **news**: trained on MSRA (newswire corpus).

- **web**: trained on Weibo (web text corpus).

- **medicine**: trained on medicine-domain data.

- **tourism**: trained on tourism-domain data.

- **mixed**: general-purpose model trained on mixed data. This is the model bundled with the pip package.

Using domain adaptation over unlabeled Wikipedia data, we have also automatically built several additional fine-grained domain models and an improved general-purpose model. These are currently available only from the release page:

- **art**: trained on the art and culture domain.

- **entertainment**: trained on the entertainment and sports domain.

- **science**: trained on the science domain.

- **default_v2**: an improved general-purpose model obtained via domain adaptation; larger than the default model, but with better generalization.



Users are welcome to share their own trained domain-specific models.



## Version history

See the [version history](readme/history.md).


## License
1. This code is released under the MIT license.
2. Comments and suggestions on the toolkit are welcome; please email jingjingxu@pku.edu.cn.



## Citation

This toolkit is primarily based on the following paper. If you use the toolkit, please cite:
* Ruixuan Luo, Jingjing Xu, Yi Zhang, Zhiyuan Zhang, Xuancheng Ren, Xu Sun. [PKUSEG: A Toolkit for Multi-Domain Chinese Word Segmentation](https://arxiv.org/abs/1906.11455). arXiv. 2019.

```
@article{pkuseg,
  author = {Luo, Ruixuan and Xu, Jingjing and Zhang, Yi and Zhang, Zhiyuan and Ren, Xuancheng and Sun, Xu},
  journal = {CoRR},
  title = {PKUSEG: A Toolkit for Multi-Domain Chinese Word Segmentation},
  url = {https://arxiv.org/abs/1906.11455},
  volume = {abs/1906.11455},
  year = 2019
}
```

## Other related papers

* Xu Sun, Houfeng Wang, Wenjie Li. Fast Online Training with Frequency-Adaptive Learning Rates for Chinese Word Segmentation and New Word Detection. ACL. 2012.
* Jingjing Xu and Xu Sun. Dependency-Based Gated Recursive Neural Network for Chinese Word Segmentation. ACL. 2016.
* Jingjing Xu and Xu Sun. Transfer Learning for Low-Resource Chinese Word Segmentation with a Novel Neural Network. NLPCC. 2017.
## FAQ


1. [Why release pkuseg?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#1-为什么要发布pkuseg)
2. [What techniques does pkuseg use?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#2-pkuseg使用了哪些技术)
3. [Multiprocess segmentation and training fail with RuntimeError and BrokenPipeError.](https://github.com/lancopku/pkuseg-python/wiki/FAQ#3-无法使用多进程分词和训练功能提示runtimeerror和brokenpipeerror)
4. [How were the fine-grained domain comparisons with other toolkits conducted?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#4-是如何跟其它工具包在细领域数据上进行比较的)
5. [How does it perform on black-box test sets?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#5-在黑盒测试集上进行比较的话效果如何)
6. [What if I do not know the domain of my corpus?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#6-如果我不了解待分词语料的所属领域呢)
7. [How should segmentation results on specific examples be interpreted?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#7-如何看待在一些特定样例上的分词结果)
8. [About running speed?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#8-关于运行速度问题)
9. [About multiprocessing speed?](https://github.com/lancopku/pkuseg-python/wiki/FAQ#9-关于多进程速度问题)


## Acknowledgements

We thank Professor 俞士汶 (Institute of Computational Linguistics, Peking University) and Dr. 邱立坤 for providing the training datasets!

## Authors

Ruixuan Luo (罗睿轩), Jingjing Xu (许晶晶), Xuancheng Ren (任宣丞), Yi Zhang (张艺), Zhiyuan Zhang (张之远), Bingzhen Wei (位冰镇), Xu Sun (孙栩)

[Language Computing and Machine Learning Group](http://lanco.pku.edu.cn/), Peking University