{"id":13535211,"url":"https://github.com/jiangpinglei/BERT_ChineseWordSegment","last_synced_at":"2025-04-02T00:32:54.886Z","repository":{"id":155851063,"uuid":"166180422","full_name":"jiangpinglei/BERT_ChineseWordSegment","owner":"jiangpinglei","description":"A Chinese word segment model based on BERT, F1-Score 97%","archived":false,"fork":false,"pushed_at":"2019-05-28T07:47:31.000Z","size":2955,"stargazers_count":90,"open_issues_count":2,"forks_count":43,"subscribers_count":3,"default_branch":"master","last_synced_at":"2024-11-02T23:32:40.440Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jiangpinglei.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2019-01-17T07:25:22.000Z","updated_at":"2024-08-13T12:40:39.000Z","dependencies_parsed_at":"2024-01-14T02:37:12.376Z","dependency_job_id":"7cf49d87-63f6-4f24-912e-a5d2b236c10f","html_url":"https://github.com/jiangpinglei/BERT_ChineseWordSegment","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jiangpinglei%2FBERT_ChineseWordSegment","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jiangpinglei%2FBERT_ChineseWordSegment/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jiangpinglei%2FBERT_ChineseWordSegment/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jiangpinglei%2FBERT_ChineseWordSegment/manifests","owner_url":"https://rep
os.ecosyste.ms/api/v1/hosts/GitHub/owners/jiangpinglei","download_url":"https://codeload.github.com/jiangpinglei/BERT_ChineseWordSegment/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":246735356,"owners_count":20825221,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T08:00:51.338Z","updated_at":"2025-04-02T00:32:54.079Z","avatar_url":"https://github.com/jiangpinglei.png","language":"Python","readme":"# BERT_ChineseWordSegment\n\nA Chinese word segmentation model based on Google BERT.\n\nThe corpus is extracted from The People's Daily (Chinese: 人民日报, Renmin Ribao).\n\nFirst, clone https://github.com/google-research/bert.git\n\nSecond, copy the three scripts modeling.py, optimization.py, and tokenization.py into this project, so the structure is as follows:\n\n    BERT_ChineseWordSegment\n\n        |____ PEOPLEdata\n        |____ output\n        |____ modeling.py\n        |____ optimization.py\n        |____ tokenization.py\n        |____ run_cut.py\n        |____ evaluation.py\n\nThird, download the pre-trained Chinese BERT model [BERT-Base, Chinese](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)\n\nThen set the pre-trained model path and data path environment variables $BERT_CHINESE_DIR and $PEOPLEcut.\n\n## Run\n\n```\npython3 run_cut.py   --task_name=\"people\"   --do_train=True   --do_predict=True  --data_dir=$PEOPLEcut    --vocab_file=$BERT_CHINESE_DIR/vocab.txt   --bert_config_file=$BERT_CHINESE_DIR/bert_config.json   
--init_checkpoint=$BERT_CHINESE_DIR/bert_model.ckpt    --max_seq_length=128    --train_batch_size=32    --learning_rate=2e-5   --num_train_epochs=3.0    --output_dir=./output/result_cut/\n```\n\nTraining for 3 epochs takes about 28 minutes on a GPU.\n\nIt produces an evaluation output like this:\n\n```\nINFO:tensorflow:***** Eval results *****\nINFO:tensorflow:  count = 9925\nINFO:tensorflow:  precision_avg = 0.9794\nINFO:tensorflow:  recall_avg = 0.9780\nINFO:tensorflow:  f1_avg = 0.9783\nINFO:tensorflow:  error_avg = 0.0213\n```\n\nThe word segmentation results are written to ./output/result_cut/seg_result.txt.\n\nFor more details, see the code analysis (in Chinese): [简书: BERT系列（五）——中文分词实践...](https://www.jianshu.com/p/be0a951445f4)\n","funding_links":[],"categories":["BERT  NER  task:"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjiangpinglei%2FBERT_ChineseWordSegment","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjiangpinglei%2FBERT_ChineseWordSegment","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjiangpinglei%2FBERT_ChineseWordSegment/lists"}