{"id":13429461,"url":"https://github.com/huawei-noah/Pretrained-Language-Model","last_synced_at":"2025-03-16T03:31:41.029Z","repository":{"id":37663947,"uuid":"225393289","full_name":"huawei-noah/Pretrained-Language-Model","owner":"huawei-noah","description":"Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.","archived":false,"fork":false,"pushed_at":"2024-01-22T01:11:22.000Z","size":30403,"stargazers_count":3066,"open_issues_count":108,"forks_count":630,"subscribers_count":57,"default_branch":"master","last_synced_at":"2025-03-09T06:27:07.082Z","etag":null,"topics":["knowledge-distillation","large-scale-distributed","model-compression","pretrained-models","quantization"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/huawei-noah.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-12-02T14:26:04.000Z","updated_at":"2025-03-06T22:54:08.000Z","dependencies_parsed_at":"2023-02-19T12:01:12.650Z","dependency_job_id":"5d446a4d-66bd-4335-a0f9-feb9a90f01af","html_url":"https://github.com/huawei-noah/Pretrained-Language-Model","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huawei-noah%2FPretrained-Language-Model","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huawei-noah%2FPretrained-Language-Model/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huawei-noah%2FPretrained-Language-Model/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huawei-noah%2FPretrained-Language-Model/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/huawei-noah","download_url":"https://codeload.github.com/huawei-noah/Pretrained-Language-Model/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243822312,"owners_count":20353496,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["knowledge-distillation","large-scale-distributed","model-compression","pretrained-models","quantization"],"created_at":"2024-07-31T02:00:39.924Z","updated_at":"2025-03-16T03:31:41.017Z","avatar_url":"https://github.com/huawei-noah.png","language":"Python","readme":"# Pretrained Language Model\n\nThis repository provides the latest pretrained language models and its related optimization techniques developed by Huawei Noah's Ark Lab.\n\n## Directory structure\n* [PanGu-α](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PanGu-α) is a Large-scale autoregressive pretrained Chinese language model with up 
* [PanGu-α](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PanGu-α) is a large-scale autoregressive pretrained Chinese language model with up to 200B parameters. The models are developed under [MindSpore](https://www.mindspore.cn/en) and trained on a cluster of [Ascend](https://e.huawei.com/en/products/servers/ascend) 910 AI processors.
* [NEZHA-TensorFlow](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-TensorFlow) is a pretrained Chinese language model, developed under TensorFlow, that achieves state-of-the-art performance on several Chinese NLP tasks.
* [NEZHA-PyTorch](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch) is the PyTorch version of NEZHA.
* [NEZHA-Gen-TensorFlow](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-Gen-TensorFlow) provides two GPT models: Yuefu (乐府), a Chinese classical poetry generation model, and a general-purpose Chinese GPT model.
* [TinyBERT](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT) is a compressed BERT model that is 7.5x smaller and 9.4x faster at inference; a sketch of its layer-wise distillation objective appears after this list.
* [TinyBERT-MindSpore](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT-MindSpore) is a MindSpore version of TinyBERT.
* [DynaBERT](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/DynaBERT) is a dynamic BERT model with adaptive width and depth.
* [BBPE](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/BBPE) provides a byte-level vocabulary building tool and its corresponding tokenizer.
* [PMLM](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PMLM) is a probabilistically masked language model. Trained without the complex two-stream self-attention, PMLM can be treated as a simple approximation of XLNet.
* [TernaryBERT](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TernaryBERT) is a weight ternarization method for BERT models, developed under PyTorch; see the ternarization sketch after this list.
* [TernaryBERT-MindSpore](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TernaryBERT-MindSpore) is the MindSpore version of TernaryBERT.
* [HyperText](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/HyperText) is an efficient text classification model based on hyperbolic geometry.
* [BinaryBERT](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/BinaryBERT) is a weight binarization method for BERT models that uses ternary weight splitting, developed under PyTorch.
* [AutoTinyBERT](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/AutoTinyBERT) provides a model zoo that can meet different latency requirements.
* [PanGu-Bot](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PanGu-Bot) is a Chinese pretrained open-domain dialog model built on the GPU implementation of [PanGu-α](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/PanGu-α).
* [CeMAT](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/CeMAT) is a universal sequence-to-sequence multilingual pretrained language model for both autoregressive and non-autoregressive neural machine translation tasks.
* [Noah_WuKong](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/Noah_WuKong) is a large-scale Chinese vision-language dataset together with a group of benchmark models trained on it.
* [Noah_WuKong-MindSpore](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/Noah_Wukong-MindSpore) is a MindSpore version of Noah_WuKong.
* [CAME](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/CAME) is a Confidence-guided Adaptive Memory Efficient Optimizer.
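
As referenced in the TinyBERT entry above, the following is a minimal sketch of transformer-layer distillation in the spirit of TinyBERT: the student is trained to mimic the teacher's attention maps and (linearly projected) hidden states with MSE losses. The toy shapes, the `proj` module, and the plain sum of the two terms are illustrative assumptions; the actual TinyBERT recipe also distills embeddings and prediction logits across a two-stage (general plus task-specific) procedure, so see the TinyBERT subdirectory for the real implementation.

```python
import torch
import torch.nn.functional as F

def layer_distill_loss(att_s, att_t, hid_s, hid_t, proj):
    """Sketch of per-layer distillation: MSE on attention maps plus MSE on
    hidden states, with the student projected up to the teacher width."""
    att_loss = F.mse_loss(att_s, att_t)           # match attention distributions
    hid_loss = F.mse_loss(proj(hid_s), hid_t)     # match hidden states after projection
    return att_loss + hid_loss                    # assumed unweighted sum (illustrative)

# Toy shapes (assumed): batch=2, heads=12, seq=16, student width 312, teacher width 768.
att_s = torch.rand(2, 12, 16, 16)
att_t = torch.rand(2, 12, 16, 16)
hid_s = torch.rand(2, 16, 312)
hid_t = torch.rand(2, 16, 768)
proj = torch.nn.Linear(312, 768)                  # learned jointly with the student

loss = layer_distill_loss(att_s, att_t, hid_s, hid_t, proj)
loss.backward()                                   # gradients flow into proj (and the student, in practice)
```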
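
Likewise, for the TernaryBERT entry: TernaryBERT's actual training combines loss-aware ternarization with knowledge distillation, but the core quantization step builds on approximation-based ternarization. Below is a rough sketch assuming a TWN-style ternarizer, where the 0.7 threshold heuristic comes from the ternary weight networks literature; it is not the repository's implementation.

```python
import torch

def ternarize(w: torch.Tensor) -> torch.Tensor:
    """Map full-precision weights to {-alpha, 0, +alpha} (TWN-style sketch)."""
    # Threshold heuristic: keep weights whose magnitude exceeds 0.7 * mean(|w|).
    delta = 0.7 * w.abs().mean()
    mask = (w.abs() > delta).float()              # 1 where the weight is kept
    # Scale minimizing the L2 error: mean magnitude of the kept weights.
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return alpha * torch.sign(w) * mask           # ternary values only

w = torch.randn(768, 768)                         # e.g. one attention projection matrix
w_t = ternarize(w)
print(torch.unique(w_t))                          # three distinct values: -alpha, 0, +alpha
```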