Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/benywon/ChineseBert
This is a Chinese BERT model specialized for question answering
chinese-nlp deep-learning natural-language-processing
- Host: GitHub
- URL: https://github.com/benywon/ChineseBert
- Owner: benywon
- Created: 2018-11-20T03:04:55.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2019-08-08T11:35:01.000Z (over 5 years ago)
- Last Synced: 2024-08-02T08:09:53.635Z (3 months ago)
- Topics: chinese-nlp, deep-learning, natural-language-processing
- Language: Python
- Size: 1.2 MB
- Stars: 27
- Watchers: 4
- Forks: 8
- Open Issues: 3
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- awesome-bert - benywon/ChineseBert
- awesome-transformer-nlp - benywon/ChineseBert - This is a Chinese BERT model specific for question answering. (Tasks / Question Answering (QA))
README
# ChineseBert
This is a Chinese BERT model specialized for question answering. We provide two models: a large model with 16 transformer layers and a hidden size of 1024, and a small model with 8 layers and a hidden size of 512. Our implementation differs from the original paper (https://arxiv.org/abs/1810.04805) in that we replace the position embeddings with an LSTM, which shows advantages when text lengths vary a lot. The code currently runs on Python 3 and PyTorch.
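The README does not show the module itself, but the idea of swapping fixed position embeddings for an LSTM can be sketched as follows (a minimal PyTorch illustration; the class name and the residual combination are assumptions, not the repo's actual code):

```python
import torch
import torch.nn as nn

class LSTMPositionalEncoding(nn.Module):
    """Hypothetical sketch: inject position information with a
    bidirectional LSTM instead of a fixed position-embedding table,
    so the encoding adapts when sequence lengths vary widely."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # The two bidirectional halves concatenate back to hidden_size.
        self.lstm = nn.LSTM(hidden_size, hidden_size // 2,
                            batch_first=True, bidirectional=True)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden_size)
        position_aware, _ = self.lstm(token_embeddings)
        # Residual combination keeps the original token content.
        return token_embeddings + position_aware

enc = LSTMPositionalEncoding(512)
out = enc(torch.randn(2, 64, 512))
print(out.shape)  # torch.Size([2, 64, 512])
```

Because the LSTM runs over whatever length it is given, no position table has to be sized in advance, which is the claimed advantage for variable-length text.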
-------------------------------------
## Stats
Data: 200M Chinese internet question-answering pairs.
Tokenizer: we use the [sentencepiece](https://github.com/google/sentencepiece) tokenizer with a vocabulary size of 35,000.
Training: both the large and small models were trained for 2M steps without signs of overfitting.
The large model takes 12 days per epoch on 8 NVLink-connected V100 GPUs; the small model takes 2 days on the same hardware.
------------------------------------------
## Usage
Feed the model a Chinese question-answer pair to obtain their combined representation.
You can refer to main.py for more detail.
The model has been tested with sequence lengths up to 1024.
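main.py holds the actual preprocessing; as a purely hypothetical sketch of how a question-answer pair might be packed into one BERT-style input under the 1024-length limit above (the function name and special-token ids are made up for illustration):

```python
def build_qa_input(question_ids, answer_ids, cls_id=1, sep_id=2, max_len=1024):
    """Concatenate question and answer token ids into a single
    BERT-style sequence: [CLS] question [SEP] answer [SEP].
    Segment ids distinguish the two parts; the result is truncated
    to the tested maximum length."""
    ids = [cls_id] + question_ids + [sep_id] + answer_ids + [sep_id]
    segments = [0] * (len(question_ids) + 2) + [1] * (len(answer_ids) + 1)
    return ids[:max_len], segments[:max_len]

ids, segs = build_qa_input([10, 11, 12], [20, 21])
print(ids)   # [1, 10, 11, 12, 2, 20, 21, 2]
print(segs)  # [0, 0, 0, 0, 0, 1, 1, 1]
```

The segment ids let the model tell question tokens from answer tokens when producing the combined representation.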
------------------------------------
As the torch model file is very large, you should download it from Google Drive by running
get_model.sh