Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/circlePi/BERT_Chinese_Text_Class_By_pytorch
A PyTorch implementation of Chinese text classification based on BERT_Pretrained_Model
- Host: GitHub
- URL: https://github.com/circlePi/BERT_Chinese_Text_Class_By_pytorch
- Owner: circlePi
- Created: 2019-01-09T12:14:01.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2019-01-11T02:31:42.000Z (almost 6 years ago)
- Last Synced: 2024-08-02T08:09:55.527Z (4 months ago)
- Language: Python
- Homepage:
- Size: 107 KB
- Stars: 8
- Watchers: 2
- Forks: 0
- Open Issues: 2
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-bert - circlePi/BERT_Chinese_Text_Class_By_pytorch
README
# BERT_Chinese_Text_Class_By_pytorch
An implementation of Chinese text classification based on BERT_Pretrained_Model in the PyTorch framework.

## How to start
- Download the pretrained Chinese BERT model released by Google and place it in the model directory
- Run `python convert_tf_checkpoint_to_pytorch.py` to convert the TensorFlow checkpoint into PyTorch format (a conversion sketch follows this list)
- Prepare your raw Chinese data; you can modify `preprocessing.data_processor` to adapt it to your own data
- Start training with `python run_bert_class`
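
As a reference for the conversion step, here is a minimal sketch of what such a script typically does. It is written against the Hugging Face `transformers` helpers (`BertForPreTraining`, `load_tf_weights_in_bert`) rather than this repository's own `convert_tf_checkpoint_to_pytorch.py`, and all paths are assumptions:

```python
import torch
from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert

# Assumed paths; pass the checkpoint *prefix* ("bert_model.ckpt"),
# not "bert_model.ckpt.index" (see the first tip below).
tf_checkpoint_path = "model/chinese_L-12_H-768_A-12/bert_model.ckpt"
bert_config_file = "model/chinese_L-12_H-768_A-12/bert_config.json"
pytorch_dump_path = "model/pytorch_model.bin"

config = BertConfig.from_json_file(bert_config_file)
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, tf_checkpoint_path)  # requires TensorFlow installed
torch.save(model.state_dict(), pytorch_dump_path)
```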
## Tips
- When converting the TensorFlow checkpoint to PyTorch, be sure to choose "bert_model.ckpt" (the checkpoint prefix), not "bert_model.ckpt.index", as the input file. Otherwise the model will learn nothing and give almost the same random outputs for any input, which means the real checkpoint was never actually loaded
- When using multiple GPUs, non-tensor calculations such as accuracy and f1_score are not supported inside a DataParallel instance; compute them on the gathered outputs instead (see the evaluation sketch after this list)
- As recommended by Jacob Devlin et al. in the BERT paper (https://arxiv.org/pdf/1810.04805.pdf), the fine-tuning hyperparameters should be set as follows: **batch_size**: 16 or 32, **learning_rate**: 5e-5, 3e-5, or 2e-5, **num_train_epochs**: 3 or 4
- The pretrained model limits each input to 512 tokens, the maximum position-embedding length. Data flows into the model as: raw data -> WordPieces -> model. Since the WordPiece sequence is generally longer than the raw text, a safe maximum raw-text length is roughly 128-256 (see the tokenization sketch after this list)
- In our tests, fine-tuning all layers gave much better results than fine-tuning only the last classifier layer; the latter is effectively a feature-based approach (see the fine-tuning sketch after this list)
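
A minimal evaluation sketch for the multi-GPU tip: only tensor operations run inside the `DataParallel` forward pass, and accuracy/f1_score are computed afterwards on the gathered predictions. The model forward signature and the batch layout of `eval_loader` are assumptions in the style of `BertForSequenceClassification`, not this repository's exact code:

```python
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

def evaluate(model, eval_loader):
    """Multi-GPU evaluation; metrics stay outside the DataParallel forward.

    Assumes `model` returns logits and `eval_loader` yields
    (input_ids, input_mask, segment_ids, label_ids) batches.
    """
    model = nn.DataParallel(model).cuda()
    model.eval()

    all_preds, all_labels = [], []
    with torch.no_grad():
        for input_ids, input_mask, segment_ids, label_ids in eval_loader:
            # Only tensor ops run inside the replicated forward pass.
            logits = model(input_ids.cuda(), segment_ids.cuda(), input_mask.cuda())
            all_preds.append(logits.argmax(dim=-1).cpu())
            all_labels.append(label_ids)

    # Non-tensor metrics are computed once, on a single device.
    preds = torch.cat(all_preds)
    labels = torch.cat(all_labels)
    accuracy = (preds == labels).float().mean().item()
    macro_f1 = f1_score(labels.numpy(), preds.numpy(), average="macro")
    return accuracy, macro_f1
```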
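
For the length limit, a tokenization sketch showing why the raw-text budget should stay well under 512: the WordPiece output is longer than the raw text, and two positions are reserved for [CLS] and [SEP]. The `bert-base-chinese` tokenizer and the 256 budget are assumptions for illustration:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
max_seq_length = 256  # safe budget; the hard limit from position embeddings is 512

def to_input_ids(text):
    tokens = tokenizer.tokenize(text)  # raw text -> WordPieces (usually longer)
    # Reserve two positions for the [CLS] and [SEP] special tokens.
    tokens = tokens[: max_seq_length - 2]
    return tokenizer.convert_tokens_to_ids(["[CLS]"] + tokens + ["[SEP]"])

input_ids = to_input_ids("这是一条用于中文文本分类的示例句子。")
```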
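
Finally, a sketch of the two regimes from the last tip: full fine-tuning versus the feature-based variant that freezes the BERT encoder and trains only the classifier head. `BertForSequenceClassification`, the label count, and the plain Adam optimizer are illustrative assumptions, not this repository's training code:

```python
from torch.optim import Adam
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=10)

feature_based = False  # set True to train only the classifier head

if feature_based:
    # Feature-based: freeze the whole BERT encoder; only the classifier is updated.
    for param in model.bert.parameters():
        param.requires_grad = False

# Full fine-tuning (feature_based = False) updates every parameter and,
# as noted in the tip above, usually works much better.
optimizer = Adam(
    [p for p in model.parameters() if p.requires_grad],
    lr=2e-5,  # within the 2e-5 / 3e-5 / 5e-5 range recommended in the paper
)
```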