Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/NoviScl/BERT-RACE
- Host: GitHub
- URL: https://github.com/NoviScl/BERT-RACE
- Owner: NoviScl
- Created: 2019-01-23T13:29:33.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2022-10-22T20:19:09.000Z (about 2 years ago)
- Last Synced: 2024-08-02T08:09:52.610Z (4 months ago)
- Language: Python
- Size: 104 KB
- Stars: 79
- Watchers: 5
- Forks: 21
- Open Issues: 1
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-bert - NoviScl/BERT-RACE - The original BERT model adapted for multiple choice machine comprehension on RACE (listed under BERT QA & RC task).
README
# BERT for RACE
By: Chenglei Si (River Valley High School)
### Update:
XLNet has recently achieved impressive gains on RACE. You may refer to my other repo, https://github.com/NoviScl/XLNet_DREAM, to see how to use XLNet for multiple-choice machine comprehension. Hugging Face has updated their library to [pytorch-transformers](https://github.com/huggingface/pytorch-transformers); please refer to their repo for documentation and more details on the new version.

### Implementation
This work is based on the PyTorch implementation of BERT (https://github.com/huggingface/pytorch-pretrained-BERT). I adapted the original BERT model to work on multiple-choice machine comprehension.
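The core idea of the adaptation: each answer option is packed together with the passage and question into its own input sequence, BERT's pooled output scores each option with a linear layer, and a softmax over the options selects the answer. Below is a minimal sketch of that pattern using the pytorch-pretrained-BERT `BertModel` API; the class and variable names are illustrative, not the exact code in this repo.

```python
import torch
import torch.nn as nn
from pytorch_pretrained_bert import BertModel

class BertForRace(nn.Module):
    """Minimal multiple-choice head on top of BERT (illustrative sketch)."""
    def __init__(self, bert_name='bert-base-uncased', num_choices=4):
        super(BertForRace, self).__init__()
        self.num_choices = num_choices
        self.bert = BertModel.from_pretrained(bert_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.bert.config.hidden_size, 1)  # one score per option

    def forward(self, input_ids, token_type_ids, attention_mask, labels=None):
        # input_ids: (batch, num_choices, seq_len), one packed sequence per answer option
        flat_ids = input_ids.view(-1, input_ids.size(-1))
        flat_types = token_type_ids.view(-1, token_type_ids.size(-1))
        flat_mask = attention_mask.view(-1, attention_mask.size(-1))
        # pooled [CLS] representation: (batch * num_choices, hidden_size)
        _, pooled = self.bert(flat_ids, flat_types, flat_mask, output_all_encoded_layers=False)
        logits = self.classifier(self.dropout(pooled)).view(-1, self.num_choices)
        if labels is not None:
            return nn.CrossEntropyLoss()(logits, labels)  # labels: (batch,) gold option indices
        return logits
```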
### Environment:

The code is tested with Python 3.6 and PyTorch 1.0.0.

### Usage
1. Download the RACE dataset and unzip it. The default dataset directory is `./RACE` (see the data-loading sketch after this list).
2. Run ```./run.sh```
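For context, each RACE example file is a small JSON document holding one article, its questions, four options per question, and the answer keys. The snippet below is a minimal loading sketch; the file path is illustrative and assumes the standard RACE layout of train/dev/test split into high/middle.

```python
import json

# Illustrative path; the standard layout is RACE/{train,dev,test}/{high,middle}/*.txt
with open('RACE/dev/high/1001.txt') as f:
    example = json.load(f)

article = example['article']                   # the reading passage
for q, opts, ans in zip(example['questions'], example['options'], example['answers']):
    print(q)                                   # question text
    for letter, opt in zip('ABCD', opts):      # the four candidate answers
        print(f'  {letter}. {opt}')
    print('  gold:', ans)                      # answer key, e.g. 'B'
```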
### Hyperparameters

I did some tuning and found the following hyperparameters to work reasonably well (a minimal optimizer sketch follows the list):

- BERT_base: batch size 32, learning rate 5e-5, 3 training epochs
- BERT_large: batch size 8, learning rate 1e-5 (DO NOT SET IT TOO LARGE), 2 training epochs
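As one way these values could be wired up, here is a sketch using the `BertAdam` optimizer shipped with pytorch-pretrained-BERT. The variable names and the 10% linear warmup are my assumptions, not necessarily what `run.sh` does, and the training-set size is the approximate question count of the RACE training split.

```python
from pytorch_pretrained_bert.optimization import BertAdam

# BERT_base setting; for BERT_large use batch_size=8, learning_rate=1e-5, epochs=2
batch_size, learning_rate, epochs = 32, 5e-5, 3
num_train_examples = 87866                       # approx. number of questions in RACE train
num_train_steps = num_train_examples // batch_size * epochs

model = BertForRace()                            # the multiple-choice model sketched above
optimizer = BertAdam(model.parameters(),
                     lr=learning_rate,
                     warmup=0.1,                 # assumed linear warmup over the first 10% of steps
                     t_total=num_train_steps)
```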
### Results
Model | RACE | RACE-M | RACE-H
--- | --- | --- | --- |
BERT_base | 65.0 | 71.7 | 62.3
BERT_large | 67.9 | 75.6 | 64.7

You can compare them with other results on the [leaderboard](http://www.qizhexie.com/data/RACE_leaderboard).
BERT large achieves the current (Jan 2019) best result. Looking forward to new models that can beat BERT!
### More Details
I have written a short [report](./BERT_RACE.pdf) in this repo describing the details.