# Neural-Semi-supervised-Learning-for-Text-Classification-Under-Large-Scale-Pretraining
Code, models, and datasets for [Neural Semi-supervised Learning for Text Classification Under Large-Scale Pretraining](https://arxiv.org/pdf/2011.08626.pdf).

## Download Models and Dataset
Datasets and models can be found in the following list.

- Download the 3.4M IMDB movie reviews and save the data at `[REVIEWS_PATH]`.
You can download the dataset [HERE](https://drive.google.com/drive/folders/1YX-CzocJe32DK8j2RBVyhYhxrbgE1l1S?usp=sharing).
- Download the vanilla RoBERTa-large model released by HuggingFace and save the model at `[VANILLA_ROBERTA_LARGE_PATH]`.
You can download the model [HERE](https://huggingface.co/roberta-large).
- Download the in-domain pretrained models used in the paper and save them at `[PRETRAIN_MODELS]`. We provide the following three models.
You can download them [HERE](https://drive.google.com/drive/folders/1rBjtxVWGlrdEg2XJwBjbPb1Vf2d3Csb9?usp=sharing).
    - `init-roberta-base`: RoBERTa-base model (U) trained on the 3.4M movie reviews from scratch.
    - `semi-roberta-base`: RoBERTa-base model (Large U + U) trained on the 3.4M movie reviews, initialized from the open-domain pretrained [RoBERTa-base model](https://huggingface.co/roberta-base).
    - `semi-roberta-large`: RoBERTa-large model (Large U + U) trained on the 3.4M movie reviews, initialized from the open-domain pretrained [RoBERTa-large model](https://huggingface.co/roberta-large).
- Download the 1M (D' + D) training dataset for the student model and save the data at `[STUDENT_DATA_PATH]`.
You can download it [HERE](https://drive.google.com/drive/folders/1wu76V3LgJIZjNtpfscLVYTvcAJ2RuqJX?usp=sharing).
    - `student_data_base`: student training data generated by the RoBERTa-base teacher model.
    - `student_data_large`: student training data generated by the RoBERTa-large teacher model.
- Download the IMDB dataset from Andrew Maas' paper and save the data at `[IMDB_DATA_PATH]`.
For IMDB, the training data and test data are saved in two separate files; each line in a file corresponds to one IMDB sample (see the loading sketch after this list).
You can download it [HERE](https://drive.google.com/drive/folders/1zShIK9n3HCZRjfE6311MhZ2Z3Jf1C6x2?usp=sharing).
- Download `shannon_preprocssor.whl` to install the binarization tool. Save the .whl file at `[SHANNON_PREPROCESS_WHL_PATH]`.
You can download it [HERE](https://drive.google.com/file/d/1wjH7hdSRL_QQj0OouBsN_O8Ng6m8bQiN/view?usp=sharing).
- Download the teacher and student models that we trained. Save them at `[CHECKPOINTS]`.
You can download them [HERE](https://drive.google.com/drive/folders/1eiwS-0620S4H3yZUlrjvNAeze8JWWVu6?usp=sharing).
    - `roberta-base`: teacher and student model checkpoints for RoBERTa-base.
    - `roberta-large`: teacher and student model checkpoints for RoBERTa-large.
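
For a quick sanity check after downloading, the IMDB files can be read line by line. Below is a minimal loading sketch; the tab-separated `label<TAB>text` layout and the `train.txt` file name are assumptions for illustration, so inspect the downloaded files for the actual format.

```python
# Minimal sketch for reading a line-per-sample IMDB file.
# ASSUMPTION: each line is "label<TAB>review text" and the file is named train.txt;
# check the downloaded data for the real layout before relying on this.
from pathlib import Path

def load_imdb_split(path):
    samples = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        label, text = line.split("\t", maxsplit=1)  # assumed field separator
        samples.append((label, text))
    return samples

train = load_imdb_split("[IMDB_DATA_PATH]/train.txt")  # hypothetical file name
print(f"loaded {len(train)} training samples")
```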

## Installation
```bash
pip install -r requirements.txt
pip install [SHANNON_PREPROCESS_WHL_PATH]
```

## Quick Tour

### train the roberta-large teacher model
Use the RoBERTa model we pretrained on the 3.4M reviews to train the teacher model.
Our teacher model achieves an accuracy of 96.2% on the test set.
```bash
cd sstc/tasks/semi-roberta
python trainer.py \
--mode train_teacher \
--roberta_path [PRETRAIN_MODELS]/semi-roberta-large \
--imdb_data_path [IMDB_DATA_PATH]/bin \
--gpus=0,1,2,3 \
--save_path [ROOT_SAVE_PATH] \
--precision 16 \
--batch_size 10 \
--min_epochs 10 \
--patience 3 \
--lr 3e-5
```

### train the roberta-large student model
Use the RoBERTa model we pretrained on the 3.4M reviews to train the student model.
Our student model achieves an accuracy of 96.8% on the test set.
```bash
cd sstc/tasks/semi-roberta
python trainer.py \
--mode train_student \
--roberta_path [PRETRAIN_MODELS]/semi-roberta-large \
--imdb_data_path [IMDB_DATA_PATH]/bin \
--student_data_path [STUDENT_DATA_PATH]/student_data_large/bin \
--save_path [ROOT_SAVE_PATH] \
--batch_size=10 \
--precision 16 \
--lr=2e-5 \
--warmup_steps 40000 \
--gpus=0,1,2,3,4,5,6,7 \
--accumulate_grad_batches=50
```

### evaluate the student model on the test set
Load the student model checkpoint and evaluate it on the test set to reproduce our result.
```bash
cd sstc/tasks/semi-roberta
python evaluate.py \
--checkpoint_path [CHECKPOINTS]/roberta-large/train_student_checkpoint/***.ckpt \
--roberta_path [PRETRAIN_MODELS]/semi-roberta-large \
--imdb_data_path [IMDB_DATA_PATH]/bin \
--batch_size=10 \
--gpus=0,
```

## Reproduce paper results step by step
### 1. Train in-domain LM based on RoBERTa
#### 1.1 binarize the 3.4M reviews data
You should modify the shell script according to your paths. The resulting binarized data will be saved in `[REVIEWS_PATH]/bin`.
```bash
cd sstc/tasks/roberta_lm
bash binarize.sh
```
#### 1.2 train RoBERTa-large (or RoBERTa-base, as you wish) over the 3.4M reviews data
```bash
cd sstc/tasks/roberta_lm
python trainer.py \
--roberta_path [VANILLA_ROBERTA_LARGE_PATH] \
--data_dir [REVIEWS_PATH]/bin \
--gpus=0,1,2,3 \
--save_path [PRETRAIN_ROBERTA_CK_PATH] \
--val_check_interval 0.1 \
--precision 16 \
--batch_size 10 \
--distributed_backend=ddp \
--accumulate_grad_batches=50 \
--adam_epsilon 1e-6 \
--weight_decay 0.01 \
--warmup_steps 10000 \
--workers 8 \
--lr 2e-5
```
Training checkpoints will be saved in `[PRETRAIN_ROBERTA_CK_PATH]`.
Find the best checkpoint and convert it to the HuggingFace bin format;
the relevant code can be found in `sstc/tasks/roberta_lm/trainer.py`.
Save the pretrained bin model at `[PRETRAIN_MODELS]/semi-roberta-large`,
or just download the model we trained.
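
If you convert the checkpoint yourself, the step amounts to loading the Lightning checkpoint's state dict into a HuggingFace RoBERTa model and calling `save_pretrained`. A hedged sketch is below, assuming the Lightning module stores the RoBERTa weights under a `model.` key prefix and that the best checkpoint file is known; `sstc/tasks/roberta_lm/trainer.py` has the authoritative conversion code.

```python
# Hedged sketch: convert a PyTorch Lightning checkpoint to a HuggingFace model directory.
# ASSUMPTIONS: the checkpoint file name and the "model." key prefix are illustrative only;
# see sstc/tasks/roberta_lm/trainer.py for the repo's actual conversion logic.
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

ckpt = torch.load("[PRETRAIN_ROBERTA_CK_PATH]/best.ckpt", map_location="cpu")  # hypothetical file name
state_dict = {k[len("model."):] if k.startswith("model.") else k: v
              for k, v in ckpt["state_dict"].items()}

model = RobertaForMaskedLM.from_pretrained("[VANILLA_ROBERTA_LARGE_PATH]")
model.load_state_dict(state_dict, strict=False)
model.save_pretrained("[PRETRAIN_MODELS]/semi-roberta-large")

# The tokenizer is unchanged, so copy it over from the vanilla RoBERTa-large directory.
RobertaTokenizer.from_pretrained("[VANILLA_ROBERTA_LARGE_PATH]").save_pretrained("[PRETRAIN_MODELS]/semi-roberta-large")
```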

### 2. train the teacher model
#### 2.1 binarize the IMDB dataset
```bash
cd sstc/tasks/semi_roberta/scripts
bash binarize_imdb.sh
```
You can run the above code to binarize the IMDB data, or just use the files we binarized in `[IMDB_DATA_PATH]/bin`.
#### 2.2 train the teacher model
```bash
cd sstc/tasks/semi_roberta
python trainer.py \
--mode train_teacher \
--roberta_path [PRETRAIN_MODELS]/semi-roberta-large \
--imdb_data_path [IMDB_DATA_PATH]/bin \
--gpus=0,1,2,3 \
--save_path [ROOT_SAVE_PATH] \
--precision 16 \
--batch_size 10 \
--min_epochs 10 \
--patience 3 \
--lr 3e-5
```
After training, the teacher model checkpoint will be saved in `[ROOT_SAVE_PATH]/train_teacher_checkpoint`.
The teacher model we trained achieves an accuracy of 96.2% on the test set.
The download link for the teacher model checkpoint can be found in the Quick Tour section.

### 3. label the unlabeled in-domain data U
#### 3.1 label the 3.4M data
Use the teacher model that you trained in the previous step to label the 3.4M reviews;
note that `[ROOT_SAVE_PATH]` should be the same as in the previous setting.
The labeled data will be saved in `[ROOT_SAVE_PATH]/predictions`.
```bash
cd sstc/tasks/roberta_lm
python trainer.py \
--mode train_teacher \
--roberta_path [PRETRAIN_ROBERTA_PATH] \
--reviews_data_path [REVIEWS_PATH]/bin \
--best_teacher_checkpoint_path [CHECKPOINTS]/roberta-large/train_teacher_checkpoint/***.ckpt \
--gpus=0,1,2,3 \
--save_path [ROOT_SAVE_PATH]
```
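
Conceptually, this step runs the fine-tuned teacher over every unlabeled review and stores a predicted label together with a confidence score, which the next step uses for selection. A hedged sketch of that loop with the HuggingFace API follows; the `[TEACHER_MODEL_DIR]` placeholder, file names, and output format are assumptions, and the repo's trainer handles this internally.

```python
# Hedged sketch of teacher labeling: write "label<TAB>score<TAB>text" per review.
# ASSUMPTIONS: [TEACHER_MODEL_DIR] is a HuggingFace export of the fine-tuned teacher,
# and the input/output file names are placeholders for illustration.
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model = RobertaForSequenceClassification.from_pretrained("[TEACHER_MODEL_DIR]").eval()
tokenizer = RobertaTokenizer.from_pretrained("[TEACHER_MODEL_DIR]")

with open("[REVIEWS_PATH]/reviews.txt") as src, \
     open("[ROOT_SAVE_PATH]/predictions/labeled.txt", "w") as dst:
    for review in src:
        review = review.strip()
        inputs = tokenizer(review, truncation=True, max_length=512, return_tensors="pt")
        with torch.no_grad():
            probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
        score, label = probs.max(dim=-1)
        dst.write(f"{label.item()}\t{score.item():.4f}\t{review}\n")
```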
#### 3.2 select the top-K data points
First, we randomly sample 3M reviews from the 3.4M reviews as U';
then we select the 1M reviews from U' with the highest teacher scores as D';
finally, we concatenate the IMDB training data (D) and D' as the training data for the student model.
The student training data will be saved in `[ROOT_SAVE_PATH]/student_data/train.txt`,
or you can use the data we provide in `[STUDENT_DATA_PATH]/student_data_large`.
```bash
cd sstc/tasks/roberta_lm
python data_selector.py \
--imdb_data_path [IMDB_DATA_PATH] \
--save_path [ROOT_SAVE_PATH]
```
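
The selection itself is straightforward once every review has a teacher score. Below is a minimal sketch of the U' → D' logic, under the assumption that the predictions file stores one `label<TAB>score<TAB>text` line per review (as in the labeling sketch above) and that the IMDB train file is named `train.txt`; `data_selector.py` is the authoritative implementation.

```python
# Hedged sketch of the top-K selection: sample 3M reviews as U', keep the 1M with the
# highest teacher confidence as D', and concatenate them with the IMDB train data D.
# ASSUMPTIONS: the "label<TAB>score<TAB>text" prediction format and file names are illustrative.
import random

with open("[ROOT_SAVE_PATH]/predictions/labeled.txt") as f:
    labeled = [line.rstrip("\n") for line in f if line.strip()]

random.seed(0)
u_prime = random.sample(labeled, 3_000_000)                              # U': 3M random reviews
u_prime.sort(key=lambda line: float(line.split("\t")[1]), reverse=True)  # sort by teacher score
d_prime = u_prime[:1_000_000]                                            # D': top-1M most confident

with open("[IMDB_DATA_PATH]/train.txt") as f:                            # D: gold IMDB training data
    d = [line.rstrip("\n") for line in f if line.strip()]

with open("[ROOT_SAVE_PATH]/student_data/train.txt", "w") as f:
    f.write("\n".join(d + d_prime) + "\n")
```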

### 4. train the student model
#### 4.1 binarize the dataset
You can use the same script as in 3.1 to binarize the student training data in `[ROOT_SAVE_PATH]/student_data/train.txt`.

#### 4.2 train the student model
You can use the training data we provide in `[STUDENT_DATA_PATH]/student_data_large/bin` or your own training data in
`[ROOT_SAVE_PATH]/student_data/bin`; make sure you set the right `student_data_path`.
```bash
cd sstc/tasks/semi-roberta
python trainer.py \
--mode train_student \
--roberta_path [PRETRAIN_MODELS]/semi-roberta-large \
--imdb_data_path [IMDB_DATA_PATH]/bin \
--student_data_path [STUDENT_DATA_PATH]/student_data_large/bin \
--save_path [ROOT_SAVE_PATH] \
--batch_size=10 \
--precision 16 \
--lr=2e-5 \
--warmup_steps 40000 \
--gpus=0,1,2,3,4,5,6,7 \
--accumulate_grad_batches=50
```
After training, the student model checkpoint will be saved in `[ROOT_SAVE_PATH]/train_student_checkpoint`.
The student model we trained achieves an accuracy of 96.6% on the test set.
The download link for the student model checkpoint can be found in the Quick Tour section.