Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Source code for paper: Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data
https://github.com/kanyun-inc/fairseq-gec
- Host: GitHub
- URL: https://github.com/kanyun-inc/fairseq-gec
- Owner: kanyun-inc
- License: other
- Created: 2019-05-04T07:35:58.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2020-06-03T04:13:31.000Z (over 4 years ago)
- Last Synced: 2025-01-16T23:02:10.261Z (26 days ago)
- Topics: grammar, nlp
- Language: Python
- Homepage:
- Size: 2.72 MB
- Stars: 246
- Watchers: 7
- Forks: 67
- Open Issues: 18
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
README
# Introduction
Source code for the paper:
**Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data**
Authors: Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, Jingming Liu
arXiv: https://arxiv.org/abs/1903.00138
Comments: Accepted by NAACL 2019 (oral)
![](arch.jpg)
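The architecture's central piece is an output layer that mixes the decoder's usual generation distribution over the vocabulary with a copy distribution over the source tokens, reflecting that most tokens in GEC are copied unchanged from the input. A minimal sketch of that mixing step, assuming attention weights are reused as copy scores (the tensor names and the balance factor `alpha` are illustrative; the real implementation is this repo's modified fairseq decoder):

```python
import torch

def copy_augmented_output(p_gen, copy_attn, src_tokens, alpha):
    """Mix generation and copy distributions (illustrative sketch).

    p_gen:      (batch, tgt_len, vocab) softmax over the target vocabulary
    copy_attn:  (batch, tgt_len, src_len) attention weights used as copy scores
    src_tokens: (batch, src_len) ids of source tokens available for copying
    alpha:      (batch, tgt_len, 1) predicted copy/generate balance in [0, 1]
    """
    # Scatter the copy scores onto the vocabulary ids of the source tokens.
    p_copy = torch.zeros_like(p_gen)
    index = src_tokens.unsqueeze(1).expand(-1, p_gen.size(1), -1)
    p_copy.scatter_add_(2, index, copy_attn)
    # Weighted mixture of copying and generating.
    return alpha * p_copy + (1 - alpha) * p_gen
```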
## Dependencies

- PyTorch version >= 1.0.0
- Python version >= 3.6
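A quick environment check before installing (a hypothetical snippet, not part of the repo):

```python
import sys
import torch

# Fail fast if the interpreter or framework is older than required.
assert sys.version_info >= (3, 6), "Python >= 3.6 is required"
major, minor = (int(x) for x in torch.__version__.split(".")[:2])
assert (major, minor) >= (1, 0), "PyTorch >= 1.0.0 is required"
print(f"Python {sys.version.split()[0]} / PyTorch {torch.__version__}: OK")
```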
## Downloads

- Download CoNLL-2014 evaluation scripts:

```
cd gec_scripts/
sh download.sh
```

- Download **pre-processed data** & **pre-trained models**
Pre-trained model (Google Drive / Baidu Pan):
- url1: https://drive.google.com/file/d/1zewifHUUwvqc2F-MfDRsZFio6PlSzx2c/view?usp=sharing
- url2: https://pan.baidu.com/s/1hCwQeNFjng_0_NiViJq6fg (code: mxrf)

Pre-processed data (Google Drive, train/valid/test):
- url: https://drive.google.com/open?id=17s-TZiM6ilQ-SHklxTUun2Jdgg8B9zS3

## Train with the pre-trained model
```
cd fairseq-gec
pip install --editable .
sh train.sh ${device_id} ${experiment_name}
```
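Conceptually, `--pretrained-model` warm-starts training from the downloaded checkpoint rather than from random initialization. A minimal sketch of such a warm start, assuming the checkpoint stores its weights under a "model" key (the file name and the stand-in module are placeholders; fairseq performs this step internally):

```python
import torch
import torch.nn as nn

# Stand-in module for the copy-augmented transformer that fairseq builds.
model = nn.Linear(512, 512)

# Load pre-trained weights, then continue training on the labeled GEC data.
# "checkpoint_pretrained.pt" is a placeholder file name.
state = torch.load("checkpoint_pretrained.pt", map_location="cpu")
model.load_state_dict(state.get("model", state), strict=False)
```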
## Train without the pre-trained model
Modify train.sh to train without the pre-trained model:
- delete the parameter "--pretrained-model"
- change the value of "--max-epoch" to 15 (more epochs are needed without pre-trained parameters)

## Evaluate on the CoNLL-2014 test dataset
```
sh g.sh ${device_id} ${experiment_name}
```
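The downloaded CoNLL-2014 scripts score systems with the MaxMatch (M²) metric, which reports F0.5: precision is weighted twice as heavily as recall, since proposing a bad correction is considered worse than missing one. For reference, a small sketch of the final score computation from matched edit counts (the function and the example counts are illustrative):

```python
def f_beta(tp, fp, fn, beta=0.5):
    """F-beta from true positives, false positives, and false negatives.

    The M^2 scorer reports F0.5 over system edits matched against gold edits.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: 500 matched edits, 250 spurious, 750 missed.
print(round(f_beta(500, 250, 750), 4))  # precision 0.6667, recall 0.4 -> ~0.5882
```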
## Get pre-trained models from scratch
We have published our pre-trained models, as mentioned in the Downloads section. We list the steps here in case someone wants to build the pre-trained models from scratch.

```
# 1. prepare target sentences using the One Billion Word benchmark dataset
sh noise.sh                    # 2. generate the noised source sentences
sh preprocess_noise_data.sh    # 3. preprocess the data
sh pretrain.sh 0,1 _pretrain   # 4. pretrain
```
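The noising step manufactures training pairs from monolingual text: each clean sentence becomes the target, and a corrupted copy of it becomes the source. A minimal illustration of this kind of corruption (the specific operations and the 10% rate are illustrative; the exact recipe is defined by noise.sh and described in the paper):

```python
import random

def noise_sentence(tokens, p=0.1, fillers=("the", "a", "of", "to")):
    """Corrupt a clean sentence into a synthetic errorful source (sketch)."""
    out = []
    for tok in tokens:
        r = random.random()
        if r < p:                      # deletion error
            continue
        elif r < 2 * p:                # substitution with a common filler word
            out.append(random.choice(fillers))
        elif r < 3 * p:                # duplication, i.e. a spurious insertion
            out.extend([tok, tok])
        else:                          # most tokens are kept unchanged
            out.append(tok)
    return out

clean = "she goes to school every day".split()
print(noise_sentence(clean))  # e.g. ['she', 'goes', 'to', 'to', 'school', 'day']
```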
## Acknowledgments
Our code was modified from the [fairseq](https://github.com/pytorch/fairseq) codebase. We use the same license as fairseq(-py).

## Citation
Please cite as:

```
@article{zhao2019improving,
  title={Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data},
  author={Zhao, Wei and Wang, Liang and Shen, Kewei and Jia, Ruoyu and Liu, Jingming},
  journal={arXiv preprint arXiv:1903.00138},
  year={2019}
}
```