Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/nlpdata/c3
Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension
- Host: GitHub
- URL: https://github.com/nlpdata/c3
- Owner: nlpdata
- License: other
- Created: 2019-12-18T00:22:33.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2022-04-20T21:58:39.000Z (over 2 years ago)
- Last Synced: 2024-08-03T09:07:06.501Z (5 months ago)
- Topics: dataset, dialogue, machine-reading-comprehension
- Language: Python
- Homepage: https://dataset.org/c3/
- Size: 3.03 MB
- Stars: 165
- Watchers: 9
- Forks: 23
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: license.txt
Awesome Lists containing this project
- StarryDivineSky - nlpdata/c3 - Multiple-choice Chinese machine reading comprehension dataset. (machine reading comprehension / other: text generation and dialogue)
README
C3
=====
Overview
--------
This repository maintains **C3**, the first free-form multiple-**C**hoice **C**hinese machine reading **C**omprehension dataset.

* Paper: https://arxiv.org/abs/1904.09679
```
@article{sun2019investigating,
title={Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension},
author={Sun, Kai and Yu, Dian and Yu, Dong and Cardie, Claire},
journal={Transactions of the Association for Computational Linguistics},
year={2020},
url={https://arxiv.org/abs/1904.09679v3}
}
```

Files in this repository:
* ```license.txt```: the license of C3.
* ```data/c3-{m,d}-{train,dev,test}.json```: the dataset files, where m and d represent "**m**ixed-genre" and "**d**ialogue", respectively. The data format is as follows.
```
[
[
[
document 1
],
[
{
"question": document 1 / question 1,
"choice": [
document 1 / question 1 / answer option 1,
document 1 / question 1 / answer option 2,
...
],
"answer": document 1 / question 1 / correct answer option
},
{
"question": document 1 / question 2,
"choice": [
document 1 / question 2 / answer option 1,
document 1 / question 2 / answer option 2,
...
],
"answer": document 1 / question 2 / correct answer option
},
...
],
document 1 / id
],
[
[
document 2
],
[
{
"question": document 2 / question 1,
"choice": [
document 2 / question 1 / answer option 1,
document 2 / question 1 / answer option 2,
...
],
"answer": document 2 / question 1 / correct answer option
},
{
"question": document 2 / question 2,
"choice": [
document 2 / question 2 / answer option 1,
document 2 / question 2 / answer option 2,
...
],
"answer": document 2 / question 2 / correct answer option
},
...
],
document 2 / id
],
...
]
```
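For illustration, here is a minimal Python sketch (not part of this repository) that reads one split under the format above; the file name is just an example.

```python
import json

# Each entry is [[document sentences/turns], [questions], document id],
# as described in the format above.
with open("data/c3-d-train.json", encoding="utf-8") as f:
    data = json.load(f)

for sentences, questions, doc_id in data:
    context = "".join(sentences)
    for q in questions:
        options = q["choice"]              # answer option strings
        answer = q["answer"]               # the correct answer option
        label = options.index(answer)      # its index among the options
```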
* ```annotation/c3-{m,d}-{dev,test}.txt```: question type annotations. Each file contains 150 annotated instances. We adopt the following abbreviations:
| Abbreviation | Question Type |
| --- | --- |
| m | Matching |
| l | Prior knowledge: Linguistic |
| s | Prior knowledge: Domain-specific |
| c-a | Prior knowledge: Arithmetic |
| c-o | Prior knowledge: Connotation |
| c-e | Prior knowledge: Cause-effect |
| c-i | Prior knowledge: Implication |
| c-p | Prior knowledge: Part-whole |
| c-d | Prior knowledge: Precondition |
| c-h | Prior knowledge: Scenario |
| c-n | Prior knowledge: Other |

| Abbreviation | Supporting Sentences |
| --- | --- |
| 0 | Single sentence |
| 1 | Multiple sentences |
| 2 | Independent |
* ```bert``` folder: code of Chinese BERT, BERT-wwm, and BERT-wwm-ext baselines. The code is derived from [this repository](https://github.com/nlpdata/mrc_bert_baseline). Below are detailed instructions on fine-tuning Chinese BERT on C3.
1. Download and unzip the pre-trained Chinese BERT from [here](https://github.com/google-research/bert), and set up the environment variable for BERT by ```export BERT_BASE_DIR=/PATH/TO/BERT/DIR```.
2. Copy the dataset folder ```data``` to ```bert/```.
3. In ```bert```, execute ```python convert_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=$BERT_BASE_DIR/bert_model.ckpt --bert_config_file=$BERT_BASE_DIR/bert_config.json --pytorch_dump_path=$BERT_BASE_DIR/pytorch_model.bin```.
4. Execute ```python run_classifier.py --task_name c3 --do_train --do_eval --data_dir . --vocab_file $BERT_BASE_DIR/vocab.txt --bert_config_file $BERT_BASE_DIR/bert_config.json --init_checkpoint $BERT_BASE_DIR/pytorch_model.bin --max_seq_length 512 --train_batch_size 24 --learning_rate 2e-5 --num_train_epochs 8.0 --output_dir c3_finetuned --gradient_accumulation_steps 3```.
5. The resulting fine-tuned model, predictions, and evaluation results are stored in ```bert/c3_finetuned```.

**Note**:
1. Fine-tuning Chinese BERT-wwm or BERT-wwm-ext follows the same steps, except that you download the corresponding pre-trained language model instead.
2. There is randomness in model training, so you may want to run the fine-tuning multiple times with different seeds (specify ```--seed``` when executing ```run_classifier.py```) and choose the best model based on development set performance; a launcher sketch follows this list.
3. Depending on your hardware, you may need to change ```gradient_accumulation_steps```.
4. The code has been tested with Python 3.6 and PyTorch 1.0.
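Regarding note 2, below is a minimal Python sketch (not part of this repository) that simply repeats the fine-tuning command from step 4 with different seeds; the seed values and output directory names are illustrative.

```python
import os
import subprocess

# Hypothetical launcher: rerun the step-4 command with different seeds,
# writing each run to its own output directory.
bert_dir = os.environ["BERT_BASE_DIR"]
for seed in (1, 2, 3):  # illustrative seed values
    subprocess.run([
        "python", "run_classifier.py",
        "--task_name", "c3", "--do_train", "--do_eval", "--data_dir", ".",
        "--vocab_file", f"{bert_dir}/vocab.txt",
        "--bert_config_file", f"{bert_dir}/bert_config.json",
        "--init_checkpoint", f"{bert_dir}/pytorch_model.bin",
        "--max_seq_length", "512", "--train_batch_size", "24",
        "--learning_rate", "2e-5", "--num_train_epochs", "8.0",
        "--gradient_accumulation_steps", "3",
        "--seed", str(seed),
        "--output_dir", f"c3_finetuned_seed{seed}",
    ], check=True)
```

You can then compare the development set results stored in each output directory and keep the best run.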