# [Reading-Comprehension-Question-Answering-Papers](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers)

Survey on Machine Reading Comprehension

## Content
- [Survey papers](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#surveyoverview-papersdocuments-should-read-on-machine-reading-comprehension)
- [Slides](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#slides)
- [Evaluation papers](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#evaluation-papers)
- [Models (for single-hop)](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#basic-papersmodels)
- [Knowledge-based Machine Reading Comprehension](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#kbmrc-knowledge-based-machine-reading-comprehension)
- [Open-domain Question Answering](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#opqa-open-domain-question-answering)
- [Unanswerable Questions](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#uq-unanswerable-questions)
- [Multi-Passage Machine Reading Comprehension](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#multi-passage-mrc-multi-passage-machine-reading-comprehension)
- [Conversational Question Answering](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#cqa-conversational-question-answering)
- [Datasets](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#datasets)
- [Datasets with Explanations](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#datasets-with-explanations)
- [QA over KG](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#qa-over-kg)
- [Question answering systems](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#question-answering-systems)
- [Knowledge bases/Knowledge sources](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#knowledge-basesknowledge-sources)
- [Papers misc](https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/blob/master/README.md#others-misc-model-transfer-learning-data-augmentation-domain-adaption-cross-lingual-)

## Survey/Overview Papers and Documents Worth Reading on Machine Reading Comprehension
- Fengbin Zhu et al., **Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering**, arXiv, 2021, [paper](https://arxiv.org/pdf/2101.00774.pdf)
- Mokanarangan Thayaparan, Marco Valentino, and André Freitas, **A Survey on Explainability in Machine Reading Comprehension**, arXiv, 2020, [paper](https://arxiv.org/pdf/2010.00389.pdf)
- Viktor Schlegel et al., **Beyond Leaderboards: A survey of methods for revealing weaknesses in Natural Language Inference data and models**, arXiv, 2020, [paper](https://arxiv.org/pdf/2005.14709.pdf)
- Viktor Schlegel et al., **A Framework for Evaluation of Machine Reading Comprehension Gold Standards**, arXiv, 2020, [paper](https://arxiv.org/pdf/2003.04642.pdf)
- Chengchang Zeng et al., **A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics, and Benchmark Datasets**, arXiv, 2020, [paper](https://arxiv.org/pdf/2006.11880.pdf).
- Razieh Baradaran, Razieh Ghiasi, and Hossein Amirkhani, **A Survey on Machine Reading Comprehension Systems**, arXiv, 2020, [paper](https://arxiv.org/abs/2001.01582).
- Matthew Gardner et al., **On Making Reading Comprehension More Comprehensive**, MRQA Workshop at EMNLP, 2019, [paper](https://www.aclweb.org/anthology/D19-5815.pdf).
- Shanshan Liu et al., **Neural Machine Reading Comprehension: Methods and Trends**, arXiv, 2019, [paper](https://arxiv.org/pdf/1907.01118.pdf).
- Xin Zhang et al., **Machine Reading Comprehension: a Literature Review**, arXiv, 2019, [paper](https://arxiv.org/pdf/1907.01686.pdf).
- Boyu Qiu et al., **A Survey on Neural Machine Reading Comprehension**, arXiv, 2019, [paper](https://arxiv.org/pdf/1906.03824.pdf).
- Danqi Chen, **Neural Reading Comprehension and Beyond**, PhD thesis, Stanford University, 2018, [paper](https://github.com/danqi/thesis).

## Slides
- Sebastian Riedel, Reading and Reasoning with Neural Program Interpreters, [slides](https://mrqa2018.github.io/slides/sebastian.pdf), MRQA 2018.
- Phil Blunsom, Data driven reading comprehension: successes and limitations, [slides](https://mrqa2018.github.io/slides/phil.pdf), MRQA 2018.
- Jianfeng Gao, Multi-step reasoning neural networks for question answering, [slides](https://mrqa2018.github.io/slides/jianfeng.pdf), MRQA 2018.
- Sameer Singh, Questioning Question Answering Answers, [slides](https://mrqa2018.github.io/slides/sameer.pdf), MRQA 2018.

## Evaluation papers
- Diana Galvan, **Active Reading Comprehension: A dataset for learning the Question-Answer Relationship strategy**, ACL 2019, [paper](https://www.aclweb.org/anthology/P19-2014).
- Divyansh Kaushik and Zachary C. Lipton, **How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks**, EMNLP 2018, [paper](https://www.aclweb.org/anthology/D18-1546.pdf).
- Saku Sugawara et al., **What Makes Reading Comprehension Questions Easier?**, EMNLP 2018, [paper](https://www.aclweb.org/anthology/D18-1453.pdf).
- Pramod K. Mudrakarta et al., **Did the Model Understand the Question?**, ACL 2018, [paper](https://www.aclweb.org/anthology/P18-1176.pdf).
- Robin Jia and Percy Liang, **Adversarial Examples for Evaluating Reading Comprehension Systems**, EMNLP 2017, [paper](https://www.aclweb.org/anthology/D17-1215.pdf).
- Saku Sugawara et al., **Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability**, ACL 2017, [paper](https://www.aclweb.org/anthology/P17-1075.pdf).
- Saku Sugawara et al., **Prerequisite Skills for Reading Comprehension: Multi-perspective Analysis of MCTest Datasets and Systems**, AAAI 2017, [paper](http://www.aaai.org/Conferences/AAAI/2017/PreliminaryPapers/14-Sugawara-14614.pdf).
- Danqi Chen et al., **A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task**, ACL 2016, [paper](https://www.aclweb.org/anthology/P16-1223.pdf).
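Several of the evaluation papers above analyze models on span-extraction benchmarks, which are conventionally scored with exact match (EM) and token-level F1 after answer normalization. A minimal sketch of these standard metrics (normalization follows the SQuAD-style recipe of lowercasing and stripping punctuation and articles; the function names are illustrative):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    """Token-overlap F1 between normalized prediction and gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

In practice benchmarks take the maximum EM/F1 over multiple gold answers per question; that outer loop is omitted here.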

## Basic Papers/Models
| Year | Title | Model| Datasets | Misc | Paper, Source Code |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| 2019 | XLNet: Generalized Autoregressive Pretraining for Language Understanding | XLNet | Race, SQuAD 1.1, SQuAD 2.0 | pretrained LM | [paper](https://arxiv.org/pdf/1906.08237.pdf), [code](https://github.com/zihangdai/xlnet/) |
| 2019 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | BERT | GLUE, SQuAD 1.1, SQuAD 2.0, SWAG | pretrained LM | [paper](https://www.aclweb.org/anthology/N19-1423), [code](https://github.com/google-research/bert) |
| 2018 | S-NET: From Answer Extraction to Answer Generation for Machine Reading Comprehension | S-NET | MS-MARCO | multiple passages | [paper](https://arxiv.org/pdf/1706.04815.pdf), [code]|
| 2018 | QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension | QANet | SQuAD 1.1 | | [paper](https://openreview.net/pdf?id=B14TlG-RW), [code](https://github.com/google-research/google-research/tree/master/qanet) |
| 2017 | ReasoNet: Learning to Stop Reading in Machine Comprehension | ReasoNet | CNN and Daily Mail, SQuAD 1.1 | | [paper](https://arxiv.org/pdf/1609.05284.pdf), [code] |
| 2017 | Reading Wikipedia to Answer Open-Domain Questions | DrQA | Wikipedia, SQuAD 1.1, CuratedTREC, WebQuestions, WikiMovies | OPQA, Multi-Passage MRC | [paper](https://www.aclweb.org/anthology/P17-1171), [code](https://github.com/facebookresearch/DrQA) |
| 2017 | R-Net: Machine Reading Comprehension with Self-Matching Networks | R-Net | SQuAD 1.1, MS-MARCO | | [paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf), [code](https://github.com/HKUST-KnowComp/R-Net) |
| 2017 | Machine Comprehension Using Match-LSTM and Answer Pointer | Match-LSTM + Pointer Network| SQuAD 1.1 | | [paper](https://arxiv.org/pdf/1608.07905.pdf), [code](https://github.com/shuohangwang/SeqMatchSeq) |
| 2017 | Gated-Attention Readers for Text Comprehension | Gated-attention Reader | CNN and Daily Mail, Children’s Book Test, Who Did What | | [paper](https://arxiv.org/pdf/1606.01549.pdf), [code](https://github.com/bdhingra/ga-reader) |
| 2017 | Gated Self-Matching Networks for Reading Comprehension and Question Answering | Gated Self-Matching | SQuAD 1.1 | | [paper](https://www.aclweb.org/anthology/P17-1018.pdf), [code] |
| 2017 | Dynamic CoAttention Networks for Question Answering | Dynamic coattention networks | SQuAD 1.1 | | [paper](https://arxiv.org/pdf/1611.01604.pdf), [code](https://github.com/thomasfermi/Dynamic-Coattention-Network-for-SQuAD) |
| 2017 | DCN+: Mixed Objective and Deep Residual CoAttention for Question Answering | DCN+ | SQuAD 1.1 | | [paper](https://arxiv.org/pdf/1711.00106.pdf), [code](https://github.com/andrejonasson/dynamic-coattention-network-plus) |
| 2017 | Bi-directional Attention Flow for Machine Comprehension | BiDAF | SQuAD 1.1 | | [paper](https://arxiv.org/pdf/1611.01603.pdf), [code](https://github.com/allenai/bi-att-flow) |
| 2017 | Attention-over-Attention Neural Networks for Reading Comprehension | Attention-over-Attention Reader | Children’s Book Test, CNN and Daily Mail | | [paper](https://www.aclweb.org/anthology/P17-1055.pdf), [code](https://github.com/OlavHN/attention-over-attention)|
| 2016 | Text Understanding with the Attention Sum Reader Network | Attention Sum Reader | Children’s Book Test, CNN and Daily Mail | | [paper](https://www.aclweb.org/anthology/P16-1086), [code](https://github.com/rkadlec/asreader) |
| 2016 | Multi-Perspective Context Matching for Machine Comprehension | Multi-Perspective Context Matching | SQuAD 1.1 | | [paper](https://arxiv.org/pdf/1612.04211.pdf), [code] |
| 2016 | Key-Value Memory Networks for Directly Reading Documents | Key-Value Memory Networks | WikiMovies, WikiQA | | [paper](https://aclweb.org/anthology/D16-1147/), [code](https://github.com/facebook/MemNN/tree/master/KVmemnn) |
| 2016 | Iterative Alternating Neural Attention for Machine Reading | Iterative Attention Reader | Children’s Book Test, CNN and Daily Mail | | [paper](https://arxiv.org/pdf/1606.02245.pdf), [code]|
| 2016 | A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task | | CNN and Daily Mail | | [paper](https://www.aclweb.org/anthology/P16-1223.pdf), [code] |
| 2015 | Teaching Machines to Read and Comprehend | Attentive Reader | CNN and Daily Mail | | [paper](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [code](https://github.com/thomasmesnard/DeepMind-Teaching-Machines-to-Read-and-Comprehend) |
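Many of the extractive models in the table above (Match-LSTM + Pointer Network, BiDAF, R-Net, BERT, etc.) reduce answer selection to scoring a start position and an end position over the passage tokens, then decoding the highest-scoring valid span. A hedged sketch of that shared decoding step, independent of any particular model (the greedy search and `max_len` cap are illustrative simplifications):

```python
def best_span(start_scores, end_scores, max_len=15):
    """Return the (start, end) token pair maximizing start_score + end_score,
    subject to start <= end and a maximum span length, plus its score."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_scores):
        # Only consider ends at or after the start, within max_len tokens.
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best, best_score
```

Given per-token scores from any span-extraction model, `best_span` yields the token indices of the predicted answer; real implementations typically vectorize this search over logits.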

## KBMRC: Knowledge-based Machine Reading Comprehension
* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/Knowledge-based-MRC

## OPQA: Open-domain Question Answering
* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/Open-domain-Question-Answering

## UQ: Unanswerable Questions
* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/Unanswerable-Questions

## Multi-Passage MRC: Multi-Passage Machine Reading Comprehension
* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/Multi-Passage-MRC

## CQA: Conversational Question Answering
* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/CQA

## Datasets
- Following Danqi Chen, datasets can be grouped into four answer types:
* Cloze test
* Multiple choice
* Span extraction
* Free answering
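The four answer types differ in what a system must produce. The following made-up, illustrative instances (all names and texts are invented for this sketch) show the shape of each:

```python
# Illustrative (made-up) examples of the four MRC answer types.

cloze = {
    "context": "Machines that read must fill the ____.",
    "answer": "blank",                      # predict the missing word
}

multiple_choice = {
    "question": "What does MRC stand for?",
    "options": ["Machine Reading Comprehension", "Memory Recall Circuit"],
    "answer": 0,                            # index of the correct option
}

span_extraction = {
    "context": "SQuAD was released in 2016 by Stanford.",
    "question": "When was SQuAD released?",
    "answer": (22, 26),                     # context[22:26] == "2016"
}

free_answering = {
    "question": "Summarize the passage.",
    "answer": "Any free-form text, typically scored by ROUGE/BLEU or humans.",
}
```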

* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/MRC-Datasets

| Year | Dataset | Task | Size | Source | Web/Paper | Answer type | Misc | Similar datasets |
| ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| 2019 | ROPES | RC | 14k | Wikipedia + science textbooks | [web](https://allennlp.org/ropes), [paper](https://arxiv.org/pdf/1908.05852.pdf) | Span extraction | background passage + situation | ShARC |
|2019| RC-QED | RC | 12k | Wikipedia |[web](https://naoya-i.github.io/rc-qed/), [paper](https://arxiv.org/pdf/1910.04601.pdf)| Multiple choice | multi-passage | HotpotQA |
|2019| QUOREF| RC | 24k+ | Wikipedia |[web](https://allennlp.org/quoref), [paper](https://www.aclweb.org/anthology/D19-1606.pdf)| Span extraction | coreference resolution | |
|2019| COSMOS QA| | 35,600 | narrative |[web](https://wilburone.github.io/cosmos/), [paper](https://www.aclweb.org/anthology/D19-1243.pdf)| Multiple choice | | |
|2019| DROP | RC | 96k | Wikipedia | [web](https://allennlp.org/drop), [paper](https://arxiv.org/pdf/1903.00161.pdf) | Span extraction + numerical reasoning | multi-span answers | |
|2019| Natural Questions | RC | 323k | Wikipedia | [paper](https://www.aclweb.org/anthology/Q19-1026.pdf) | Span extraction | | |
|2018|SQuAD 2.0| RC | 150k | Wikipedia |[web](https://rajpurkar.github.io/SQuAD-explorer/)| Span extraction | no answer: 50k | NewsQA |
|2018|MultiRC| RC | 6k+ questions | various articles | [web](https://cogcomp.seas.upenn.edu/multirc/), [paper](https://www.aclweb.org/anthology/N18-1023)| Multiple choice | multiple sentence reasoning | MCTest |
|2018|CSQA| QA |200k dialogs, 1.6M turns ||[paper](https://arxiv.org/pdf/1801.10314.pdf)| | | |
|2018| QuAC | RC | 100k | Wikipedia |[web](http://quac.ai/), [paper](https://arxiv.org/pdf/1808.07036.pdf) | Span extraction | conversational questions | CoQA |
|2018| QAngaroo (Wikihop + Medhop) | RC | | Wikipedia + Medline |[web](https://qangaroo.cs.ucl.ac.uk/), [paper](https://transacl.org/ojs/index.php/tacl/article/viewFile/1325/299)| Multiple choice | multi-passage | HotpotQA |
|2018| HotpotQA | RC | 113k | Wikipedia |[web](https://hotpotqa.github.io/), [paper](https://arxiv.org/pdf/1809.09600.pdf)| Span extraction | multi-passage | QAngaroo |
|2018| CoQA | RC | 127k | various articles |[web](https://stanfordnlp.github.io/coqa/)| Free answering | conversational questions | QuAC |
|2018| ComplexWebQuestions | RC | 34,689 | WebQuestionsSP |[web](https://www.tau-nlp.org/compwebq), [paper](https://www.aclweb.org/anthology/N18-1059.pdf)| Span extraction? | multi-passage | |
| 2018 | SWAG | QA | 113k | video caption | | Multiple choice | situational commonsense reasoning | |
| 2018 | RecipeQA | RC | 36k | various | | | multimodal comprehension | |
| 2018 | ProPara | RC | 2k | procedural text | | | | bAbI, SCoNE |
| 2018 | OpenBookQA | QA | 6k | science facts | | Multiple choice | external knowledge | ARC |
| 2018 | FEVER | | | | | | | |
| 2018 | DuReader | | | | | Free answering | | |
| 2018 | DuoRC | RC | 186k | movie plot | | Span extraction | | NarrativeQA |
| 2018 | CLOTH | RC | 99k | English exams | | Cloze test | | RACE |
| 2018 | CliCR | RC | 100k | clinical case text | | Cloze test | | |
| 2018 | ARC | RC | 8k | science exam | | | easy 5197, challenge 2590 | |
|2017|WikiSuggest||||[paper](https://aclweb.org/anthology/D15-1237)| | | |
|2017|TriviaQA| RC | 96k question-answer pairs | Web + Wikipedia | [web](http://nlp.cs.washington.edu/triviaqa/), [paper](https://arxiv.org/pdf/1705.03551.pdf) | Span extraction | | SQuAD |
|2017|SQA||||[paper](https://people.cs.umass.edu/~miyyer/pubs/2017_acl_dynsp.pdf)| | | |
|2017|SearchQA||||[paper](https://arxiv.org/pdf/1704.05179.pdf)| Free answering | | |
|2017|RACE||||[web](http://www.cs.cmu.edu/~glai1/data/race/)| Multiple choice | | |
|2017|NarrativeQA||||[web](https://github.com/deepmind/narrativeqa)| Free answering | | |
|2016|Who-did-What||||[web](https://tticnlp.github.io/who_did_what/)| Cloze test | | |
|2016|SQuAD 1.1| RC | 87k training + 10k development | Wikipedia |[web](https://rajpurkar.github.io/SQuAD-explorer/)| Span extraction | | NewsQA |
|2016|NewsQA||||[web](https://datasets.maluuba.com/NewsQA)| Span extraction | | |
|2016|MS MARCO||||[web](http://www.msmarco.org/dataset.aspx)| Free answering | | |
|2016|LAMBADA||||[web](http://clic.cimec.unitn.it/lambada/)| Cloze test | | |
| 2016 | WikiMovies | QA | | | | | | |
| 2015 | CuratedTREC | QA | | | | | | |
|2015|CNN and Daily Mail| RC | 93k + 220k articles| CNN + Daily Mail |[paper](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [web](https://cs.nyu.edu/~kcho/DMQA/)| Cloze test | | |
|2015|Children's Book Test| RC | 108 children's books | |[web](https://research.fb.com/downloads/babi/)| Cloze test | | |
|2015|bAbI| RC | | classic text adventure game |[web](https://research.fb.com/downloads/babi/)| Free answering | 20 tasks | |
| 2013 | WebQuestions | QA | | | | | | |
|2013|QA4MRE| RC | | various articles |[paper](https://www.cs.cmu.edu/~hovy/papers/13CLEF-QA4MRE.pdf)| Multiple choice | | |
|2013|MCTest| RC | 500 stories + 2k questions | fictional stories |[paper](http://aclweb.org/anthology/D13-1020)| Multiple choice | open-domain | |
|1999|DeepRead| RC | 60 development and 60 test? | news stories|[paper](https://dl.acm.org/citation.cfm?id=1034678.1034731)| Free answering | | |

## Datasets with Explanations
* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/Datasets-with-Explanations

## QA over KG
* Details: https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers/wiki/QA-over-KG

## Knowledge Bases/Knowledge Sources
- Wikidata, [web](https://www.wikidata.org/wiki/Wikidata:Main_Page)
- Freebase, [paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.538.7139&rep=rep1&type=pdf)
- DBPedia

## Question Answering Systems
- IBM's DeepQA
- QuASE
- Microsoft's AskMSR
- YodaQA
- DrQA

## Others (Misc: model, transfer learning, data augmentation, domain adaptation, cross-lingual, ...)
- Minghao Hu, Yuxing Peng, Zhen Huang and Dongsheng Li, **A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1170.pdf).
- Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao and Hongning Wang, **Adversarial Domain Adaptation for Machine Reading Comprehension**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1254.pdf).
- Yimin Jing, Deyi Xiong and Zhen Yan, **BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1249.pdf).
- Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang and Guoping Hu, **Cross-Lingual Machine Reading Comprehension**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1169.pdf).
- Todor Mihaylov and Anette Frank, **Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1257.pdf).
- Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang and Juho Lee, **Learning with Limited Data for Multilingual Reading Comprehension**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1283.pdf).
- Qiu Ran, Yankai Lin, Peng Li, Jie Zhou and Zhiyuan Liu, **NumNet: Machine Reading Comprehension with Numerical Reasoning**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1251.pdf).
- Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang and Guoping Hu, **A Span-Extraction Dataset for Chinese Machine Reading Comprehension**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1600.pdf).
- Daniel Andor, Luheng He, Kenton Lee and Emily Pitler, **Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1609.pdf).
- Tsung-Yuan Hsu, Chi-Liang Liu and Hung-yi Lee, **Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model**, EMNLP 2019, [paper](https://www.aclweb.org/anthology/D19-1607.pdf).
- Kyosuke Nishida et al., **Multi-style Generative Reading Comprehension**, ACL 2019, [paper](https://www.aclweb.org/anthology/P19-1220).
- Alon Talmor and Jonathan Berant, **MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension**, ACL 2019, [paper](https://www.aclweb.org/anthology/P19-1485).
- Yi Tay et al., **Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives**, ACL 2019, [paper](https://www.aclweb.org/anthology/P19-1486).
- Haichao Zhu et al., **Learning to Ask Unanswerable Questions for Machine Reading Comprehension**, ACL 2019, [paper](https://www.aclweb.org/anthology/P19-1415).
- Patrick Lewis et al., **Unsupervised Question Answering by Cloze Translation**, ACL 2019, [paper](https://www.aclweb.org/anthology/P19-1484.pdf).
- Michael Hahn and Frank Keller, **Modeling Human Reading with Neural Attention**, EMNLP 2016, [paper](https://www.aclweb.org/anthology/D16-1009.pdf).
- Jianpeng Cheng et al., **Long Short-Term Memory-Networks for Machine Reading**, EMNLP 2016, [paper](https://www.aclweb.org/anthology/D16-1053.pdf).

## Thanks to these repositories:
- https://github.com/penzant/nlu_datasets_2018
- https://github.com/seriousmac/awesome-qa