awesome-text-interpretability
A repo to keep all resources about interpretability in NLP organised and up to date
https://github.com/copenlu/awesome-text-interpretability
Other
- Sanity Checks for Saliency Maps - proposes randomization tests that a faithful saliency method should pass (a minimal sketch follows this list)
- Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics, EMNLP 2020 - a model-based tool to characterize and diagnose datasets
- How does this interaction affect me? Interpretable attribution for feature interactions, NeurIPS 2020 - proposes an interaction attribution and detection framework called Archipelago; scalable in real-world settings and produces more interpretable explanations than comparable methods, which is important for analyzing the impact of interactions on predictions
- More Bang for Your Buck: Natural Perturbation for Robust Question Answering, EMNLP 2020
- FIND: Human-in-the-Loop Debugging Deep Text Classifiers, EMNLP 2020
- Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension (UCL group), EMNLP 2020
- Human-grounded Evaluations of Explanation Methods for Text Classification, EMNLP 2019 - compares explanation methods such as LIME, Grad-CAM-Text, and decision trees (over words and n-grams); LIME is found to be the most class-discriminative approach. Unfortunately, annotator agreement is considerably low in most tasks, and one general improvement would be to present the words and n-grams together with the context they appear in.
- Analysis Methods in Neural Language Processing: A Survey, TACL 2019
- Interpretation of Neural Networks is Fragile
- Manipulating and Measuring Model Interpretability
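The model-parameter randomization test from "Sanity Checks for Saliency Maps" can be illustrated in a few lines: compute a gradient saliency map for a trained model, randomize the weights, recompute the map, and check how correlated the two maps are. A faithful saliency method should produce a very different map once the weights are random. The toy classifier and all names below are placeholders, not code from the paper; a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr

# Toy text classifier: embeddings -> mean pool -> linear head.
class ToyClassifier(nn.Module):
    def __init__(self, vocab_size=1000, dim=32, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.fc = nn.Linear(dim, n_classes)

def gradient_saliency(model, token_ids, target):
    """Gradient-x-input saliency, one score per token."""
    emb = model.emb(token_ids).detach().requires_grad_(True)
    logits = model.fc(emb.mean(dim=0))
    logits[target].backward()
    return (emb.grad * emb).sum(dim=-1).abs()

tokens = torch.randint(0, 1000, (12,))          # a fake 12-token input
model = ToyClassifier()
saliency_trained = gradient_saliency(model, tokens, target=1)

# Randomize the weights and recompute the saliency map.
for p in model.parameters():
    nn.init.normal_(p, std=0.1)
saliency_random = gradient_saliency(model, tokens, target=1)

# A low rank correlation means the saliency method passes the sanity check,
# i.e. it actually depends on the learned parameters.
rho, _ = spearmanr(saliency_trained.detach().numpy(),
                   saliency_random.detach().numpy())
print(f"Spearman correlation between maps: {rho:.2f}")
```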
Social Sciences
Fact Checking
- Explainable Fact Checking with Probabilistic Answer Set Programming, TTO 2019 - retrieves triples from knowledge graphs and combines them using rules to produce explanations for fact checking (a toy illustration follows)
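The paper itself uses probabilistic Answer Set Programming; purely as an illustration of the underlying idea (chaining knowledge-graph triples through a rule to justify a verdict), here is a toy rule check in plain Python. The triples, rule, and claim are invented for this sketch:

```python
# Toy knowledge graph as (subject, relation, object) triples.
TRIPLES = {
    ("Marie", "born_in", "Warsaw"),
    ("Warsaw", "located_in", "Poland"),
}

def check_born_in_country(person, country):
    """Toy rule: born_in(P, City) and located_in(City, Country) => born_in_country(P, Country)."""
    for (p, rel, city) in TRIPLES:
        if rel == "born_in" and p == person:
            if (city, "located_in", country) in TRIPLES:
                explanation = (
                    f"{person} was born in {city}, and {city} is located in {country}, "
                    f"so the claim '{person} was born in {country}' is supported."
                )
                return True, explanation
    return False, f"No supporting chain of triples found for '{person} born in {country}'."

verdict, why = check_born_in_country("Marie", "Poland")
print(verdict)  # True
print(why)      # human-readable justification assembled from the rule that fired
```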
Machine Reading Comprehension / Question Answering
- Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering, EMNLP 2020 - three datasets and delexicalized chain representations in which repeated noun phrases are replaced by variables, turning them into generalized reasoning chains (a toy delexicalization sketch follows)
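A crude sketch of the delexicalization step described above: noun phrases that recur across the sentences of a reasoning chain are replaced by variables so that chains generalize. The heuristic below (capitalized spans stand in for noun phrases) is only illustrative and far simpler than the paper's datasets and models:

```python
import re

def delexicalize(chain):
    """Replace noun phrases that appear in more than one sentence with variables X1, X2, ...
    A 'noun phrase' is crudely approximated by a capitalized word span."""
    noun_phrase = re.compile(r"(?:[A-Z][a-z]+)(?: [A-Z][a-z]+)*")
    counts = {}
    for sent in chain:
        for np in set(noun_phrase.findall(sent)):
            counts[np] = counts.get(np, 0) + 1
    # Longest phrases first so "New York City" is replaced before "New York".
    shared = sorted((np for np, c in counts.items() if c > 1), key=len, reverse=True)
    mapping = {np: f"X{i + 1}" for i, np in enumerate(shared)}
    delexed = []
    for sent in chain:
        for np, var in mapping.items():
            sent = sent.replace(np, var)
        delexed.append(sent)
    return delexed, mapping

chain = ["Eiffel Tower is located in Paris.",
         "Paris is the capital of France."]
print(delexicalize(chain))
# (['Eiffel Tower is located in X1.', 'X1 is the capital of France.'], {'Paris': 'X1'})
```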
Saliency maps
- Hierarchical interpretations for neural network predictions, ICLR 2019 - provides a hierarchical visualisation of how words contribute to phrases and how phrases contribute to bigger pieces of text and eventually to the overall prediction (a minimal word-level saliency sketch follows this list)
- Towards a Deep and Unified Understanding of Deep Neural Models in NLP, ICML 2019
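As a reference point for the methods in this section, word-level saliency can be approximated with simple occlusion: remove one word at a time and measure the drop in the predicted-class probability. The classifier below is a stand-in (any callable mapping tokens to a probability will do); all names are illustrative:

```python
from typing import Callable, List, Tuple

def occlusion_saliency(predict_proba: Callable[[List[str]], float],
                       tokens: List[str]) -> List[Tuple[str, float]]:
    """Score each word by the drop in predicted-class probability when it is removed."""
    base = predict_proba(tokens)
    scores = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]
        scores.append((tokens[i], base - predict_proba(occluded)))
    return scores

# Example with a dummy "sentiment" model that only reacts to a few cue words.
def dummy_predict_proba(tokens: List[str]) -> float:
    cues = {"great": 0.4, "awful": -0.4, "not": -0.2}
    return max(0.0, min(1.0, 0.5 + sum(cues.get(t, 0.0) for t in tokens)))

print(occlusion_saliency(dummy_predict_proba, "the movie was great".split()))
# "great" gets the largest positive score; the other words score ~0
```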
Generating Rationales
- Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision, EMNLP 2020 - a differentiable training framework for models that output faithful sentence-level rationales using supervision on the target task only; the model solves the task based on each candidate rationale individually and learns to assign high scores to those that solve it best
- F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering, EMNLP 2020 - two novel evaluation scores: (i) tracking prediction changes when facts are removed, (ii) assessing whether the answer is contained in the explanation; further strengthens the coupling of answer and explanation prediction in the model architecture and during training
- Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers, EMNLP 2020 - variational word masks (VMASK) inserted into a neural text classifier after the word embedding layer and trained jointly with the model; VMASK learns to restrict the flow of globally irrelevant or noisy word-level features to subsequent network layers, forcing the model to focus on important features when making predictions (a minimal masking sketch follows this list)
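A rough sketch of the word-masking idea behind VMASK (not the paper's exact variational formulation): insert a learned gate between the embedding layer and the rest of the classifier, scale each word embedding by a sigmoid mask, and add a sparsity penalty so that only useful words pass through. Module names and the penalty weight are placeholders:

```python
import torch
import torch.nn as nn

class MaskedTextClassifier(nn.Module):
    """Word embeddings -> per-word gate -> mean pool -> linear classifier."""
    def __init__(self, vocab_size=1000, dim=32, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.gate = nn.Linear(dim, 1)   # per-word mask logits; a deterministic
                                        # stand-in for VMASK's variational masks
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, token_ids):
        e = self.emb(token_ids)                 # (seq_len, dim)
        mask = torch.sigmoid(self.gate(e))      # (seq_len, 1), values in [0, 1]
        pooled = (mask * e).mean(dim=0)
        return self.fc(pooled), mask.squeeze(-1)

model = MaskedTextClassifier()
tokens = torch.randint(0, 1000, (10,))
label = torch.tensor(1)

logits, mask = model(tokens)
task_loss = nn.functional.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
sparsity_loss = mask.mean()              # push masks toward 0 unless the word is useful
loss = task_loss + 0.1 * sparsity_loss   # 0.1 is an arbitrary illustrative weight
loss.backward()
print(mask.detach())  # per-word importance scores learned jointly with the task
```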
On Human Rationales
- From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli?, EMNLP 2020 - finds that the syntactic signatures in Sentence and Jabberwocky LSTM representations are similar and can be predicted from either Sentence or Jabberwocky EEG; infers which LSTM representations encode semantic and/or syntactic information (confirmed with syntactic and semantic probing tasks), and shows similarities between how the brain and an LSTM represent stimuli from both the Sentence (within-distribution) and Jabberwocky (out-of-distribution) conditions
- Evaluating and Characterizing Human Rationales, EMNLP 2020 - asks how human rationales fare under automatic explainability metrics; finds they do not necessarily perform well and often contain irrelevance and redundancy, and derives actionable suggestions for evaluating and characterizing rationales (a sketch of typical metrics follows this list)
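The automatic metrics referred to above are typically sufficiency (how well the model does when it sees only the rationale) and comprehensiveness (how much the prediction degrades when the rationale is removed). A model-agnostic sketch, with the classifier passed in as a callable and all names illustrative:

```python
from typing import Callable, List

def sufficiency(predict_proba: Callable[[List[str]], float],
                tokens: List[str], rationale: List[bool]) -> float:
    """Probability lost when the model sees ONLY the rationale tokens (lower is better)."""
    kept = [t for t, r in zip(tokens, rationale) if r]
    return predict_proba(tokens) - predict_proba(kept)

def comprehensiveness(predict_proba: Callable[[List[str]], float],
                      tokens: List[str], rationale: List[bool]) -> float:
    """Probability lost when the rationale tokens are REMOVED (higher is better)."""
    rest = [t for t, r in zip(tokens, rationale) if not r]
    return predict_proba(tokens) - predict_proba(rest)

# Usage with any classifier wrapped as `lambda toks: prob_of_predicted_class(toks)`:
# suff = sufficiency(model_fn, tokens, human_rationale_mask)
# comp = comprehensiveness(model_fn, tokens, human_rationale_mask)
```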
Datasets with Highlights
Datasets with Textual explanations
- Where is your Evidence: Improving Fact-checking by Justification Modeling
- AllenNLP Interpret - a toolkit for interpreting NLP model predictions (saliency maps and adversarial attacks for tasks including reading comprehension)
- Interpretable Machine Learning
- Explain Yourself! Leveraging Language Models for Commonsense Reasoning
- e-SNLI: Natural Language Inference with Natural Language Explanations