https://github.com/nouhadziri/DialogEntailment
The implementation of the paper "Evaluating Coherence in Dialogue Systems using Entailment"
- Host: GitHub
- URL: https://github.com/nouhadziri/DialogEntailment
- Owner: nouhadziri
- License: mit
- Created: 2019-04-13T04:51:33.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2022-04-18T04:33:01.000Z (over 2 years ago)
- Last Synced: 2024-08-02T08:10:07.515Z (4 months ago)
- Topics: bert, dialogue-evaluation, evaluation-framework, natural-language-inference
- Language: Python
- Homepage: https://arxiv.org/abs/1904.03371
- Size: 80.1 KB
- Stars: 74
- Watchers: 7
- Forks: 5
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-bert - nouhadziri/DialogEntailment
README
This repository hosts the implementation of the paper
"[Evaluating Coherence in Dialogue Systems using Entailment](https://arxiv.org/abs/1904.03371)",
published in NAACL'19.

# DialogEntailment
DialogEntailment is a microframework to automatically evaluate coherence in dialogue systems. Our implementation includes the following metrics:
- __Semantic Similarity__, derived from [\[Dziri et al., 2018\]](https://arxiv.org/abs/1811.01063), estimates the correspondence between the utterances in the conversation history and the generated response. The metric is computed as the cosine distance between the embedding vectors of the test utterances in the dialogue history and the generated response.
- __Word-level metrics__, introduced in [\[Liu et al., 2016\]](https://aclweb.org/anthology/D16-1230), incorporate word embeddings to measure three scores: Average, Greedy, and Extrema (to be added to the repo later).
- __Consistency by textual entailment__: we cast the generated response as the hypothesis and the conversation history as the premise, thus projecting the automatic evaluation into a natural language inference (NLI) task.

Note that in the paper, we reported distance for the semantic similarity, but in the code, we named the metric [SemanticDistance](dialogentail/semantic_distance.py) (i.e., the lower the better). We also provide [SemanticSimilarity](dialogentail/semantic_similarity.py), which actually computes the similarity.
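The following is a minimal sketch of the cosine-distance idea behind SemanticDistance, using spaCy document vectors; the actual implementation in [semantic_distance.py](dialogentail/semantic_distance.py) may use different embeddings and a different aggregation over the dialogue history.

```python
# Illustrative only: cosine distance between the (averaged) history embedding
# and the response embedding, using spaCy's en_core_web_lg vectors.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_lg")  # the installation step below links this model to "en"

def semantic_distance(history_utterances, response):
    """Lower is better: 1 - cosine similarity between history and response vectors."""
    history_vec = np.mean([nlp(u).vector for u in history_utterances], axis=0)
    response_vec = nlp(response).vector
    cos_sim = np.dot(history_vec, response_vec) / (
        np.linalg.norm(history_vec) * np.linalg.norm(response_vec) + 1e-12
    )
    return 1.0 - cos_sim

print(semantic_distance(["how was your day?", "pretty good, I went hiking"],
                        "hiking sounds fun, where did you go?"))
```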
## Installation
DialogEntailment is shipped as a Python package and can be installed using `pip`:
```bash
git clone git@github.com:nouhadziri/DialogEntailment.git
cd DialogEntailment
pip install -e .
# the large English model must be available before linking it to "en"
python -m spacy download en_core_web_lg
python -m spacy link en_core_web_lg en
```

### Dependencies
- Python >= 3.6
- SpaCy >= 2.1.0
- allennlp >= 0.8.3
- pytorch-pretrained-bert
- scikit-learn
- tqdm
- smart_open
- pandas
- seaborn

## Dataset
We build a synthesized entailment corpus, namely InferConvAI,
from the ConvAI dialogue data [\[Zhang et al., 2018\]](https://arxiv.org/abs/1801.07243), described in detail in the paper. The dataset is formatted both as tsv (similar to [MultiNLI](https://www.nyu.edu/projects/bowman/multinli/)) and as jsonl (following [SNLI](https://nlp.stanford.edu/projects/snli/)). To download InferConvAI, please use the following links:
- [InferConvAI_v1.3_tsv.tar.gz](https://drive.google.com/file/d/16mxLm1fqkguYVjUibU10D99Ns3L5VgKm/view?usp=sharing) (84MB download / 236MB uncompressed)
- [InferConvAI_v1.3_jsonl.tar.gz](https://drive.google.com/file/d/1yeU7yHzFBs93UkMHtN2uq_rv_nLrD8mF/view?usp=sharing) (74MB download / 274MB uncompressed)
Check out [convai_to_nli.py](dialogentail/preprocessing/convai_to_nli.py) to see how the synthesized inference data is generated from the utterances.
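A minimal sketch of reading the jsonl release is shown below; the field names are assumed to follow the SNLI convention (`sentence1`, `sentence2`, `gold_label`), so check a line of the downloaded file before relying on them.

```python
# Illustrative loader for the jsonl release of InferConvAI (field names assumed SNLI-style).
import json

def read_inferconvai_jsonl(path):
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            examples.append((record["sentence1"],    # premise: conversation history
                             record["sentence2"],    # hypothesis: response
                             record["gold_label"]))  # entailment / neutral / contradiction
    return examples

# examples = read_inferconvai_jsonl("inferconvai_v1.3/train.jsonl")  # path is illustrative
```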
## Train an Entailment model
We adopt two prominent models that have shown promising results in commonsense reasoning:

- The Enhanced Sequential Inference Model (ESIM) [\[Chen et al., 2016\]](https://arxiv.org/abs/1609.06038) entangled with the ELMo [\[Peters et al., 2018\]](https://arxiv.org/abs/1802.05365) contextualized word embeddings. The implementation is obtained from the [AllenNLP](https://allennlp.org/) library. You can run the following command to train the ESIM model with [this](training/configs/esim_elmo.jsonnet) configuration:
```bash
training/allennlp.sh -s <model_dir> [--overwrite] [--config <config_file>]
```
- BERT [\[Devlin et al., 2018\]](https://arxiv.org/abs/1810.04805): We fine-tuned a pre-trained BERT model using :hugs: [Transformers](https://github.com/huggingface/pytorch-pretrained-BERT) (called `pytorch-pretrained-BERT` at the time). We modified [run_classifier.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py) to support the entailment task. Here is how to train the model, followed by the other arguments that can be passed to the program:
```bash
python -m dialogentail.huggingface --do_eval --do_train --output_dir <output_dir>
```
- `--train_dataset` (default: InferConvAI train data)
- `--eval_dataset` (default: InferConvAI validation data)
- `--model`: `bert-base-uncased` or `bert-large-uncased` (default: `bert-base-uncased`)
- `--train_batch_size` (default: 32)
- `--eval_batch_size` (default: 8)
- `--num_train_epochs` (default: 3)
- `--max_seq_length` (default: 128)
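For instance, a fine-tuning run on the default InferConvAI splits might look like the following (the output directory is illustrative):

```bash
# Fine-tune bert-base-uncased on the default InferConvAI splits;
# runs/bert_inferconvai is an arbitrary output directory.
python -m dialogentail.huggingface \
    --do_train --do_eval \
    --model bert-base-uncased \
    --train_batch_size 32 \
    --num_train_epochs 3 \
    --output_dir runs/bert_inferconvai
```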
## Visualization

You may run the `dialogentail` module to replicate the plots provided in the paper:
```
python -m dialogentail --bert_dir <bert_model_dir> --esim_model <esim_model.tar.gz> [--plots_dir <plots_dir>]
```
For the ESIM model, you need to pass the `model.tar.gz` file that allennlp generates in the model directory once training is finished.

Note that loading the BERT model and the ESIM model in the same process requires a massive amount of memory, so we recommend running the above command for each model separately.
#### Custom Test Data
The default test data is 150 dialogues drawn from Reddit (used in [THRED](https://github.com/nouhadziri/THRED) for human evaluation). We also provide a 150-dialogue test set from OpenSubtitles. You can change the test data via the `--response_file` argument. To use our OpenSubtitles data, simply pass `--response_file opensubtitles`.
For your own test data, the file format should be the following for each test sample (see our [Reddit]() data for more information):
Line N: TAB-separated utterances in the conversation history
Line N+1: the ground-truth response
Line N+2: Response generated by Method_1
Line N+3: Response generated by Method_2
...
Line N+m+1: Response generated by Method_m
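For illustration only, with `m=2` hypothetical methods and made-up dialogue, one test sample would therefore occupy four consecutive lines (the utterances in the first line are separated by TAB characters):

```
how was the concert last night?	it was amazing, you should have come	maybe next time
i will definitely join you next time
i am glad you had fun
i do not know what you mean
```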
Run the program with the following arguments:

- `--response_file`: path to your test file
- `--generator_types`: the names of the `m` generative models

By default, the program evaluates the following `m=4` models:
- Seq2Seq [\[Vinyals & Le, 2015\]](https://arxiv.org/abs/1506.05869),
- HRED [\[Serban et al., 2016\]](https://arxiv.org/abs/1507.04808),
- TA-Seq2Seq [\[Xing et al., 2017\]](https://arxiv.org/abs/1606.08340),
- THRED [\[Dziri et al., 2018\]](https://arxiv.org/abs/1811.01063).

#### Correlation with Human Judgment
To measure the correlation with human judgment, you need to provide a pickle file containing the mean evaluation ratings of your human judges. More precisely, the pickle file consists of a Python list of triples `('Method_i', sample_index, mean_rate)`. If you have `m` generative models and `N` test samples, the size of the list would be `N * m`:

    [('Method_1', 1, 2.1), ('Method_2', 1, 3.4), ..., ('Method_m', 1, 2.6), ('Method_1', 2, 0.2), ...]

To pass your own human judgment file, use `--human_judgment <judgment_file>`. For the OpenSubtitles test data, you may simply set the argument to `opensubtitles` to use the provided human judgment.
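A minimal sketch of how such a file could be produced (method names, ratings, and the output path are all hypothetical):

```python
# Build the human-judgment pickle: a flat list of (method, sample_index, mean_rate)
# triples, one per method for every test sample. All values here are made up.
import pickle

judgments = [
    ("Method_1", 1, 2.1), ("Method_2", 1, 3.4),
    ("Method_1", 2, 0.2), ("Method_2", 2, 1.7),
]

# Save the list; pass this path via --human_judgment.
with open("human_judgment.pkl", "wb") as f:
    pickle.dump(judgments, f)
```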
## Citation
Please cite the following paper if you used our work in your research:
```
@inproceedings{dziri-etal-2019-evaluating,
title = "Evaluating Coherence in Dialogue Systems using Entailment",
author = "Dziri, Nouha and
Kamalloo, Ehsan and
Mathewson, Kory and
Zaiane, Osmar",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1381",
doi = "10.18653/v1/N19-1381",
pages = "3806--3812",
}
```