# ChatGPT-RetrievalQA: Can ChatGPT's responses act as training data for Q&A retrieval models?
[![](https://img.shields.io/badge/ChatGPT-RetrievalQA-brightgreen)](https://github.com/arian-askari/ChatGPT-RetrievalQA)
![](https://img.shields.io/badge/Language-English-blue)

This is the repository for the papers ["Generating Synthetic Documents for Cross-Encoder Re-Rankers: A Comparative Study of ChatGPT and Human Experts"](https://arxiv.org/abs/2305.02320) and ["A Test Collection of Synthetic Documents for Training Rankers: ChatGPT vs. Human Experts"](https://dl.acm.org/doi/10.1145/3583780.3615111). It provides a dataset for training and evaluating Question Answering (QA) retrieval models on ChatGPT responses, with the option of training/evaluating on real human responses as well.

If you use this dataset, please cite it using the following BibTeX entries:

```bibtex

@InProceedings{askari2023chatgptcikm2023,
author = {Askari, Arian and Aliannejadi, Mohammad and Kanoulas, Evangelos and Verberne, Suzan},
title = {A Test Collection of Synthetic Documents for Training Rankers: ChatGPT vs. Human Experts},
year = 2023,
booktitle = {The 32nd ACM International Conference on Information and Knowledge Management (CIKM 2023)},
}

@InProceedings{askari2023genirsigir2023,
author = {Askari, Arian and Aliannejadi, Mohammad and Kanoulas, Evangelos and Verberne, Suzan},
title = {Generating Synthetic Documents for Cross-Encoder Re-Rankers: A Comparative Study of ChatGPT and Human Experts},
year = 2023,
booktitle = {Generative Information Retrieval workshop at ACM SIGIR 2023},
}
```

This work was done under the supervision of Prof. [Mohammad Aliannejadi](https://scholar.google.com/citations?user=yiZk6coAAAAJ&hl=en&oi=ao), [Evangelos Kanoulas](https://scholar.google.com/citations?hl=en&user=0HybxV4AAAAJ&view_op=list_works&sortby=pubdate), and [Suzan Verberne](https://scholar.google.com/citations?hl=en&user=-IHDKA0AAAAJ&view_op=list_works&sortby=pubdate) during [my](https://scholar.google.com/citations?user=fp9QtoEAAAAJ&hl=en) research visit to the Information Retrieval Lab at the University of Amsterdam ([IRLab@UvA](https://irlab.science.uva.nl/)).

## Summary of what we did
Given a set of questions with corresponding ChatGPT and human responses, we build two separate collections: one of ChatGPT responses and one of human responses. This provides several analysis opportunities from an **information retrieval perspective** regarding the usefulness of ChatGPT responses for training retrieval models. We provide the dataset for both end-to-end retrieval and a re-ranking setup. To allow flexibility for other analyses, we organize all files separately for ChatGPT and human responses.

## Why rely on retrieval when ChatGPT can generate answers?
While ChatGPT is a powerful language model that can produce impressive answers, it is not immune to mistakes or hallucinations. Furthermore, the source of the information generated by ChatGPT is not transparent: usually no source is given, even when the information is correct. This is an even bigger concern in domains such as law, medicine, science, and other professional fields where trustworthiness and accountability are critical. Retrieval models, as opposed to generative models, retrieve actual (true) information from sources, and search engines provide the source of each retrieved item. This is why information retrieval remains an important application even when ChatGPT is available, especially in situations where reliability is vital.

## Answer ranking dataset

This dataset is based on the public [HC3 dataset](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection), although our experimental setup and evaluation will be different.
We split the data into train, validation, and test sets in order to train/evaluate answer retrieval models on ChatGPT or human answers. We store the actual response by the human/ChatGPT as the relevant answer; for training, a set of random responses can be used as non-relevant answers. In our main experiments, we train on ChatGPT responses and evaluate on human responses. We release the ChatGPT-RetrievalQA dataset in the same format as MS MARCO, a popular dataset for training retrieval models, so existing MS MARCO scripts can be reused directly on our data.

| Description | Filename | File size | Num Records | Format |
|-------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------:|-----------------------------------:|----------------------------------------------------------------|
| Collection-H (H: Human Responses) | [collection_h.tsv](https://drive.google.com/file/d/1M5ZN-5CSnp6fL7u0EgtUjcyjrWQwqiJZ/view?usp=share_link) | 38.6 MB | 58,546 | tsv: pid, passage |
| Collection-C (C: ChatGPT Responses) | [collection_c.tsv](https://drive.google.com/file/d/1--1P0SnaBh4ikwNSmGbP0NDRHUFK8LFZ/view?usp=share_link) | 26.1 MB | 26,882 | tsv: pid, passage |
| Queries | [queries.tsv](https://drive.google.com/file/d/1-9H60KOBVy6vRvkaIMySUKGXV8A45Ygp/view?usp=share_link) | 4 MB | 24,322 | tsv: qid, query |
| Qrels-H Train (Train set Qrels for Human Responses) | [qrels_h_train.tsv](https://drive.google.com/file/d/1-9gu7BhdeRewU7i5ClcTgEbkb2tIUR9x/view?usp=share_link) | 724 KB | 40,406 | TREC qrels format |
| Qrels-H Validation (Validation set Qrels for Human Responses) | [qrels_h_valid.tsv](https://drive.google.com/file/d/1-JH0b37WFL8V-KhDUYShiGcidGArUodZ/view?usp=share_link) | 29 KB | 1,460 | TREC qrels format |
| Qrels-H Test (Test set Qrels for Human Responses) | [qrels_h_test.tsv](https://drive.google.com/file/d/1-IJEzHUJFVoELAuT68k0otFKN4QyZua0/view?usp=share_link) | 326 KB | 16,680 | TREC qrels format |
| Qrels-C Train (Train set Qrels for ChatGPT Responses) | [qrels_c_train.tsv](https://drive.google.com/file/d/1-Kllea1-oP3LoS98TU5WAHJ3SyALav-g/view?usp=share_link) | 339 KB | 18,452 | TREC qrels format |
| Qrels-C Validation (Validation set Qrels for ChatGPT Responses) | [qrels_c_valid.tsv](https://drive.google.com/file/d/1-S0tA7_B_vqjU3AGG2I1QTu-WaQ_9O0X/view?usp=share_link) | 13 KB | 672 | TREC qrels format |
| Qrels-C Test (Test set Qrels for ChatGPT Responses) | [qrels_c_test.tsv](https://drive.google.com/file/d/1-UC8sq8mKTvUxnyZCZljMQ1JCI-iYFRp/view?usp=share_link) | 152 KB | 7,756 | TREC qrels format |
| Queries, Answers, and Relevance Labels | [collectionandqueries.zip](https://drive.google.com/file/d/1-VDhikUVr6k0ZRRArGruazQuCMPtk-mT/view?usp=share_link) | 23.9 MB | 866,504 | |
| Train-H Triples | [train_h_triples.tsv](https://drive.google.com/file/d/1-7Im-U8RG7XvWW9QxOfERFii69S8clJ5/view?usp=share_link) | 58.68 GB | 40,641,772 | tsv: query, positive passage, negative passage |
| Validation-H Triples | [valid_h_triples.tsv](https://drive.google.com/file/d/1-PnmO8fG_HgcBeWS5Akf9tc67VFyMQWz/view?usp=share_link) | 2.02 GB | 1,468,526 | tsv: query, positive passage, negative passage |
| Train-H Triples QID PID Format | [train_h_qidpidtriples.tsv](https://drive.google.com/file/d/1-G3GCx50PnwF4LaZAHcHkmcmBpe81j5i/view?usp=share_link) | 921.7 MB | 40,641,772 | tsv: qid, positive pid, negative pid |
| Validation-H Triples QID PID Format | [valid_h_qidpidtriples.tsv](https://drive.google.com/file/d/1-SITWpMrKGDW7RZiXjntJjKa7gcRpSHB/view?usp=share_link) | 35.6 MB | 1,468,526 | tsv: qid, positive pid, negative pid |
| Train-C Triples | [train_c_triples.tsv](https://drive.google.com/file/d/1-UBkFxLqpXwxXjm_RpYpVJ-06nDgwEh1/view?usp=share_link) | 37.4 GB | 18,473,122 | tsv: query, positive passage, negative passage |
| Validation-C Triples | [valid_c_triples.tsv](https://drive.google.com/file/d/1-mx01uFJI3HGAjdfbGQq2gQqvEsPHFcN/view?usp=share_link) | 1.32 GB | 672,659 | tsv: query, positive passage, negative passage |
| Train-C Triples QID PID Format | [train_c_qidpidtriples.tsv](https://drive.google.com/file/d/1-UJjrGbbUza0pw4bCIZhbTvnzkUVwEig/view?usp=share_link) | 429.6 MB | 18,473,122 | tsv: qid, positive pid, negative pid |
| Validation-C Triples QID PID Format | [valid_c_qidpidtriples.tsv](https://drive.google.com/file/d/1-nhumklMpM7VDRkZPDeh56MJmoGe8DSn/view?usp=share_link) | 16.4 MB | 672,659 | tsv: qid, positive pid, negative pid |

We release the training and validation data in triples format to facilitate training. The triples files for training on ChatGPT responses are "train_c_triples.tsv" and "valid_c_triples.tsv". We also release triples based on human responses ("train_h_triples.tsv" and "valid_h_triples.tsv") so that training on ChatGPT responses can be compared with training on human responses. For each query and positive answer, 1,000 negative answers were sampled randomly.
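For example, here is a minimal Python sketch for loading the collection and queries and streaming the triples, assuming the TSV layouts listed in the table above (adjust the paths to wherever you downloaded the files):

```python
import csv
import sys

# Passages can be long, so raise csv's default field size limit.
csv.field_size_limit(sys.maxsize)

def load_tsv_dict(path):
    """Load a two-column TSV (id, text) into a dict."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0]: row[1] for row in csv.reader(f, delimiter="\t")}

collection = load_tsv_dict("collection_c.tsv")  # pid -> ChatGPT response
queries = load_tsv_dict("queries.tsv")          # qid -> question

def iter_triples(path):
    """Stream (query, positive passage, negative passage) triples
    without loading the multi-GB file into memory."""
    with open(path, newline="", encoding="utf-8") as f:
        yield from csv.reader(f, delimiter="\t")

for i, (query, positive, negative) in enumerate(iter_triples("train_c_triples.tsv")):
    if i == 3:  # peek at the first few triples
        break
    print(query[:50], "|", positive[:50], "|", negative[:50])
```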

## Answer re-ranking dataset
| Description | Filename | File size | Num Records |
|-------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------:|-----------------------------------:|----------------------------------------------------------------|
| Top-H 1000 Train | [top_1000_h_train.run](https://drive.google.com/file/d/1aZiXlRh0oSTsv0aBGMzPhVH8wT1RAgxS/view?usp=share_link) | 646.6 MB | 16,774,122 |
| Top-H 1000 Validation | [top_1000_h_valid.run](https://drive.google.com/file/d/1-NkD-LFqx0BnEZcBAPBl3al56SjrstIn/view?usp=share_link) | 23.7 MB | 605,956 |
| Top-H 1000 Test | [top_1000_h_test.run](https://drive.google.com/file/d/1-M9jVYLuzhcGW8AfsRXw7d86M5xhdqdL/view?usp=share_link) | 270.6 MB | 6,920,845 |
| Top-C 1000 Train | [top_1000_c_train.run](https://drive.google.com/file/d/1-PdmL3dX1-oVrw8TQKpLxT6RyZSmK-Wm/view?usp=share_link) | 646.6 MB | 16,768,032 |
| Top-C 1000 Validation | [top_1000_c_valid.run](https://drive.google.com/file/d/1-UzOgQnwxM6K-L-2EhU9zOrwy3lZ8T3D/view?usp=share_link) | 23.7 MB | 605,793 |
| Top-C 1000 Test | [top_1000_c_test.run](https://drive.google.com/file/d/1-dzQM516jC5miaAMZlZosaAE7E-vv6VK/view?usp=share_link) | 271.1 MB | 6,917,616 |

The run files of the answer re-ranking dataset are in TREC run format.

*Note*: We use BM25 in Elasticsearch as the first-stage ranker to retrieve the top-1,000 documents per question (i.e., query). For some queries, however, fewer than 1,000 documents are retrieved, which means the collection contains fewer than 1,000 documents that share at least one term with the query.
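The run files can be parsed per query with a few lines of Python; a minimal sketch, assuming the standard six-column TREC run layout (`qid Q0 pid rank score tag`):

```python
from collections import defaultdict

def load_trec_run(path):
    """Parse a TREC run file into {qid: [(pid, rank, score), ...]},
    keeping the candidates in their ranked order."""
    run = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            qid, _, pid, rank, score, _tag = line.split()
            run[qid].append((pid, int(rank), float(score)))
    return run

run = load_trec_run("top_1000_c_test.run")
qid, ranking = next(iter(run.items()))
print(qid, "has", len(ranking), "candidates")  # may be fewer than 1,000
```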

### Analyzing the effectiveness of BM25 on human/ChatGPT responses

Coming soon.

## BERT re-ranking effectiveness on the Qrels-H Test
We train BERT on the responses produced by ChatGPT (using the queries.tsv, collection_c.tsv, train_c_triples.tsv, valid_c_triples.tsv, qrels_c_train.tsv, and qrels_c_valid.tsv files). Next, we evaluate the effectiveness of BERT as an answer re-ranker on human responses (using queries.tsv, collection_h.tsv, top_1000_c_test.run, and qrels_h_test.tsv). By doing so, we answer the following question: "What is the effectiveness of an answer retrieval model that is trained on ChatGPT responses when we evaluate it on human responses?"
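As a rough illustration of this setup, here is a minimal sketch using the sentence-transformers `CrossEncoder` API. The model name, subsample size, and hyperparameters below are illustrative assumptions, not the exact configuration used in the papers:

```python
import csv
import sys
from itertools import islice

from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

csv.field_size_limit(sys.maxsize)

# Each triple (query, positive, negative) yields one positive and one
# negative training pair. We subsample here to keep the sketch cheap;
# the full train_c_triples.tsv has ~18.5M triples.
train_samples = []
with open("train_c_triples.tsv", newline="", encoding="utf-8") as f:
    for query, pos, neg in islice(csv.reader(f, delimiter="\t"), 100_000):
        train_samples.append(InputExample(texts=[query, pos], label=1.0))
        train_samples.append(InputExample(texts=[query, neg], label=0.0))

model = CrossEncoder("bert-base-uncased", num_labels=1, max_length=512)
model.fit(
    train_dataloader=DataLoader(train_samples, shuffle=True, batch_size=32),
    epochs=1,
    warmup_steps=1000,
)

# At test time, score each (question, candidate answer) pair from the
# BM25 run and sort the candidates by score to get the re-ranked list.
pairs = [["example question", "candidate answer A"],
         ["example question", "candidate answer B"]]
scores = model.predict(pairs)
reranked = sorted(zip(pairs, scores), key=lambda x: -x[1])
```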

Coming soon.

## Collection of responses produced by other Large Language Models (LLMs)

Coming soon.

## Code for creating the dataset
[ChatGPT-RetrievalQA-Dataset-Creator](https://colab.research.google.com/drive/1OK8H_SYUD7n_LKTNj33kANP4t2fLcmGt?usp=sharing) [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1OK8H_SYUD7n_LKTNj33kANP4t2fLcmGt?usp=sharing)

## Dataset source and copyright
Special thanks to the [HC3 team](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection) for releasing the Human ChatGPT Comparison Corpus (HC3). Our dataset is created based on theirs and follows their license.