Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Zjh-819/LLMDataHub
A quick guide (especially) for trending instruction finetuning datasets
- Host: GitHub
- URL: https://github.com/Zjh-819/LLMDataHub
- Owner: Zjh-819
- License: mit
- Created: 2023-04-10T05:38:52.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-11-28T09:41:28.000Z (11 months ago)
- Last Synced: 2024-08-02T10:27:31.532Z (3 months ago)
- Topics: chatbot, chatgpt, dataset, llm
- Size: 4.91 MB
- Stars: 2,304
- Watchers: 45
- Forks: 151
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
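Since ecosyste.ms is an open API service, the metadata above can also be fetched programmatically. A minimal sketch in Python; the exact route and field names are assumptions based on the documented URL scheme, so verify them against the ecosyste.ms API docs:

```python
import requests

# Hypothetical endpoint -- check the ecosyste.ms API docs for the real route.
url = "https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Zjh-819%2FLLMDataHub"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
repo = resp.json()

# Field names are assumptions mirroring the metadata shown on this page.
print(repo.get("full_name"), "-", repo.get("description"))
print("stars:", repo.get("stargazers_count"), "forks:", repo.get("forks_count"))
```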
Awesome Lists containing this project
- awesome-huge-models - LLMDataHub
- awesome-ai-list-guide - LLMDataHub
- StarryDivineSky - Zjh-819/LLMDataHub
- awesome-chatgpt - Zjh-819/LLMDataHub - A quick guide (especially) for trending instruction finetuning datasets (Documentation and examples / Lists, Guides and examples)
- Awesome-LLM - LLMDatahub - a curated collection of datasets specifically designed for chatbot training, including links, size, language, usage, and a brief description of each dataset (Other Papers)
README
# LLMDataHub: Awesome Datasets for LLM Training
----------------------------------
🔥 Alignment Datasets • 💡 Domain-specific Datasets • :atom: Pretraining Datasets • 🖼️ Multimodal Datasets
## Introduction 📄
Large language models (LLMs), such as OpenAI's GPT series, Google's Bard, and Baidu's Wenxin Yiyan, are driving profound technological change. With the emergence of open-source model frameworks such as LLaMA and ChatGLM, training an LLM is no longer the exclusive domain of resource-rich companies; training LLMs as a small organization or individual has become a major interest in the open-source community, with notable works including Alpaca, Vicuna, and Luotuo. Beyond model frameworks, large-scale, high-quality training corpora are also essential, yet relevant open-source corpora remain scattered across the community. The goal of this repository is therefore to continuously collect high-quality training corpora for LLMs in the open-source community.

Training a chatbot LLM that can follow human instructions effectively requires access to high-quality datasets covering a range of conversation domains and styles. In this repository, we provide a curated collection of datasets specifically designed for chatbot training, including links, size, language, usage, and a brief description of each dataset. Our goal is to make it easier for researchers and practitioners to identify and select the most relevant and useful datasets for their chatbot LLM training needs. Whether you're working on improving chatbot dialogue quality, response generation, or language understanding, this repository has something for you.
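Most entries below that live on the Hugging Face Hub can be pulled with the `datasets` library. A minimal sketch using databricks-dolly-15k (one of the SFT datasets cataloged below) as an example:

```python
from datasets import load_dataset

# Download an instruction-tuning dataset from the Hugging Face Hub.
ds = load_dataset("databricks/databricks-dolly-15k", split="train")

print(ds)                    # features and number of rows
print(ds[0]["instruction"])  # one human-written prompt
```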
### Contact 📬
If you want to contribute, you can contact: [Junhao Zhao]([email protected]) 📧
Advised by [Prof. Wanyun Cui](https://cuiwanyun.github.io/) [![](https://img.shields.io/badge/[email protected])](https://cuiwanyun.github.io/)

## General Open Access Datasets for Alignment 🟢:
#### Type Tags 🏷️:
- SFT: Supervised Finetuning
- Dialog: Each entry contains continuous conversations
- Pairs: Each entry is an input-output pair
- Context: Each entry has a context text and related QA pairs
- PT: Pretraining
- CoT: Chain-of-Thought Finetuning
- RLHF: Used to train the reward model in Reinforcement Learning from Human Feedback
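Most of these tags describe the shape of a single record. For illustration, minimal sketches of the three most common shapes (field names vary from dataset to dataset; these are representative, not canonical):

```python
# Pairs: one input, one output
pairs_entry = {
    "instruction": "Summarize the following paragraph.",
    "output": "The paragraph argues that ...",
}

# Dialog: a continuous multi-turn conversation
dialog_entry = {
    "conversations": [
        {"role": "user", "content": "What is RLHF?"},
        {"role": "assistant", "content": "Reinforcement Learning from Human Feedback ..."},
        {"role": "user", "content": "Why does it need a reward model?"},
    ]
}

# RLHF: a prompt with a preferred and a rejected response, for reward modeling
rlhf_entry = {
    "prompt": "Explain quantum entanglement simply.",
    "chosen": "Entanglement means two particles share a state ...",
    "rejected": "It is impossible to explain.",
}
```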
### Datasets released in November 2023

| Dataset name | Used by | Type | Language | Size | Description |
|----------------------------------------------------------------------|---------|------|----------|---------------|------------------------------------------------------------------------------------------------------------------------|
| [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) | / | RLHF | English | 37k instances | An RLHF dataset annotated by humans with helpfulness, correctness, coherence, complexity, and verbosity measures. |
| [no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) | / | SFT | English | 10k instances | High-quality human-created SFT data, single-turn. |

### Datasets released in September 2023
| Dataset name | Used by | Type | Language | Size | Description |
|------------------------------------------------------------------------------------------------------------------|---------|------------|----------|-------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Anthropic_HH_Golden](https://huggingface.co/datasets/Unified-Language-Model-Alignment/Anthropic_HH_Golden) | ULMA | SFT / RLHF | English | train 42.5k + test 2.3k | An improved version of the harmless split of Anthropic's Helpful and Harmless (HH) datasets, using GPT-4 to rewrite the original "chosen" answers. Compared with the original Harmless data, this dataset empirically improves the performance of RLHF, DPO, and ULMA methods significantly on harmlessness metrics. |

### Datasets released in August 2023
| Dataset name | Used by | Type | Language | Size | Description |
|---------------------------------------------------------------------------------------------------------|---------------------------|---------------------|---------------------|-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
| [function_calling_extended](https://huggingface.co/datasets/Trelis/function_calling_extended) | / | Pairs | English, code | / | A high-quality human-created dataset for enhancing LMs' API-calling ability. |
| [AmericanStories](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) | / | PT | English | / | A vast corpus scanned from the US Library of Congress. |
| [dolma](https://huggingface.co/datasets/allenai/dolma) | OLMo | PT | / | 3T tokens | A large diverse open-source corpus for LM pretraining. |
| [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) | Platypus2 | Pairs | English | 25K | A very high quality dataset for improving LM's STEM reasoning ability. |
| [Puffin](https://huggingface.co/datasets/LDJnr/Puffin) | Redmond-Puffin series | Dialog | English | ~3k entries | A dataset of conversations between real humans and GPT-4, featuring long context (over 1k tokens per conversation) and multi-turn dialogs. |
| [tiny series](https://huggingface.co/datasets/nampdn-ai/tiny-codes) | / | Pairs | English | / | A series of short, concise code snippets and texts aimed at improving LMs' reasoning ability. |
| [LongBench](https://huggingface.co/datasets/THUDM/LongBench) | / | Evaluation only | English, Chinese | 17 tasks | A benchmark for evaluating LLMs' long-context understanding capability. |

### Datasets released in July 2023
| Dataset name | Used by | Type | Language | Size | Description |
|-------------------------------------------------------------------------------------------------------------|--------------|-----------------|--------------|-------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [orca-chat](https://huggingface.co/datasets/shahules786/orca-chat) | / | Dialog | English | 198,463 entries | An Orca-style dialog dataset aimed at improving LMs' long-context conversational ability. |
| [DialogStudio](https://github.com/salesforce/DialogStudio) | / | Dialog | Multilingual | / | A collection of diverse datasets aimed at building conversational chatbots. |
| [chatbot_arena_conversations](https://huggingface.co/datasets/lmsys/chatbot_arena_conversations) | / | RLHF, Dialog | Multilingual | 33k conversations | Cleaned conversations with pairwise human preferences collected on Chatbot Arena. |
| [WebGLM-qa](https://huggingface.co/datasets/THUDM/webglm-qa) | WebGLM | Pairs | English | 43.6k entries | The dataset used by WebGLM, a QA system based on an LLM and Internet retrieval. Each entry comprises a question, a response, and a reference; the response is grounded in the reference. |
| [phi-1](https://huggingface.co/datasets/teleprint-me/phi-1) | phi-1 | Dialog | English | / | A dataset generated by using the method in [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644). It focuses on math and CS problems. |
| [Linly-pretraining-dataset](https://huggingface.co/datasets/Linly-AI/Chinese-pretraining-dataset) | Linly series | PT | Chinese | 3.4GB | The Chinese pretraining dataset used by the Linly series models, comprising ClueCorpusSmall, CSL, news-crawl, etc. |
| [FineGrainedRLHF](https://github.com/allenai/FineGrainedRLHF) | / | RLHF | English | ~5K examples | A repo that aims to develop a new framework for collecting human feedback. The data is collected to improve LLMs' factual correctness, topic relevance, and other abilities. |
| [dolphin](https://huggingface.co/datasets/ehartford/dolphin) | / | Pairs | English | 4.5M entries | An attempt to replicate Microsoft's Orca. Based on FLANv2. |
| [openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset) | OpenChat | Dialog | English | 6k dialogs | A high-quality dataset generated by using GPT-4 to complete refined ShareGPT prompts. |

### Datasets released in June 2023
| Dataset name | Used by | Type | Language | Size | Description |
|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------|--------------|-----------------------|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) | / | Pairs | English | 4.5M completions | A collection of augmented FLAN data, generated using the method in the Orca paper. |
| [COIG-PC](https://huggingface.co/datasets/BAAI/COIG-PC) [COIG-Lite](https://huggingface.co/datasets/BAAI/COIG-PC-Lite) | / | Pairs | Chinese | / | An enhanced version of COIG. |
| [WizardLM_Orca](https://huggingface.co/datasets/psmathur/WizardLM_Orca) | orca_mini series | Pairs | English | 55K entries | Enhanced WizardLM data, generated using Orca's method. |
| arxiv instruct datasets: [math](https://huggingface.co/datasets/ArtifactAI/arxiv-math-instruct-50k) [CS](https://huggingface.co/datasets/ArtifactAI/arxiv-beir-cs-ml-generated-queries) [Physics](https://huggingface.co/datasets/ArtifactAI/arxiv-physics-instruct-tune-30k) | / | Pairs | English | 50K / 50K / 30K entries | Question-answer pairs derived from arXiv abstracts. Questions are generated using the t5-base model; answers are generated using the GPT-3.5-turbo model. |
| [im-feeling-curious](https://huggingface.co/datasets/xiyuez/im-feeling-curious) | / | Pairs | English | 2595 entries | Random questions and corresponding facts generated by Google's **I'm feeling curious** feature. |
| [ign_clean_instruct_dataset_500k](https://huggingface.co/ignmilton) | / | Pairs | / | 509K entries | A large-scale SFT dataset synthetically created from a subset of Ultrachat prompts. ⚠ Lacks a detailed datacard. |
| [WizardLM evol_instruct V2](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) | WizardLM | Dialog | English | 196k entries | The latest version of the Evol-Instruct dataset. |
| [Dynosaur](https://github.com/WadeYin9712/Dynosaur) | / | Pairs | English | 800K entries | A dataset generated by applying the method in [this paper](https://dynosaur-it.github.io/); its highlight is generating high-quality data at low cost. |
| [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | / | PT | Primarily English | / | A cleaned and deduplicated version of RedPajama. |
| [LIMA dataset](https://huggingface.co/datasets/GAIR/lima) | LIMA | Pairs | English | 1k entries | High quality SFT dataset used by [LIMA: Less Is More for Alignment](https://arxiv.org/pdf/2305.11206.pdf) |
| [TigerBot Series](https://github.com/TigerResearch/TigerBot#%E5%BC%80%E6%BA%90%E6%95%B0%E6%8D%AE%E9%9B%86) | TigerBot | PT, Pairs | Chinese, English | / | Datasets used to train TigerBot, including pretraining data, SFT data, and some domain-specific datasets such as financial research reports. |
| [TSI-v0](https://huggingface.co/datasets/tasksource/tasksource-instruct-v0) | / | Pairs | English | 30k examples per task | Multi-task instruction-tuning data recast from 475 tasksource datasets, similar to the Flan collection and Natural Instructions. |
| [MNBVC](https://github.com/esbatmop/MNBVC) | / | PT | Chinese | / | A large-scale, continuously updated Chinese pretraining dataset. |
| [StackOverflow post](https://huggingface.co/datasets/mikex86/stackoverflow-posts) | / | PT | / | 35GB | Raw StackOverflow data in markdown format, for pretraining. |

### Datasets released before June 2023
| Dataset name | Used by | Type | Language | Size | Description |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------|------------------------------|------------------------------------------------------|-----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [LaMini-Instruction](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) | / | Pairs | English | 2.8M entries | A dataset distilled from the FLAN collection, P3, and self-instruct. |
| [ultraChat](https://huggingface.co/datasets/stingning/ultrachat) | / | Dialog | English | 1.57M dialogs | A large-scale dialog dataset created using two ChatGPT instances, one acting as the user and the other generating responses. |
| [ShareGPT_Vicuna_unfiltered](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) | Vicuna | Pairs | Multilingual | 53K entries | A cleaned ShareGPT dataset. |
| [pku-saferlhf-dataset](https://github.com/PKU-Alignment/safe-rlhf#pku-saferlhf-dataset) | Beaver | RLHF | English | 10K + 1M | The first dataset of its kind, containing 10k instances with safety preferences. |
| RefGPT-Dataset [non-official link](https://github.com/sufengniu/RefGPT) | RefGPT | Pairs, Dialog | Chinese | ~50K entries | A Chinese dialog dataset aimed at improving factual correctness in LLMs (mitigating hallucination). |
| [Luotuo-QA-A CoQA-Chinese](https://huggingface.co/datasets/silk-road/Luotuo-QA-A-CoQA-Chinese) | Luotuo project | Context | Chinese | 127K QA pairs | A dataset built upon translated CoQA and augmented using the OpenAI API. |
| [Wizard-LM-Chinese instruct-evol](https://huggingface.co/datasets/silk-road/Wizard-LM-Chinese-instruct-evol) | Luotuo project | Pairs | Chinese | ~70K entries | A Chinese version of WizardLM 70K. Answers are obtained by feeding translated questions to OpenAI's GPT API. |
| [alpaca_chinese dataset](https://github.com/hikariming/alpaca_chinese_dataset) | / | Pairs | Chinese | / | GPT-4-translated Alpaca data, plus some complementary data (Chinese poetry, applications, etc.). Human-inspected. |
| [Zhihu-KOL](https://huggingface.co/datasets/wangrui6/Zhihu-KOL) | Open Assistant | Pairs | Chinese | 1.5GB | QA data from Zhihu, a well-known Chinese Q&A platform. |
| [Alpaca-GPT-4_zh-cn](https://huggingface.co/datasets/shibing624/alpaca-zh) | / | Pairs | Chinese | about 50K entries | A Chinese Alpaca-style dataset, generated by GPT-4 originally in Chinese, not translated. |
| [hh-rlhf](https://github.com/anthropics/hh-rlhf) [on Huggingface](https://huggingface.co/datasets/Anthropic/hh-rlhf) | Koala | RLHF | English | 161k pairs, 79.3MB | A pairwise dataset for training reward models in reinforcement learning, for improving language models' harmlessness and helpfulness. (See the reward-model sketch after this table.) |
| [Panther-dataset_v1](https://huggingface.co/datasets/Rardilit/Panther-dataset_v1) | Panther | Pairs | English | 377 entries | A dataset derived from hh-rlhf, rewriting it into input-output pairs. |
| [Baize Dataset](https://github.com/project-baize/baize-chatbot/tree/main/data) | Baize | Dialog | English | 100K dialogs | A dialog dataset generated by GPT-4 via self-talk. Questions and topics are collected from Quora, StackOverflow, and some medical knowledge sources. |
| [h2ogpt-fortune2000 personalized](https://huggingface.co/datasets/h2oai/h2ogpt-fortune2000-personalized) | h2ogpt | Pairs | English | 11363 entries | An instruction-finetuning dataset developed by h2oai, covering various topics. |
| [SHP](https://huggingface.co/datasets/stanfordnlp/SHP) | StableVicuna, chat-opt, SteamSHP | RLHF | English | 385K entries | An RLHF dataset that, unlike those mentioned above, uses scores plus timestamps to infer users' preferences. It covers 18 domains and was collected by Stanford. |
| [ELI5](https://huggingface.co/datasets/eli5#source-data) | MiniLM series | FT, RLHF | English | 270K entries | Questions and answers collected from Reddit, including scores. Might be used for RLHF reward-model training. |
| [WizardLM evol_instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) [V2](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) | WizardLM | Pairs | English | / | An instruction-finetuning dataset derived from Alpaca-52K, using the **evolution** method in [this paper](https://arxiv.org/pdf/2304.12244.pdf). |
| [MOSS SFT data](https://github.com/OpenLMLab/MOSS/tree/main/SFT_data) | MOSS | Pairs, Dialog | Chinese, English | 1.1M entries | A conversational dataset collected and developed by the MOSS team, with usefulness, loyalty, and harmlessness labels for every entry. |
| [ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K) | Koala, StableLM | Pairs | Multilingual | 52K | Conversations collected from ShareGPT, with a specific focus on customized creative conversation. |
| [GPT-4all Dataset](https://huggingface.co/datasets/nomic-ai/gpt4all-j-prompt-generations) | GPT-4all | Pairs | English (might have a translated version) | 400k entries | A combination of some subsets of OIG, P3, and Stackoverflow, covering topics like general QA and customized creative questions. |
| [COIG](https://huggingface.co/datasets/BAAI/COIG) | / | Pairs | Chinese, code | 200K entries | A Chinese-based dataset covering domains such as general-purpose QA, Chinese exams, and code. Its quality is checked by human annotators. |
| [RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | RedPajama | PT | Primarily English | 1.2T tokens, 5TB | A fully open pretraining dataset following LLaMA's method. |
| [OASST1](https://huggingface.co/datasets/OpenAssistant/oasst1) | OpenAssistant | Pairs, Dialog | Multilingual (English, Spanish, etc.) | 66,497 conversation trees | A large, human-written, human-annotated, high-quality conversation dataset, aimed at making LLMs generate more natural responses. |
| [Alpaca-CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) | Phoenix | Pairs, Dialog, CoT | English | / | A mixture of many datasets, such as the classic Alpaca dataset, OIG, Guanaco, and some CoT (Chain-of-Thought) datasets like FLAN-CoT. May be handy to use. |
| [Bactrian-X](https://huggingface.co/datasets/MBZUAI/Bactrian-X) | / | Pairs | Multilingual (52 languages) | 67K entries per language | A multilingual version of **Alpaca** and **Dolly-15K**. |
| [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) [zh-cn Ver](https://huggingface.co/datasets/jaja7744/dolly-15k-cn) | Dolly2.0 | Pairs | English | 15K+ entries | A dataset of **human-written** prompts and responses, featuring tasks such as open-domain question-answering, brainstorming, summarization, and more. |
| [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned) | Some Alpaca/LLaMA-like models | Pairs | English | / | A cleaned version of Alpaca, GPT_LLM, and GPTeacher. |
| [GPT-4-LLM Dataset](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) | Some Alpaca-like models | Pairs, RLHF | English, Chinese | 52K entries each for English and Chinese, plus 9K unnatural-instruction entries | NOT the dataset used by GPT-4!! It is generated by GPT-4 and other LLMs for better pairs and RLHF data, and includes instruction data as well as RLHF-style comparison data. |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | / | Pairs | English | 20k entries | A dataset containing targets generated by GPT-4; it includes many of the same seed tasks as the Alpaca dataset, plus some new tasks such as roleplay. |
| [HC3](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection) | Koala | RLHF | English, Chinese | 24322 English, 12853 Chinese | A multi-domain, human-vs-ChatGPT comparison dataset. Can be used for reward-model training or ChatGPT-detector training. |
| [Alpaca data](https://github.com/tatsu-lab/stanford_alpaca#data-release) [Download](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) | Alpaca, ChatGLM-finetune-LoRA, Koala | Dialog, Pairs | English | 52K entries, 21.4MB | A dataset generated by text-davinci-003 to improve language models' ability to follow human instructions. |
| [OIG](https://huggingface.co/datasets/laion/OIG) [OIG-small-chip2](https://huggingface.co/datasets/0-hero/OIG-small-chip2) | Pythia-Chat-Base-7B, GPT-NeoXT-Chat-Base-20B, Koala | Dialog, Pairs | English, code | 44M entries | A large conversational instruction dataset with medium- and high-quality subsets *(OIG-small-chip2)* for multi-task learning. |
| [ChatAlpaca data](https://github.com/cascip/ChatAlpaca) | / | Dialog, Pairs | English (Chinese version coming soon) | 10k entries, 39.5MB | A dataset that aims to help researchers develop models for instruction-following in multi-turn conversations. |
| [InstructionWild](https://github.com/XueFuzhao/InstructionWild) | ColossalChat | Pairs | English, Chinese | 10K entries | An Alpaca-style dataset whose seed tasks come from ChatGPT screenshots. |
| [Firefly(流萤)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) | Firefly(流萤) | Pairs | Chinese | 1.1M entries, 1.17GB | A Chinese instruction-tuning dataset with 1.1 million human-written examples across 23 tasks, but no conversation data. |
| [BELLE](https://github.com/LianjiaTech/BELLE) [0.5M version](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN) [1M version](https://huggingface.co/datasets/BelleGroup/train_1M_CN) [2M version](https://huggingface.co/datasets/BelleGroup/train_2M_CN) | BELLE series, Chunhua (春华) | Pairs | Chinese | 2.67B in total | A Chinese instruction dataset similar to *Alpaca data*, constructed by generating answers from seed tasks, but with no conversation data. |
| [GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset#guanacodataset) | Guanaco | Dialog, Pairs | English, Chinese, Japanese | 534,530 entries | A multilingual instruction dataset for enhancing language models' capabilities in various linguistic tasks, such as natural language understanding and explicit content recognition. |
| [OpenAI WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | WebGPT's reward model, Koala | RLHF | English | 19,578 pairs | The dataset used in the WebGPT paper, for training the reward model in RLHF. |
| [OpenAI Summarization Comparison](https://huggingface.co/datasets/openai/summarize_from_feedback) | Koala | RLHF | English | ~93K entries, 420MB | A dataset of human feedback for training a reward model; the reward model was then used to train a summarization model to align with human preferences. |
| [self-instruct](https://github.com/yizhongw/self-instruct) | / | Pairs | English | 82K entries | A dataset generated with the well-known [self-instruct method](https://arxiv.org/abs/2212.10560). |
| [unnatural-instructions](https://github.com/orhonovich/unnatural-instructions) | / | Pairs | English | 240,670 examples | An early attempt to use a powerful model (text-davinci-002) to generate data. |
| [xP3 (and some variants)](https://huggingface.co/datasets/bigscience/xP3) | BLOOMZ, mT0 | Pairs | Multilingual, code | 79M entries, 88GB | An instruction dataset for improving language models' generalization ability, similar to *Natural Instructions*. |
| [Flan V2](https://github.com/google-research/FLAN/tree/main/flan/v2) | / | / | English | / | A collection that compiles Flan 2021, P3, Super-Natural Instructions, and dozens more datasets into one, formatting them into a mix of zero-shot, few-shot, and chain-of-thought templates. |
| [Natural Instructions](https://instructions.apps.allenai.org/) [GitHub&Download](https://github.com/allenai/natural-instructions) | tk-instruct series | Pairs, evaluation | Multilingual | / | A benchmark with over 1,600 tasks, each with instructions and a definition, for evaluating and improving language models' multi-task generalization under natural-language instructions. |
| [CrossWOZ](https://github.com/thu-coai/CrossWOZ) | / | Dialog | English, Chinese | 6K dialogs | The dataset introduced by [this paper](https://arxiv.org/pdf/2002.11893.pdf), mainly covering tourism topics in Beijing; answers are generated automatically by rules. |
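Several of the RLHF-tagged rows above (hh-rlhf, SHP, OpenAI WebGPT, OpenAI Summarization Comparison) hold pairwise preference data for reward-model training. A minimal sketch of the standard pairwise loss, assuming a hypothetical `reward_model` callable that scores a batch of prompt-response pairs:

```python
import torch.nn.functional as F

def pairwise_rm_loss(reward_model, prompts, chosen, rejected):
    """Bradley-Terry-style loss: push r(prompt, chosen) above r(prompt, rejected)."""
    r_chosen = reward_model(prompts, chosen)      # hypothetical callable -> (batch,) scores
    r_rejected = reward_model(prompts, rejected)  # same model scores the rejected responses
    # -log sigmoid(r_c - r_r) is minimized when chosen reliably outscores rejected.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```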
#### Potential Overlaps ⚠️

Row items are treated as the subject; a deduplication sketch follows the matrix.
| | OIG | hh-rlhf | xP3 | natural instruct | AlpacaDataCleaned | GPT-4-LLM | Alpaca-CoT |
|-------------------|---------|----------|---------|------------------|-------------------|-----------|------------|
| OIG | / | contains | overlap | overlap | overlap | | overlap |
| hh-rlhf | part of | / | | | | | overlap |
| xP3 | overlap | | / | overlap | | | overlap |
| natural instruct | overlap | | overlap | / | | | overlap |
| AlpacaDataCleaned | overlap | | | | / | overlap | overlap |
| GPT-4-LLM | | | | | overlap | / | overlap |
| Alpaca-CoT | overlap | overlap | overlap | overlap | overlap | overlap | / |
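When mixing several of these datasets, the overlaps above can silently duplicate training examples. A minimal sketch of exact-duplicate detection by hashing normalized instructions (a real pipeline would add fuzzy matching, e.g. MinHash):

```python
import hashlib

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial variants collide.
    return " ".join(text.lower().split())

def dedup(entries):
    """Keep only the first occurrence of each normalized instruction."""
    seen, unique = set(), []
    for entry in entries:
        key = hashlib.sha256(normalize(entry["instruction"]).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(entry)
    return unique

# Toy example: the second entry is a trivial variant of the first.
merged = dedup([
    {"instruction": "Name three primary colors.", "output": "Red, yellow, blue."},
    {"instruction": "name three  primary colors.", "output": "Red, blue, yellow."},
])
print(len(merged))  # 1
```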
## Open Datasets for Pretraining 🟢 :atom:
| Dataset name | Used by | Type | Language | Size | Description |
|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|--------------------------------------|-------------------------|-------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
| [proof-pile](https://huggingface.co/datasets/hoskinson-center/proof-pile) | proof-GPT | PT | English, LaTeX | 13GB | A pretraining dataset similar to The Pile but with a LaTeX corpus to enhance LMs' ability in theorem proving. |
| [peS2o](https://huggingface.co/datasets/allenai/peS2o) | / | PT | English | 7.5GB | A high quality academic paper dataset for pretraining. |
| [StackOverflow post](https://huggingface.co/datasets/mikex86/stackoverflow-posts) | / | PT | / | 35GB | Raw StackOverflow data in markdown format, for pretraining. |
| [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | / | PT | Primarily English | / | A cleaned and deduplicated version of RedPajama. (See the streaming sketch after this table.) |
| [MNBVC](https://github.com/esbatmop/MNBVC) | / | PT | Chinese | / | A large-scale, continuously updated Chinese pretraining dataset. |
| [falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | tiiuae/falcon series | PT | English | / | A refined subset of CommonCrawl. |
| [CBook-150K](https://github.com/FudanNLPLAB/CBook-150K) | / | PT, building dataset | Chinese | 150K+ books | A raw Chinese books dataset. Needs a preprocessing pipeline. |
| [Common Crawl](https://commoncrawl.org/) | LLaMA (after some processing) | building datasets, PT | / | / | The most well-known raw dataset, rarely used directly. One possible preprocessing pipeline is [CCNet](https://github.com/facebookresearch/cc_net). |
| [nlp_Chinese_Corpus](https://github.com/brightmart/nlp_chinese_corpus) | / | PT, FT | Chinese | / | A Chinese pretraining corpus. Includes Wikipedia, Baidu Baike, Baidu QA, some forum QA, and news corpora. |
| [The Pile (V1)](https://pile.eleuther.ai/) | GLM (partly), LLaMA (partly), GPT-J, GPT-NeoX-20B, Cerebras-GPT 6.7B, OPT-175b | PT | Multilingual, code | 825GB | A diverse open-source language-modeling dataset consisting of 22 smaller, high-quality datasets spanning many domains and tasks. |
| C4 [Huggingface dataset](https://huggingface.co/datasets/c4) [TensorFlow dataset](https://www.tensorflow.org/datasets/catalog/c4) | Google T5 series, LLaMA | PT | English | 305GB | A colossal, cleaned version of Common Crawl's web crawl corpus. Frequently used. |
| [ROOTS](https://huggingface.co/bigscience-data) | BLOOM | PT | Multilingual, code | 1.6TB | A diverse open-source dataset consisting of sub-datasets such as Wikipedia and StackExchange, for language modeling. |
| [Pushshift reddit](https://files.pushshift.io/reddit/) [paper](https://arxiv.org/pdf/2001.08435.pdf) | OPT-175b | PT | / | / | Raw Reddit data; one possible processing pipeline is in [this paper](https://aclanthology.org/2021.eacl-main.24.pdf). |
| [Gutenberg project](https://www.gutenberg.org/policy/robot_access.html) | LLaMA | PT | Multilingual | / | A book dataset, mostly novels. Not preprocessed. |
| [CLUECorpus](https://github.com/CLUEbenchmark/CLUE) | / | PT, finetune, evaluation | Chinese | 100GB | A Chinese pretraining corpus sourced from *Common Crawl*. |
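Corpora at this scale (hundreds of GB to terabytes) are rarely downloaded whole; the `datasets` library can stream them instead. A minimal sketch with SlimPajama from the table above:

```python
from itertools import islice
from datasets import load_dataset

# Streaming avoids downloading the full corpus before reading any documents.
ds = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)

for example in islice(ds, 3):
    print(example["text"][:200])  # first 200 characters of each document
```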
## Domain-specific Datasets 🟢 💡

| Dataset name | Used by | Type | Language | Size | Description |
|----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------|------------------|-----------------------|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata) | starcoder series | PT | code | 783GB | A large pretraining dataset for improving LMs' coding ability. |
| [code_instructions_120k_alpaca](https://huggingface.co/datasets/iamtarun/code_instructions_120k_alpaca) | / | Pairs | English/code | 121,959 entries | [code_instruction](https://huggingface.co/datasets/sahil2801/code_instructions_120k) in instruction-finetune format. |
| [function-invocations-25k](https://huggingface.co/datasets/unaidedelf87777/openapi-function-invocations-25k) | some MPT variants | Pairs | English, code | 25K entries | A dataset aimed at teaching AI models how to correctly invoke [APIsGuru](https://github.com/APIs-guru/openapi-directory) functions based on natural-language prompts. |
| [TheoremQA](https://huggingface.co/datasets/wenhu/TheoremQA) | / | Pairs | English | 800 | A high-quality STEM theorem QA dataset. |
| [phi-1](https://huggingface.co/datasets/teleprint-me/phi-1) | phi-1 | Dialog | English | / | A dataset generated by using the method in [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644). It focuses on math and CS problems. |
| [FinNLP](https://github.com/AI4Finance-Foundation/FinNLP) | [FinGPT](https://github.com/AI4Finance-Foundation/FinGPT) | Raw data | English, Chinese | / | Open-source raw financial text data, including news, social media, etc. |
| [PRM800K](https://github.com/openai/prm800k) | A variant of GPT-4 | Context | English | 800K entries | A process-supervision dataset for mathematical problems. |
| [MeChat data](https://github.com/qiuhuachuan/smile) ⚠️use with care | MeChat | Dialog | Chinese | 355733 utterances | A Chinese SFT dataset for training a mental healthcare chatbot. |
| [ChatGPT-Jailbreak-Prompts](https://huggingface.co/datasets/rubend18/ChatGPT-Jailbreak-Prompts) ⚠️RISKY | / | / | English | 163KB file size | Prompts for bypassing the safety regulation of ChatGPT. Can be used for probing the harmlessness of LLMs. |
| [awesome chinese legal resources](https://github.com/pengxiao-song/awesome-chinese-legal-resources) | LaWGPT | / | Chinese | / | A collection of Chinese legal data for LLM training. |
| [Long Form](https://github.com/akoksal/LongForm) | / | Pairs | English | 23.7K entries | A dataset aimed at improving the long-text generation ability of LLMs. |
| [symbolic-instruction-tuning](https://huggingface.co/datasets/sail/symbolic-instruction-tuning) | / | Pairs | English, code | 796 | A dataset focusing on 'symbolic' tasks, such as SQL coding and mathematical computation. |
| [Safety Prompt](https://github.com/thu-coai/Safety-Prompts) | / | Evaluation only | Chinese | 100k entries | Chinese safety prompts for evaluating and improving the safety of LLMs. |
| [Tapir-Cleaned](https://huggingface.co/datasets/MattiaL/tapir-cleaned-116k) | / | Pairs | English | 116k entries | A revised version of the DAISLab dataset of IFTTT rules, thoroughly cleaned, scored, and adjusted for instruction-tuning. |
| [instructional_codesearchnet_python](https://huggingface.co/datasets/Nan-Do/instructional_codesearchnet_python) | / | Pairs | English & Python | 192MB | A template-generated instructional Python dataset created from an annotated version of the CodeSearchNet dataset, for the Open-Assistant project. |
| [finance-alpaca](https://huggingface.co/datasets/gbharti/finance-alpaca) | / | Pairs | English | 1.3K entries | An Alpaca-style dataset focused on financial topics. |

## Multimodal Datasets for VLM
| Dataset name | Used by | Type | Language | Size | Description |
|-------------------------------------------------------------------------------------|--------------------|----------------------|--------------|----------------|-------------------------------------------------------------------------------------------------------------|
| [ShareGPT4V](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) | / | image-prompt-caption | English | 1.2M instances | A set of GPT-4-Vision-powered multimodal caption data. |
| [OBELICS](https://huggingface.co/datasets/HuggingFaceM4/OBELICS) | idefics series | image-document | English | 141M documents | An open, massive, curated collection of interleaved image-text web documents. |
| [JourneyDB](https://huggingface.co/datasets/JourneyDB/JourneyDB) | / | image-prompt-caption | English | 4M instances | A large-scale dataset comprising QA, caption, and text-prompting tasks, based on Midjourney images. |
| [M3IT](https://huggingface.co/datasets/MMInstruction/M3IT) | Ying-VLM | instruction-image | Multilingual | 2.4M instances | A dataset comprising 40 tasks with 400 human-written instructions. |
| [MIMIC-IT](https://github.com/Luodian/Otter/tree/main/mimic-it) | Otter | instruction-image | Multilingual | 2.2M instances | High-quality multimodal instruction-response pairs based on images and videos. |
| [LLaVA Instruction](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) | LLaVA | instruction-image | English | 158k samples | A multimodal dataset built on the COCO dataset by prompting GPT-4 for instructions. |

## Private Datasets 🔴
| Dataset name | Used by | Type | Language | Size | Description |
|-----------------------|--------------------|------|---------------------------------------|-------|-------------------------------------------------------------------------------------------------|
| WebText(Reddit links) | GPT-2 | PT | English | / | Data crawled from Reddit and filtered for GPT-2 pretraining. |
| MassiveText | Gopher, Chinchilla | PT | 99% English, 1% other (including code) | / | / |
| WuDao(悟道) Corpora | GLM | PT | Chinese | 200GB | A large-scale Chinese corpus. Some components were possibly open-sourced originally but are no longer available. |