https://github.com/yizhongw/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
general-purpose-model instruction-tuning language-model
- Host: GitHub
- URL: https://github.com/yizhongw/self-instruct
- Owner: yizhongw
- License: apache-2.0
- Created: 2022-12-20T08:52:51.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-03-27T18:18:51.000Z (over 1 year ago)
- Last Synced: 2024-11-29T00:04:09.903Z (14 days ago)
- Topics: general-purpose-model, instruction-tuning, language-model
- Language: Python
- Homepage:
- Size: 58.6 MB
- Stars: 4,180
- Watchers: 58
- Forks: 488
- Open Issues: 12
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-LLM - https://github.com/yizhongw/self-instruct
- awesome-ChatGPT - yizhongw/self-instruct: Aligning pretrained language models with instruction data generated by themselves. (github.com)
- awesome-ChatGPT-repositories - self-instruct - Aligning pretrained language models with instruction data generated by themselves. (NLP)
- awesome-prompt-engineering - Self-Instruct - a tuning method that introduces concepts such as a task pool and quality filtering | (Instruct/Prompt Tuning Data)
- StarryDivineSky - yizhongw/self-instruct
- awesome-instruction-datasets - Dataset Link
README
# Self-Instruct: Aligning LM with Self Generated Instructions
This repository contains code and data for the [Self-Instruct paper](https://arxiv.org/abs/2212.10560), a method for aligning pretrained language models with instructions.
## Introduction
Self-Instruct is a framework that helps language models improve their ability to follow natural language instructions. It does this by using the model's own generations to create a large collection of instructional data. With Self-Instruct, it is possible to improve the instruction-following capabilities of language models without relying on extensive manual annotation.
### Background
In recent years, there has been a growing interest in building models that can follow natural language instructions to perform a wide range of tasks. These models, known as "instruction-tuned" language models, have demonstrated the ability to generalize to new tasks. However, their performance is heavily dependent on the quality and quantity of the human-written instruction data used to train them, which can be limited in diversity and creativity. To overcome these limitations, it is important to develop alternative approaches for supervising instruction-tuned models and improving their instruction-following capabilities.
### How does Self-Instruct work?
The Self-Instruct process is an iterative bootstrapping algorithm that starts with a seed set of manually-written instructions and uses them to prompt the language model to generate new instructions and corresponding input-output instances. These generations are then filtered to remove low-quality or similar ones, and the resulting data is added back to the task pool. This process can be repeated multiple times, resulting in a large collection of instructional data that can be used to fine-tune the language model to follow instructions more effectively.
Here is an overview of Self-Instruct:
![The pipeline for generating instruction data from a language model itself.](docs/pipeline.JPG)
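The iterative loop described above can be sketched in Python. The function names, sampling sizes, and the injected generation and similarity callbacks below are illustrative assumptions, not the repository's actual implementation (which prompts GPT3 via the OpenAI API and filters near-duplicates with a ROUGE-L overlap threshold):

```python
import random

def bootstrap_instructions(seed_tasks, generate_fn, is_similar_fn,
                           num_rounds=1, examples_per_prompt=4):
    """Illustrative sketch of the Self-Instruct bootstrapping loop.

    seed_tasks: list of manually written instruction strings.
    generate_fn: callable taking a list of in-context example instructions
        and returning newly generated instruction strings (in the paper,
        a call to GPT3; injected here so the sketch is self-contained).
    is_similar_fn: callable(new, existing) -> bool used to drop
        near-duplicate generations.
    """
    task_pool = list(seed_tasks)
    for _ in range(num_rounds):
        # Sample a few in-context examples from the current task pool.
        examples = random.sample(task_pool,
                                 min(examples_per_prompt, len(task_pool)))
        for candidate in generate_fn(examples):
            # Filter out generations too similar to anything in the pool.
            if any(is_similar_fn(candidate, t) for t in task_pool):
                continue
            # Surviving generations are added back to the task pool.
            task_pool.append(candidate)
    return task_pool
```

In the actual pipeline, each surviving instruction is also classified as a classification task or not, and input-output instances are generated for it before the data is added back to the pool.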
## Usage
\* **This work is still in progress. We may update the code and data as we make progress, so please be mindful of versioning.**
### Instruction-tuning using our Self-Instruct data
We release a dataset that contains 52K instructions, paired with 82K instance inputs and outputs. This instruction data can be used to conduct instruction-tuning for language models and make them follow instructions better. The entire model-generated dataset can be accessed in `data/gpt3-generations/batch_221203/all_instances_82K.jsonl`. This data (+ the 175 seed tasks), reformatted into the clean GPT3-finetuning format (prompt + completion), is located in `data/finetuning/self_instruct_221203`. You can use the script in [`./scripts/finetune_gpt3.sh`](./scripts/finetune_gpt3.sh) to finetune GPT3 on this data.
**Note**: This data is generated by a language model (GPT3) and inevitably contains some errors or biases. We analyzed the data quality on 200 random instructions in our paper and found that 46% of the data points may have problems. We encourage users to use this data with caution and to propose new methods to filter or improve it.
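As a rough sketch of how model-generated instances can be reformatted into the prompt + completion finetuning format, assuming JSON Lines records with `instruction`, `input`, and `output` fields (an assumed schema and prompt template; check the released files for the exact format):

```python
import json

def to_prompt_completion(record):
    """Convert one generated instance into a GPT3 finetuning example.

    The field names and the prompt template here are illustrative
    assumptions, not necessarily the exact format used in
    data/finetuning/self_instruct_221203.
    """
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n\nInput: " + record["input"]
    prompt += "\n\nOutput:"
    # A leading space in the completion is conventional for GPT3 finetuning.
    return {"prompt": prompt, "completion": " " + record["output"]}

def load_jsonl(path):
    """Read one JSON object per line from a .jsonl file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```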
### Evaluating instruction-following capabilities
We also release a new set of 252 expert-written tasks and their instructions motivated by user-oriented applications (rather than well-studied NLP tasks). This data is used in the human evaluation section of [the self-instruct paper](https://arxiv.org/abs/2212.10560). Please refer to [the human evaluation README](human_eval/README.md) for more details.
### Generating Self-Instruct data from scratch
To generate Self-Instruct data using your own seed tasks or other models, we open-source our scripts for the entire pipeline here. Our current code is only tested on the GPT3 model accessible via the [OpenAI API](https://beta.openai.com/docs/models/gpt-3).
Here are the scripts for generating the data:
```bash
# 1. Generate instructions from the seed tasks
./scripts/generate_instructions.sh

# 2. Identify whether the instruction represents a classification task or not
./scripts/is_clf_or_not.sh

# 3. Generate instances for each instruction
./scripts/generate_instances.sh

# 4. Filtering, processing, and reformatting
./scripts/prepare_for_finetuning.sh
```

## Citation
If you use the Self-Instruct framework or data, feel free to cite us.
```bibtex
@misc{selfinstruct,
title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
journal={arXiv preprint arXiv:2212.10560},
year={2022}
}
```