Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/sahil280114/codealpaca
- Host: GitHub
- URL: https://github.com/sahil280114/codealpaca
- Owner: sahil280114
- License: apache-2.0
- Created: 2023-03-22T11:28:54.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2023-05-12T17:41:28.000Z (over 1 year ago)
- Last Synced: 2024-12-01T08:08:52.214Z (11 days ago)
- Language: Python
- Size: 8.91 MB
- Stars: 1,433
- Watchers: 21
- Forks: 107
- Open Issues: 17
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-instruction-datasets - Code Alpaca - davinci-003 | [download](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/CodeAlpaca) | (Statistics)
- awesome-ai-coding - CodeAlpaca
- Awesome-LLMs-Datasets - Github
- awesome-chatgpt-dataset - Code Alpaca - | (Dataset Detail)
- StarryDivineSky - sahil280114/codealpaca - An instruction-following LLaMA model. Includes the 20K data used for fine-tuning the model. (A01_Text Generation_Text Dialogue / Large language dialogue models and data)
- Awesome-Code-LLM - Repo
- Awesome-LLM-Synthetic-Data - **Code Alpaca: An Instruction-following LLaMA Model trained on code generation instructions**
README
# Code Alpaca: An Instruction-following LLaMA Model trained on code generation instructions
[![License](https://img.shields.io/badge/License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/release/python-390/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

This is the repo for the Code Alpaca project, which aims to build and share an instruction-following LLaMA model for code generation. This repo is fully based on [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) and only changes the data used for training. The training approach is the same.
The repo contains:
- The [20K data](#data-release) used for fine-tuning the model
- The code for [generating the data](#data-generation-process)
- The code for [fine-tuning the model](#fine-tuning)

A demo for the model can be found at [https://code-alpaca-demo.vercel.app/](https://code-alpaca-demo.vercel.app/).
## Overview
The Code Alpaca models are fine-tuned from 7B and 13B LLaMA models on 20K instruction-following data generated by the techniques in the Self-Instruct [1] paper, with some modifications that we discuss in the next section.
Evals are still a todo. The model is not fine-tuned to be safe and harmless, so use it with caution.
The current release contains the data generation procedure, the dataset, and the training code. Model weights are not part of the release for now, to respect the OpenAI terms of service and the LLaMA license.
[1]: Self-Instruct: Aligning Language Model with Self Generated Instructions. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi. https://arxiv.org/abs/2212.10560
## Data Release
[`data/code_alpaca_20k.json`](./data/code_alpaca_20k.json) contains 20K instruction-following data used for fine-tuning the Code Alpaca model.
This JSON file is a list of dictionaries, each dictionary contains the following fields:
- `instruction`: `str`, describes the task the model should perform. Each of the 20K instructions is unique.
- `input`: `str`, optional context or input for the task. For example, when the instruction is "Amend the following SQL query to select distinct elements", the input is the SQL query. Around 40% of the examples have an input.
- `output`: `str`, the answer to the instruction as generated by `text-davinci-003`.

We used the following prompts for fine-tuning the model:
- for examples with a non-empty input field:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input}
### Response:
```
- for examples with an empty input field:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
During inference (e.g. for the web demo), we use the user instruction with an empty input field (the second prompt).
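For illustration, here is a minimal sketch of loading the released data and applying these templates. The file path and the templates come from this repo; `build_prompt` is just an example helper.

```python
import json

# Prompt templates from the Data Release section above.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

# The 20K instruction/input/output records released with this repo.
with open("data/code_alpaca_20k.json") as f:
    examples = json.load(f)

def build_prompt(example: dict) -> str:
    """Pick the template based on whether the example has a non-empty input field."""
    if example.get("input", "").strip():
        return PROMPT_WITH_INPUT.format(**example)
    return PROMPT_NO_INPUT.format(instruction=example["instruction"])

print(build_prompt(examples[0]))
print(examples[0]["output"])
```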
## Data Generation Process

### Running the code
1. Set environment variables `OPENAI_API_KEY` to your OpenAI API key.
2. Install the dependencies with `pip install -r requirements.txt`.
3. Run `python -m generate_instruction generate_instruction_following_data` to generate the data.

The data generation pipeline has minor changes from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca):
- Modified prompt to focus on code generation/editing/optimization tasks instead of general tasks.
- Modified seed tasks to only be related to code generation.

This produced an instruction-following dataset with 20K examples at a much lower cost (less than $200). A smaller 2K-sample dataset, which was used to de-risk the approach and check the quality of the model, is also included.
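Conceptually, the loop follows Self-Instruct: sample a few code-focused seed tasks, ask the model for new instructions, and keep only the sufficiently novel ones. The sketch below is illustrative only; `complete()` is a hypothetical stand-in for the `text-davinci-003` call, the seed tasks here are made up, and the real prompts and filtering live in `generate_instruction.py`.

```python
import difflib
import random

def complete(prompt: str) -> str:
    # Hypothetical stand-in for the OpenAI completion call; a real implementation
    # would send `prompt` to text-davinci-003 and return its text output.
    return "- Write a regex that matches ISO-8601 dates.\n- Refactor this loop into a list comprehension."

def is_novel(candidate: str, pool: list[str], threshold: float = 0.7) -> bool:
    """Reject instructions that are too similar to anything already collected."""
    return all(
        difflib.SequenceMatcher(None, candidate, seen).ratio() < threshold
        for seen in pool
    )

# Made-up seed tasks; the real ones only cover code generation/editing/optimization.
seed_tasks = [
    "Write a Python function that reverses a string.",
    "Optimize the following SQL query to avoid a full table scan.",
    "Edit this JavaScript snippet to use async/await.",
]

generated: list[str] = []
for _ in range(5):  # a few rounds for the sketch; the real pipeline targets 20K tasks
    demos = random.sample(seed_tasks + generated, k=3)
    prompt = (
        "Come up with new programming tasks (code generation, editing, optimization). "
        "Here are some examples:\n- " + "\n- ".join(demos) + "\n-"
    )
    for line in complete(prompt).splitlines():
        candidate = line.lstrip("- ").strip()
        if candidate and is_novel(candidate, seed_tasks + generated):
            generated.append(candidate)

print(generated)
```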
## Fine-tuning
We fine-tuned the models using standard Hugging Face training code and DeepSpeed with the following hyperparameters:

| Hyperparameter | Value |
|----------------|-------|
| Learning rate  | 2e-5  |
| Epochs         | 3     |
| Max length     | 512   |
| Weight decay   | 0     |

Given that Hugging Face hadn't officially supported the LLaMA models at the time, we fine-tuned LLaMA with Hugging Face's transformers library by installing it from a particular fork (i.e. this [PR](https://github.com/huggingface/transformers/pull/21955) to be merged).
The hash of the specific commit we installed was `68d640f7c368bcaaaecfc678f11908ebbd3d6176`.

The code runs on 8x A100 80GB GPUs, but it can also run on 8x A100 40GB or 4x A100 with a lower batch size and more gradient accumulation steps. To get the GPUs, I suggest using [Lambda Labs](https://cloud.lambdalabs.com/login?redirect_to=/instances?), which has the best pricing for the best hardware.
To reproduce the fine-tuning runs for LLaMA, first install the requirements
```bash
pip install -r requirements.txt
```
Then, install the particular fork of Hugging Face's transformers library.
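One way to pin that commit is to install transformers from source at the recorded hash. This is a sketch; at the time, the commit lived on the PR branch, so you may need to clone the PR author's fork rather than the main repository if the hash is not reachable there.

```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout 68d640f7c368bcaaaecfc678f11908ebbd3d6176
pip install -e .
```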
Below is a command that fine-tunes LLaMA-7B with our dataset on a machine with 4 A100 80G GPUs using DeepSpeed.
Replace `<your_random_port>` with a port of your own, `<your_path_to_hf_converted_llama_ckpt_and_tokenizer>` with the
path to your converted checkpoint and tokenizer (following the instructions in the PR), and `<your_output_dir>` with where you want to store your outputs.

```bash
torchrun --nproc_per_node=8 --master_port=<your_random_port> train.py \
    --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \
    --data_path ./data/code_alpaca_20k.json \
    --fp16 True \
    --output_dir <your_output_dir> \
    --num_train_epochs 3 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 4 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 500 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --deepspeed ds_config.json \
    --tf32 False
```

Note that the given training script is meant to be simple and easy to use, and is not particularly optimized.
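For reference, the hyperparameters above map onto standard `transformers.TrainingArguments`. The sketch below is illustrative only, not the repo's actual `train.py`; paths in angle brackets are placeholders.

```python
from transformers import TrainingArguments

# Mirrors the table and the torchrun flags above. "Max length" (512) is applied
# at tokenization time rather than here. Angle-bracket values are placeholders.
training_args = TrainingArguments(
    output_dir="<your_output_dir>",
    num_train_epochs=3,               # Epochs
    learning_rate=2e-5,               # Learning rate
    weight_decay=0.0,                 # Weight decay
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    evaluation_strategy="no",
    save_strategy="steps",
    save_steps=500,
    save_total_limit=1,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    logging_steps=1,
    fp16=True,
    deepspeed="ds_config.json",
)
```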
For convenience, I have included [`convert_to_hf.py`](./convert_to_hf.py) to convert LLaMA checkpoints to Hugging Face compatible checkpoints. (This file is taken from the Hugging Face transformers repo.)
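If the script matches the upstream transformers conversion script it was copied from, usage looks roughly like the following; the flags are assumptions, so check the file itself for the exact arguments.

```bash
python convert_to_hf.py \
    --input_dir <path_to_original_llama_weights> \
    --model_size 7B \
    --output_dir <your_path_to_hf_converted_llama_ckpt_and_tokenizer>
```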
### Citation
Cite this repo if you want to, or don't, both are fine.
```
@misc{codealpaca,
author = {Sahil Chaudhary},
title = {Code Alpaca: An Instruction-following LLaMA model for code generation},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/sahil280114/codealpaca}},
}
```

Naturally, you should also cite the original LLaMA paper [1], the Self-Instruct paper [2], and the [Stanford Alpaca repo](https://github.com/tatsu-lab/stanford_alpaca).