https://github.com/liutiedong/goat
a Fine-tuned LLaMA that is Good at Arithmetic Tasks
ai llms nlp-datasets
- Host: GitHub
- URL: https://github.com/liutiedong/goat
- Owner: liutiedong
- Created: 2023-05-18T19:57:36.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-09-15T14:44:32.000Z (over 1 year ago)
- Last Synced: 2024-11-09T12:39:44.271Z (6 months ago)
- Topics: ai, llms, nlp-datasets
- Language: Jupyter Notebook
- Homepage:
- Size: 863 KB
- Stars: 174
- Watchers: 3
- Forks: 16
- Open Issues: 5
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- StarryDivineSky - liutiedong/goat
- awesome-llm-and-aigc - Goat: "Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks". (**[arXiv 2023](https://arxiv.org/abs/2305.14201)**). WeChat account "AINLPer": "[Nearly perfect! The strongest arithmetic language model: Goat-7B beats GPT-4 and surpasses PaLM-540B! Trainable with 24 GB](https://mp.weixin.qq.com/s/_haINkHNV4bMszm9F41yXA)". (Applications / Prompts (Magic))
README
# 🐐 Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks
[Paper](https://arxiv.org/abs/2305.14201) | [Adapter Weights](https://huggingface.co/tiedong/goat-lora-7b) | [Dataset](https://huggingface.co/datasets/tiedong/goat) | [Colab](https://colab.research.google.com/drive/15tiSi_XvSpFC-M0c45lJXOwDPgjDSrK9?usp=sharing)
### Demo
1. Addition
2. Subtraction
3. Multiplication
4. Division
### Local Setup
```bash
git clone https://github.com/liutiedong/goat.git
cd goat
pip install -r requirements.txt
```

### Dataset (`dataset.ipynb`)
Run `dataset.ipynb` to generate the `dataset.json` file, or download it from the Hugging Face dataset `tiedong/goat` (https://huggingface.co/datasets/tiedong/goat). Each instance in the dataset contains:
- __instruction__: a human instruction created by inserting an arithmetic expression into a randomly chosen template and adding some natural language noise. It serves as the prompt fed to the model for instruction fine-tuning.
- __input__: a randomly generated arithmetic expression. It can be used in place of `instruction` during training when we want to focus on arithmetic and avoid the influence of natural language.
- __output__: the target output for the model to learn. It contains CoTs for multi-digit multiplication and division.
- __answer__: the direct numerical answer to the arithmetic task. It can be used to test the learnability of various sub-tasks.

Example:
```json
{
"instruction": "What is 94140209+73?",
"input": "94140209 + 73",
"output": "94140209 + 73 = 94140282",
"answer": "94140282"
},
{
"instruction": "Compute 8432862 - 659016175?",
"input": "8432862 - 659016175",
"output": "8432862 - 659016175 = -650583313",
"answer": "-650583313"
},
{
"instruction": "Calculate 37 times 3066",
"input": "37 * 3066",
"output": "37 * 3066 = 3066 * (30 + 7) = 3066 * 30 + 3066 * 7 = 91980 + 21462 = 113442",
"answer": "113442"
},
{
"instruction": "Determine the numerical value of 5697/47.",
"input": "5697 / 47",
"output": "5697 - 47 * 100 = 5697 - 4700 = 997\n997 - 47 * 20 = 997 - 940 = 57\n57 - 47 * 1 = 57 - 47 = 10\nTherefore, 5697 / 47 = 121 R 10",
"answer": "121 R 10"
}
```
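To inspect the data programmatically, the generated file can be loaded directly (a minimal sketch; it assumes `dataset.json` is a single JSON array of records, as the excerpt above suggests):
```python
import json

# Load the file produced by dataset.ipynb (assumed to be one JSON array of records).
with open("dataset.json") as f:
    data = json.load(f)

print(len(data))
print(data[0]["instruction"], "->", data[0]["answer"])

# Alternatively, pull the published copy from the Hugging Face Hub
# (requires the `datasets` package; the exact split/config may differ):
# from datasets import load_dataset
# goat = load_dataset("tiedong/goat")
```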
Feel free to modify `dataset.ipynb` to create your own data. It is good to start with a simple sub-task, say 8-digit by 8-digit addition:
```python
import random

# 100,000 random pairs of 8-digit operands for an addition-only dataset
pairs = [(random.randint(10**7, 10**8), random.randint(10**7, 10**8)) for k in range(100000)]
```
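Such pairs can then be turned into records in the format shown above. As a rough illustration (the template strings and output path below are placeholders, not taken from the repo's `goat.json`):
```python
import json
import random

# Illustrative templates only; the real ones come from templates/template.txt via goat.json.
templates = ["What is {a}+{b}?", "Compute {a} + {b}.", "Calculate the sum of {a} and {b}."]

pairs = [(random.randint(10**7, 10**8), random.randint(10**7, 10**8)) for _ in range(100000)]

records = []
for a, b in pairs:
    answer = a + b
    records.append({
        "instruction": random.choice(templates).format(a=a, b=b),
        "input": f"{a} + {b}",
        "output": f"{a} + {b} = {answer}",  # simple addition needs no chain of thought
        "answer": str(answer),
    })

with open("dataset.json", "w") as f:
    json.dump(records, f, indent=4)
```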
Fine-tuning on 100,000 such training samples takes less than 2 hours on an A10 GPU to reach near-perfect accuracy.

### Template (`goat.json`)
`template.txt` contains several hundred natural language instructions. Instructions that are used more commonly are duplicated more times to increase their chance of being sampled; instructions generated using ChatGPT are listed at the end without duplication. Note that some instructions may not be coherent or grammatically correct after an arithmetic expression is inserted, but this should not be a problem as long as we do not train on the input.

To add more instructions for training, put the new instructions in `template.txt` under the `templates` folder, then run `python convert_txt_to_json.py` to convert them into the `goat.json` file, which `dataset.ipynb` uses to generate the dataset for fine-tuning.
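Conceptually, the conversion step amounts to something like the following (a sketch of what `convert_txt_to_json.py` presumably does; the actual script and the layout of `goat.json` may differ):
```python
import json

# Read one natural-language instruction template per line and store them as a JSON list.
with open("templates/template.txt", encoding="utf-8") as f:
    templates = [line.strip() for line in f if line.strip()]

with open("goat.json", "w", encoding="utf-8") as f:
    json.dump(templates, f, ensure_ascii=False, indent=2)
```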
### Training (`finetune.py`)
Example usage:
```bash
python finetune.py \
--base_model 'decapoda-research/llama-7b-hf' \
--data_path 'dataset.json' \
--output_dir './weights'
```

We train our model using the following command:
```bash
python finetune.py \
--base_model 'decapoda-research/llama-7b-hf' \
--data_path 'dataset.json' \
--output_dir './weights' \
--batch_size 128 \
--micro_batch_size 16 \
--num_epochs 1 \
--learning_rate 1e-4 \
--cutoff_len 512 \
--val_set_size 0 \
--lora_r 64 \
--lora_alpha 64 \
--lora_dropout 0.05 \
--lora_target_modules '[q_proj,v_proj,k_proj,o_proj]'
```
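For reference, the LoRA flags above correspond to a standard PEFT adapter configuration. A rough sketch (assuming the `peft` package; this is not the code in `finetune.py`):
```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

# Rough equivalent of the LoRA flags above (finetune.py may construct this differently).
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```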
### Inference (`app.py`)
This script downloads the LoRA weights from Hugging Face (`tiedong/goat-lora-7b`) and launches a Gradio interface for inference.
Example usage:
```bash
python app.py \
--base_model 'decapoda-research/llama-7b-hf' \
--lora_weights 'tiedong/goat-lora-7b'
```

Alternatively, host your own Goat Gradio demo directly in Colab with [this notebook](https://colab.research.google.com/drive/15tiSi_XvSpFC-M0c45lJXOwDPgjDSrK9?usp=sharing).
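If you prefer to call the model from Python rather than through the Gradio app, loading the adapter looks roughly like this (a minimal sketch assuming the `transformers` and `peft` packages; `app.py` may wrap the question in its own prompt template):
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "tiedong/goat-lora-7b")  # Goat LoRA adapter
model.eval()

# Raw question for illustration; app.py may use an instruction-style prompt template.
prompt = "What is 397 * 4429?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```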
### Citation
```
@article{liu2023goat,
title={Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks},
author={Liu, Tiedong and Low, Bryan Kian Hsiang},
journal={arXiv preprint arXiv:2305.14201},
year={2023}
}
```

### Acknowledgements
Our implementation is mainly based on [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).