# ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
[![arXiv](https://img.shields.io/badge/arXiv-2306.05301-.svg?style=flat-square)](https://arxiv.org/abs/2306.05301)
[![](https://img.shields.io/badge/huggingface-ToolAlpaca_7B-blue)](https://huggingface.co/TangQiaoYu/ToolAlpaca-7B)
[![](https://img.shields.io/badge/huggingface-ToolAlpaca_13B-blue)](https://huggingface.co/TangQiaoYu/ToolAlpaca-13B)

`ToolAlpaca` is a framework designed for learning generalized tool-use abilities in compact language models with minimal human supervision. It addresses the challenge of tool learning by generating a tool-use corpus via a multi-agent simulation environment, providing 3.9k tool-use instances from more than 400 tools.
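The corpus comes from three cooperating LLM-backed agents: a user agent that issues instructions, an assistant agent that emits thought/action/action-input steps, and a tool-executor agent that plays the API. The sketch below shows the shape of that loop under those assumptions; `UserAgent`, `AssistantAgent`, and `ToolExecutor` are hypothetical stand-ins, not classes from this repository.

```python
# Hedged sketch of the multi-agent simulation loop described above.
# UserAgent, AssistantAgent, and ToolExecutor are hypothetical,
# LLM-backed stand-ins, not classes defined in this repository.

def simulate_instance(user_agent, assistant_agent, tool_executor, api_doc, max_steps=5):
    instruction = user_agent.generate_instruction(api_doc)  # becomes "input"
    steps = []                                              # becomes "intermediate_steps"
    for _ in range(max_steps):
        thought, action, action_input = assistant_agent.step(instruction, steps, api_doc)
        if action == "Final Answer":
            # action_input carries the user-facing response ("output")
            return {"input": instruction, "output": action_input,
                    "Final Thought": thought, "intermediate_steps": steps}
        # the tool executor simulates the API's response ("observation")
        observation = tool_executor.call(action, action_input, api_doc)
        steps.append([[action, action_input, thought], observation])
    return None  # discard trajectories that never reach a final answer
```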
## Data
Dataset list:
- train_data.json: training data with 400+ APIs
- eval_simulated.json: evaluation data with 10 simulated APIs
- eval_real.json: evaluation data with 11 real APIs, some of which require authentication.

Data format:
```json
{
  "Name": "name, from public-apis",
  "Description": "description, from public-apis",
  "Category": "category, from public-apis",
  "Introduction": "introduction, generated by LLM",
  "Functions": "NLDocumentation in paper v1, generated by LLM",
  "Documentation": "str(json), OpenAPI Specification documentation, generated by LLM",
  "NLDocumentation": "natural language documentation, similar to Functions, converted from Documentation",
  "Function_Description": "each function's description in NLDocumentation",
  "Function_Projection": "function to HTTP request method",
  "Instructions": "instructions, generated by LLM",
  "Instances": [
    {
      "input": "user's initial instruction, from user agent",
      "output": "final output, from assistant agent",
      "Final Thought": "the final thought before output, from assistant agent",
      "intermediate_steps": [
        [
          [
            "action, from assistant agent",
            "action input, str(json), from assistant agent",
            "thought + action + action input, assistant agent's output"
          ],
          "observation, from [user agent, type check python code, tool executor agent]"
        ]
      ]
    }
  ]
}
```
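
To make the format concrete, here is a short sketch that loads the training split and walks one trajectory. It assumes the files sit under `data/` and parse exactly as documented above.

```python
import json

# Load the training split and walk each API's trajectories.
# Assumes the file lives at data/train_data.json and matches the
# format documented above.
with open("data/train_data.json", encoding="utf-8") as f:
    apis = json.load(f)

for api in apis:
    print(api["Name"], "-", api["Category"])
    for instance in api.get("Instances", []):
        print("  instruction:", instance["input"])
        for (action, action_input, thought), observation in instance["intermediate_steps"]:
            print("    action:", action, "| observation:", observation[:60])
        print("  output:", instance["output"])
```

## Dataset Generation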
- Clone this repository and install packages
```bash
git clone [email protected]:tangqiaoyu/ToolAlpaca.git
cd ToolAlpaca
pip install -r requirements.txt
```
- download public-api data
```bash
python tool_maker/preprocess_public_apis.py -api data/public_apis.json
```
- toolset construction
```bash
export PYTHONPATH=$PYTHONPATH:$(pwd)
export OPENAI_API_KEY=""

python tool_maker/get_elements.py -api data/public_apis.json -out ./data
python tool_maker/natural_language_documentation.py -api ./data/api_data.json
```
- tool-use instance generation
```bash
python instance_generation/instruction.py -api ./data/api_data.json -out ./data
python instance_generation/simulator.py -api ./data/api_data.json
python instance_generation/generation.py -api ./data/api_data.json -out ./data --use_cache
```
## Train
To train ToolAlpaca, we first create a prompt that organizes the dataset into a format the standard SFT training code can read, similar to what is done in `build_dataset.py`. Afterward, we proceed with standard SFT training, optimizing the loss only on the `thought`, `action`, and `action input` tokens (see the masking sketch after the command below).

```bash
deepspeed --num_gpus=2 --master_port=12345 train.py \
--deepspeed ${deepspeed config path} \
--model_name_or_path ${path to base model like vicuna-7b} \
--data_path ${data path} \
--bf16 True \
--output_dir outputs/vicuna-7b-toolalpaca/ \
--num_train_epochs 3 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "epoch" \
--save_total_limit 10 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True
```

You can find our models on the Hugging Face Hub: [ToolAlpaca-7B](https://huggingface.co/TangQiaoYu/ToolAlpaca-7B), [ToolAlpaca-13B](https://huggingface.co/TangQiaoYu/ToolAlpaca-13B).
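Restricting the loss to `thought`, `action`, and `action input` amounts to masking every other token's label before the cross-entropy loss. Below is a minimal sketch of that masking using the Hugging Face convention that label `-100` is ignored; it illustrates the idea and is not `build_dataset.py` itself.

```python
import torch

IGNORE_INDEX = -100  # CrossEntropyLoss skips labels with this value

def mask_labels(input_ids: torch.Tensor, train_spans: list) -> torch.Tensor:
    """Keep labels only inside train_spans: (start, end) token ranges that
    cover the assistant's thought / action / action-input segments."""
    labels = torch.full_like(input_ids, IGNORE_INDEX)
    for start, end in train_spans:
        labels[start:end] = input_ids[start:end]  # loss is computed only here
    return labels
```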
## Evaluation
- for simulated APIs:
```bash
# start the api simulator
python instance_generation/simulator.py -api ./data/eval_simulated.json

# get LLM outputs
python instance_generation/generation.py \
-api ./data/eval_simulated.json \
-out ./eval \
-llm TangQiaoYu/ToolAlpaca-13B \
--agent_prompt test_v1 \
--use_cache

# evaluation with an LLM like GPT-4
python evaluation.py -api ${api_data_path} -out ./eval
```
- for real APIs:
You should register on the websites and get the API keys.

```bash
python instance_generation/generation.py \
-api ./data/eval_real.json \
-out ./data \
-llm TangQiaoYu/ToolAlpaca-13B \
--agent_prompt test_v1 \
--real

python evaluation.py -api ${api_data_path} -out ./eval
```
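
`evaluation.py` scores the generated trajectories with a strong LLM judge. As a rough illustration, a single judging call with the OpenAI Python client could look like the sketch below; the prompt wording and binary verdict are assumptions for illustration, not the script's actual rubric.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(instruction: str, trajectory: str, final_answer: str) -> str:
    """Ask GPT-4 whether a tool-use trajectory solves the instruction.
    Prompt and verdict format are illustrative assumptions."""
    prompt = (
        "Given the user instruction, the agent's tool calls, and its final "
        "answer, reply with exactly 'correct' or 'incorrect'.\n\n"
        f"Instruction: {instruction}\nTool calls: {trajectory}\nAnswer: {final_answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()
```

## Citation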
If you find our work helpful, please cite it as:
```bibtex
@misc{tang2023toolalpaca,
title={ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases},
author={Qiaoyu Tang and Ziliang Deng and Hongyu Lin and Xianpei Han and Qiao Liang and Le Sun},
year={2023},
eprint={2306.05301},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```