Expert Specialized Fine-Tuning
- Host: GitHub
- URL: https://github.com/deepseek-ai/esft
- Owner: deepseek-ai
- License: mit
- Created: 2024-07-04T09:48:48.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-09-22T15:46:39.000Z (about 2 months ago)
- Last Synced: 2024-11-08T14:15:02.202Z (12 days ago)
- Language: Python
- Size: 31.8 MB
- Stars: 143
- Watchers: 7
- Forks: 13
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE-CODE
README
# Expert-Specialized Fine-Tuning
Official repository for the paper [Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models](https://arxiv.org/abs/2407.01906) by
[Zihan Wang](https://zihanwang314.github.io), [Deli Chen](https://victorchen96.github.io/chendeli.io/), [Damai Dai](https://scholar.google.com.hk/citations?user=8b-ysf0NWVoC&hl=zh-CN), [Runxin Xu](https://runxinxu.github.io/aboutme/),
[Zhuoshu Li](http://www.idi.zju.edu.cn/member/3053.html) and
Y. Wu.

**ESFT** aims to efficiently customize Large Language Models (LLMs) with a Mixture-of-Experts (MoE) architecture by adjusting only the task-relevant parts of the model, improving efficiency and performance while using fewer resources and less storage.
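To make this concrete, here is a minimal sketch of what "adjusting only task-relevant parts" can look like, assuming a PyTorch-style MoE model whose experts live under per-layer module paths; every name below is illustrative rather than the repository's actual code.

```python
# Hypothetical sketch of the ESFT idea: freeze everything, then unfreeze only the
# experts judged relevant to the task. Module paths such as
# "layers.{i}.mlp.experts.{j}" are assumptions, not the actual DeepSeek naming.
import torch.nn as nn

def mark_task_experts_trainable(model: nn.Module, selected: dict) -> None:
    """selected maps an MoE layer index to the list of expert indices chosen for a task."""
    for param in model.parameters():
        param.requires_grad = False  # freeze the shared backbone and all other experts
    for name, param in model.named_parameters():
        for layer_idx, expert_ids in selected.items():
            if any(f"layers.{layer_idx}.mlp.experts.{eid}." in name for eid in expert_ids):
                param.requires_grad = True  # only task-relevant experts receive gradients
```

The optimizer then only needs to track the unfrozen parameters, which is where the savings in training compute and adapter storage come from.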
## News

**2024.9.20:** Glad to announce that ESFT has been accepted to the **EMNLP 2024 Main Conference**!

**2024.8.11:** We now release the **ESFT training code**! You can now try it with your own models and datasets!
## Quick Start
### Installation and Setup
```bash
git clone https://github.com/deepseek-ai/ESFT.git
cd esft
```

### Install required dependencies
```bash
pip install transformers torch safetensors accelerate
```

### Download necessary adapters
```bash
bash scripts/download_adapters.sh
```

## Key Scripts
1. **eval_multigpu.py**
This script evaluates the performance of the model on various datasets. See **scripts/eval.sh** for detailed configs and explanations.

**Usage:**
```bash
python eval_multigpu.py \
--eval_dataset=translation \
--base_model_path=deepseek-ai/ESFT-vanilla-lite \
--adapter_dir=all_models/adapters/token/translation \
--output_path=results/completions/token/translation.jsonl \
--openai_api_key=YOUR_OPENAI_API_KEY
```

2. **get_expert_scores.py**
This script calculates the scores for each expert based on the evaluation datasets (a toy sketch of one possible scoring scheme is given after this section).
**Usage:**
```bash
python scripts/expert/get_expert_scores.py \
--eval_dataset=translation \
--base_model_path=deepseek-ai/ESFT-vanilla-lite \
--output_dir=results/expert_scores/translation \
--n_sample_tokens=131072 \
--world_size=4 \
--gpus_per_rank=2
```

3. **generate_expert_config.py**
This script generates the expert configuration used to convert the MoE model so that only task-relevant experts are trained, based on the evaluation scores (see the selection sketch after this section).
**Usage:**
```bash
python scripts/expert/generate_expert_config.py \
--eval_datasets=intent,summary,law,translation \
--expert_scores_dir=results/expert_scores \
--output_dir=results/expert_configs \
--score_function=token \
--top_p=0.2 # the scoring function and top_p are hyperparameters
```

4. **train.py** and **train_ep.py**
These scripts train the model with the expert configuration generated by the previous script. The train_ep.py variant uses expert parallelism and is optimized for multi-GPU training.
**Usage:**
```bash
python train.py \
--base_model_path=deepseek-ai/ESFT-vanilla-lite \
--expert_config=results/expert_configs/intent.json \
--train_dataset=intent \
--train_config=configs/base.yaml \
--output_dir=results/checkpoints/intent
torchrun --nproc-per-node=8 train_ep.py \
--base_model_path=deepseek-ai/ESFT-vanilla-lite \
--expert_config=results/expert_configs/translation.json \
--train_dataset=translation \
--train_config=configs/base.yaml \
    --output_dir=results/checkpoints/translation
```
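For intuition about step 2 (**get_expert_scores.py**), the following toy sketch assumes expert relevance is estimated from the router's gate values over sampled task tokens; the actual scoring functions in the repository may differ, and all names are illustrative.

```python
# Toy sketch: accumulate average gate scores per expert from router outputs.
# "router_logits" with shape [n_tokens, n_experts] is an assumed interface.
import torch

def accumulate_gate_scores(router_logits: torch.Tensor,
                           score_sum: torch.Tensor,
                           token_count: int) -> int:
    gates = torch.softmax(router_logits, dim=-1)   # per-token affinity to each expert
    score_sum += gates.sum(dim=0)                  # running sum of gate mass per expert
    return token_count + router_logits.shape[0]

# After ~n_sample_tokens tokens, score_sum / token_count gives an average gate score
# per expert; experts with higher averages are treated as more relevant to the task.
```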
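For step 3 (**generate_expert_config.py**), here is a minimal sketch of a top-p style selection, under the assumption that per-layer scores are normalized and experts are kept until their cumulative share reaches `top_p`; again an illustration, not the repository's implementation.

```python
# Toy sketch: pick the highest-scoring experts until their normalized scores
# cover a cumulative fraction top_p of the total.
import numpy as np

def select_experts(scores: np.ndarray, top_p: float = 0.2) -> list:
    probs = scores / scores.sum()                  # normalize scores to shares
    order = np.argsort(probs)[::-1]                # most relevant experts first
    selected, covered = [], 0.0
    for expert_id in order:
        selected.append(int(expert_id))
        covered += probs[expert_id]
        if covered >= top_p:                       # stop once top_p of the mass is covered
            break
    return selected

# Example: select_experts(np.array([0.05, 0.40, 0.25, 0.30]), top_p=0.2) -> [1]
```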
## Contact and Support
For bug reports, feature requests, and general inquiries, please open an issue on our GitHub Issues page. Make sure to include as much detail as possible to help us address your issue quickly.

## Todo list
- [x] Update models, evaluation scripts, and expert selection scripts
- [x] Update training scripts
- [ ] More...

## Citation
If you find our code or paper useful, please cite:
```bibtex
@article{wang2024letexpertsticklast,
title={Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models},
author={Zihan Wang and Deli Chen and Damai Dai and Runxin Xu and Zhuoshu Li and Y. Wu},
year={2024},
eprint={2407.01906},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.01906},
}
```