https://github.com/amazon-science/llm-code-preference
Training and Benchmarking LLMs for Code Preference.
- Host: GitHub
- URL: https://github.com/amazon-science/llm-code-preference
- Owner: amazon-science
- License: other
- Created: 2024-10-22T21:32:25.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2024-10-23T19:43:10.000Z (8 months ago)
- Last Synced: 2024-10-23T23:47:27.431Z (8 months ago)
- Topics: code-generation, llm-evaluation, llm-training, llms-benchmarking
- Language: Python
- Homepage: https://llm-code-preference.github.io/
- Size: 156 KB
- Stars: 8
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
# Learning Code Preference via Synthetic Evolution
[Project Page](https://llm-code-preference.github.io/) • [Paper (arXiv)](https://arxiv.org/abs/2410.03837)

TL;DR • Evaluation • Training • Synthetic Data Generation • Citation • Acknowledgement

## TL;DR
How to *effectively* and *efficiently* obtain code preferences and judgements is an important yet under-studied topic!
To this end, our work provides:
* **CodeFavor**: an open recipe for training code preference models on data built entirely from scratch! (A minimal sketch follows this list.)
  * *Commit-Instruct*: code commits -> code preference pairs
  * *Critic-Evol*: code critique & revision -> code preference pairs
* **CodePrefBench**: 1364 code preference tasks covering both verifiable and human objectives:
  * *Code Correctness*
  * *Code Efficiency*
  * *Code Security*
  * *Human Preference*
* **Study**: our paper provides comprehensive studies!
  * *Human study*: quantifying the cost and performance of human code preferences, based on 18 developers
  * *Case studies*: our appendix presents case studies of model preferences regarding code correctness, efficiency, and security
  * *Controlled experiments*: the impact of data, comments, criteria, modeling choices, etc. on training preference models
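
For intuition, below is a minimal, purely illustrative sketch of the preference pair both recipes produce. The JSON field names are assumptions for illustration only and are not the repository's actual data schema.

```bash
# Illustrative only: both recipes reduce existing artifacts to a (chosen, rejected) pair.
#   Commit-Instruct: pre-commit code   -> rejected, post-commit code            -> chosen
#   Critic-Evol:     weak-model draft  -> rejected, critiqued-and-revised code  -> chosen
# Hypothetical JSONL record (field names are assumed, not the repo's schema):
cat <<'EOF'
{"instruction": "Reverse a singly linked list",
 "criteria": "correctness",
 "rejected": "<pre-commit code or weak-model draft>",
 "chosen": "<post-commit code or revised code>"}
EOF
```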
## Evaluation
### Environment
* Python 3.10 or higher is required.
```bash
conda create -n codefavor python=3.10 -y
conda activate codefavor
pip install -r requirements.txt
```

### CodePrefBench
```bash
# OpenAI server
python codefavor/evaluate.py --model-id "gpt-4o-2024-05-13" --model-type openai --concurrency 80
# Other OpenAI-compatible servers (vLLM, DeepSeek APIs, etc.)
python codefavor/evaluate.py --model-id "google/gemma-2-27b-it" --model-type openai --concurrency 80 --model-url http://localhost:8000/v1
# Claude models via Bedrock
python codefavor/evaluate.py --model-id "anthropic.claude-3-sonnet-20240229-v1:0" --model-type bedrock --concurrency 10
# Pairwise RM
python codefavor/evaluate.py --model-id ./models/mix-cls-mistral-7b-it_bs32_ep1_lr5e-6-l3-70b/checkpoint-688 --model-type pair-rm
```

* Supported `--model-type`: `huggingface`, `openai`, `bedrock`, `pair-rm`, and `google`
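
Below is a hedged example for the `huggingface` backend: the model id is an arbitrary placeholder, and the command assumes this backend follows the same flag pattern as the examples above and loads checkpoints directly, so no `--model-url` is given.

```bash
# Hypothetical invocation of the `huggingface` backend; the model id is a placeholder
# and the flags simply mirror the examples above (not guaranteed by the repository).
python codefavor/evaluate.py --model-id "mistralai/Mistral-7B-Instruct-v0.2" --model-type huggingface
```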
## Training
### Environment
```bash
git clone https://github.com/axolotl-ai-cloud/axolotl.git axolotl-dep
cd axolotl-dep
pip install torch==2.3.0
pip install packaging ninja wandb
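# Note: the flash-attn extra below builds CUDA kernels, so a CUDA toolchain and GPU environment are assumed.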
pip install -e '.[flash-attn,deepspeed]'
```

### Use existing dataset
```bash
python scripts/axolotl/prepare_data.py \
--decomposed-dataset datasets/train/editpackft-Llama-3-70B-Instruct.commit_instruct.decompose.jsonl \
--judge-type classification --both-order
python scripts/axolotl/prepare_data.py \
--decomposed-dataset datasets/train/Llama-3-8B-Instruct-SOSS.teacher.Llama-3-70B-Instruct.critic_evol.decompose.jsonl \
--judge-type classification --both-order
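# (Assumption, noted for clarity: `--judge-type classification` prepares samples for a
#  pairwise classifier-style judge, and `--both-order` presumably emits each pair in both
#  candidate orders to reduce position bias.)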
```

### Train models using Axolotl
```bash
accelerate launch -m axolotl.cli.train \
scripts/axolotl/recipe/gemma/cls-commit-instruct-from-llama3-70b.yaml \
--deepspeed scripts/axolotl/zero3.json
# or use `torchrun` if your `accelerate` is complaining
torchrun --nproc_per_node 8 -m axolotl.cli.train \
scripts/axolotl/recipe/gemma/cls-commit-instruct-from-llama3-70b.yaml \
--deepspeed scripts/axolotl/zero3.json
```

## Synthetic Data Generation
### Commit-Instruct from Scratch
```bash
# Support OpenAI and Bedrock interface
# OAI interface
python codefavor/prompt/commit_instruct.py --model-id "deepseek-chat" --model-type "openai" --concurrency 256 --dataset editpackft --model-url "https://api.deepseek.com/v1"
# Bedrock interface
python codefavor/prompt/commit_instruct.py --model-id "meta.llama3-1-405b-instruct-v1:0" --model-type "bedrock" --concurrency 10 --dataset editpackft
```

### Critic-Evol from Scratch
```bash
python codefavor/prompt/critic_evol.py --weak-dataset ./datasets/train/Llama-3-8B-Instruct-SOSS.jsonl \
--model-id "deepseek-coder" --model-url "https://api.deepseek.com/v1"
python codefavor/prompt/critic_evol.py --weak-dataset ./datasets/train/Llama-3-8B-Instruct-SOSS.jsonl \
--model-id "meta.llama3-1-405b-instruct-v1:0" --concurrency 10
```

* Pairwise training code is partially adapted from https://github.com/RLHFlow/RLHF-Reward-Modeling/tree/main/pair-pm
## Citation
```bibtex
@article{codefavor,
title = {Learning Code Preference via Synthetic Evolution},
author = {Liu, Jiawei and Nguyen, Thanh and Shang, Mingyue and Ding, Hantian and Li, Xiaopeng and Yu, Yu and Kumar, Varun and Wang, Zijian},
journal = {arXiv preprint arXiv:2410.03837},
year = {2024},
}
```

## Acknowledgement
* Our training code is partially adapted from [RLHFlow](https://github.com/RLHFlow/RLHF-Reward-Modeling).
* Our evaluation code is partially adapted from [RepoQA](https://github.com/evalplus/repoqa).
* The seed corpus used in this paper comes from [EditPackFT](https://huggingface.co/datasets/nuprl/EditPackFT) and [Self-OSS-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k).

## Research Use Only
This source code is released solely for academic and scientific reproducibility, in support of the methods and findings described in the associated publication. Pull requests are not accepted, so that the code stays exactly as it was used in the paper; interested parties are encouraged to open an issue requesting open-source community development.