Llama Trainer Utility
https://github.com/riccorl/llama-trainer
- Host: GitHub
- URL: https://github.com/riccorl/llama-trainer
- Owner: Riccorl
- Created: 2023-07-28T10:07:02.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-09-28T08:47:12.000Z (about 2 years ago)
- Last Synced: 2025-06-12T23:02:19.932Z (4 months ago)
- Topics: huggingface, llama, llm, llm-inference, llm-training, llms, transformer
- Language: Python
- Homepage:
- Size: 19.5 KB
- Stars: 9
- Watchers: 1
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
# 🦙 Llama Trainer Utility
[![Python publish PyPI](https://github.com/Riccorl/llama-trainer/actions/workflows/python-publish-pypi.yml/badge.svg)](https://github.com/Riccorl/llama-trainer/actions/workflows/python-publish-pypi.yml)
A "just few lines of code" utility for fine-tuning (not only) Llama models.
To install:
```bash
pip install llama-trainer
```

### Training and Inference
#### Training
```python
from llama_trainer import LlamaTrainer
from datasets import load_dataset

dataset = load_dataset("timdettmers/openassistant-guanaco")
# define how each sample is formatted into an instruction string
def to_instruction_fn(sample):
    return sample["text"]


formatting_func = to_instruction_fn
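
# Note: the guanaco "text" column already contains the full conversation, so
# the function above can simply return it. For a dataset with separate
# columns, the formatting function could build the prompt itself. The column
# names below ("instruction", "response") are hypothetical and not part of
# the dataset used in this example:
def to_instruction_fn_from_columns(sample):
    return f"### Human: {sample['instruction']}### Assistant: {sample['response']}"
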
output_dir = "llama-2-7b-hf-finetune"
llama_trainer = LlamaTrainer(
model_name="meta-llama/Llama-2-7b-hf",
dataset=dataset,
formatting_func=formatting_func,
output_dir=output_dir
)
llama_trainer.train()
```

#### Inference
```python
from llama_trainer import LlamaInfer
import transformers as tr

llama_infer = LlamaInfer(output_dir)
prompt = "### Human: Give me some output!### Assistant:"
print(llama_infer(prompt))
```
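
As a small usage sketch, you can keep inference prompts in the same `### Human: ...### Assistant:` format used during fine-tuning. The `build_prompt` helper below is hypothetical and not part of `llama-trainer`; only `LlamaInfer` comes from the package.

```python
from llama_trainer import LlamaInfer

# Load the fine-tuned model from the training output directory.
llama_infer = LlamaInfer("llama-2-7b-hf-finetune")


# Hypothetical helper (not provided by llama-trainer) that wraps a question
# in the same prompt format used in the snippet above.
def build_prompt(question: str) -> str:
    return f"### Human: {question}### Assistant:"


print(llama_infer(build_prompt("Summarize what this library does.")))
```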