https://github.com/rahulunair/tiny_llm_finetuner

LLM finetuning on Intel XPUs - LoRA on intel discrete GPUs
gpu intel intel-arc xpu


### Tiny LLM Finetuner for Intel dGPUs

#### Finetuning openLLaMA on Intel discrete GPUs

A finetuner[1](#f1) [2](#f2) for LLMs on Intel XPU devices, with which you can finetune the openLLaMA-3b model to sound like your favorite book.

![image](https://github.com/rahulunair/tiny_llm_finetuning/assets/786476/f060f4f4-f85e-42e5-82c7-fb95fad932fd)
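The finetuner uses LoRA, which freezes the base model's weights and learns only a small low-rank update. The following is a minimal numeric sketch of that idea in plain Python (illustration only, not the project's training code; all names here are made up):

```python
# LoRA sketch: instead of updating a full weight matrix W, learn a
# low-rank update B @ A (rank r) scaled by alpha / r, so the
# effective weight is W + (alpha / r) * (B @ A).

def matmul(a, b):
    # naive matrix multiply for small lists-of-lists
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_effective_weight(W, A, B, alpha, r):
    # W: d_out x d_in frozen weight; B: d_out x r; A: r x d_in
    scale = alpha / r
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 frozen weight with a rank-1 update
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # d_out x r
A = [[0.5, 0.5]]     # r x d_in
W_eff = lora_effective_weight(W, A, B, alpha=2, r=1)
```

Only `A` and `B` are trained, which is why LoRA finetuning fits on a single discrete GPU.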

#### Setup and activate conda env

```bash
conda env create -f env.yml
conda activate pyt_llm_xpu
```

**Warning**: Once PyTorch and Intel Extension for PyTorch are set up, install `peft` without its dependencies (e.g. `pip install peft --no-deps`), as `peft` requires PyTorch 2.0, which is not yet supported on Intel XPU devices.

#### Generate data

Fetch a book from Project Gutenberg (default: *Pride and Prejudice*) and generate the dataset.

```bash
python fetch_data.py
```
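A script like this typically splits the book's text into fixed-size chunks and writes them out as JSON records. Here is a hypothetical sketch of that step (the real `fetch_data.py` may use a different record format and chunking scheme):

```python
# Hypothetical sketch of dataset generation: chunk a book's text into
# fixed-size word windows and save them as JSON records for finetuning.
import json

def build_dataset(text, chunk_words=64):
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    return [{"text": c} for c in chunks]

# stand-in text instead of a downloaded book
book = "It is a truth universally acknowledged " * 40
records = build_dataset(book, chunk_words=64)
with open("book_data.json", "w") as f:
    json.dump(records, f)
```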

#### Finetune

```bash
python finetune.py --input_data ./book_data.json --batch_size=64 --micro_batch_size=16 --num_steps=300
```
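In finetuning scripts of this style (an assumption based on the alpaca-lora script this finetuner adapts), `--batch_size` and `--micro_batch_size` interact through gradient accumulation: the effective batch is assembled from several smaller micro-batches that actually fit on the GPU.

```python
# With the flags above, the optimizer steps once every 4 micro-batches
# (assumed behavior, following the alpaca-lora convention).
batch_size = 64        # effective batch per optimizer step
micro_batch_size = 16  # what fits on the GPU at once
gradient_accumulation_steps = batch_size // micro_batch_size
```

Lowering `--micro_batch_size` reduces memory use without changing the effective batch size, at the cost of more accumulation steps.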

#### Inference

For inference, you can either provide an input prompt, or let the model use a default prompt.

##### Without user provided prompt

```bash
python inference.py --infer
```

##### Using your own prompt for inference

```bash
python inference.py --infer --prompt "my prompt"
```

##### Benchmark Inference

```bash
python inference.py --bench
```
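An inference benchmark like `--bench` typically times generation calls and reports throughput in tokens per second. A generic sketch of that measurement (hypothetical; the real `inference.py` may differ):

```python
# Time a generation callable over a few runs and report tokens/sec.
import time

def benchmark(generate, prompt, n_runs=3):
    # generate(prompt) -> sequence of generated tokens
    total_time, total_tokens = 0.0, 0
    for _ in range(n_runs):
        start = time.perf_counter()
        out = generate(prompt)
        total_time += time.perf_counter() - start
        total_tokens += len(out)
    return total_tokens / total_time

# stand-in "model" so the sketch runs without a GPU
tps = benchmark(lambda p: p.split(), "the quick brown fox", n_runs=2)
```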
1: adapted from [this source](https://github.com/modal-labs/doppel-bot/blob/main/src/finetune.py) [↩](#a1)
2: adapted from [this source](https://github.com/tloen/alpaca-lora/blob/65fb8225c09af81feb5edb1abb12560f02930703/finetune.py) [↩](#a2)