# LLMTuner

LLMTuner: Fine-tune Llama, Whisper, and other LLMs with best practices like LoRA and QLoRA, through a sleek, scikit-learn-inspired interface.

LLMTuner is released under the Apache 2.0 license.

## Installation

### With pip

This repository is tested on Python 3.7+.

You can install LLMTuner directly from GitHub with pip:

```bash
pip3 install git+https://github.com/promptslab/LLMTuner.git
```
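
A quick import check confirms that the installation worked; the `llmtuner` module name below is the same one imported in the Quick tour.

```python
# Sanity check: this should succeed if LLMTuner installed correctly.
import llmtuner

print("llmtuner imported from", llmtuner.__file__)
```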

## Quick tour

To fine-tune large models, we provide the `Tuner` API.

```python

from llmtuner import Tuner, Dataset, Model, Deployment

# Initialize the Whisper model with parameter-efficient fine-tuning
model = Model("openai/whisper-small", use_peft=True)

# Create a dataset instance for the audio files
dataset = Dataset('/path/to/audio_folder')

# Set up the tuner with the model and dataset for fine-tuning
tuner = Tuner(model, dataset)

# Fine-tune the model
trained_model = tuner.fit()

# Inference with Fine-tuned model
tuner.inference('sample.wav')

# Launch an interactive UI for the fine-tuned model
tuner.launch_ui('Model Demo UI')

# Set up deployment for the fine-tuned model
deploy = Deployment('aws') # Options: 'fastapi', 'aws', 'gcp', etc.

# Launch the model deployment
deploy.launch()

```
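
The `Deployment('aws')` call above hints at deployment targets such as `'fastapi'`, `'aws'`, and `'gcp'`. As a rough, hypothetical illustration of what serving a fine-tuned Whisper checkpoint behind a FastAPI endpoint could look like (this is not LLMTuner's Deployment API, and the `./whisper-finetuned` path is assumed):

```python
# Hypothetical serving sketch using FastAPI and the transformers ASR pipeline.
# Not LLMTuner's deployment code; purely illustrative.
from fastapi import FastAPI, UploadFile
from transformers import pipeline

app = FastAPI()

# Assumes the fine-tuned checkpoint was saved to this (made-up) local path.
asr = pipeline("automatic-speech-recognition", model="./whisper-finetuned")

@app.post("/transcribe")
async def transcribe(file: UploadFile):
    audio_bytes = await file.read()  # raw bytes of the uploaded audio
    result = asr(audio_bytes)        # the ASR pipeline accepts raw audio bytes
    return {"text": result["text"]}
```

Run it locally with `uvicorn app:app --reload`, assuming the sketch is saved as `app.py`.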

## Features 🤖

- 🏋️‍♂️ Effortless Fine-Tuning: Fine-tune state-of-the-art LLMs like Whisper and Llama with minimal code
- ⚡️ LoRA and QLoRA Support: Built-in utilities for parameter-efficient techniques like LoRA and QLoRA (see the sketch after this list)
- ⚡️ Interactive UI: Launch web app demos for your fine-tuned models with one click
- 🏎️ Simplified Inference: Fast inference without separate code
- 🌐 Deployment Readiness: (Coming Soon) Deploy your models to AWS, GCP, etc. with minimal effort, ready to share with the world
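
LLMTuner wraps these techniques behind `use_peft=True` in the Quick tour, so no manual setup is needed. For readers curious what a LoRA configuration looks like underneath, here is a generic sketch using the Hugging Face `peft` and `transformers` libraries; it is illustrative only, not LLMTuner's internal code, and the hyperparameters are common defaults rather than values taken from LLMTuner.

```python
# Generic LoRA setup with Hugging Face peft + transformers (illustrative only).
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Common LoRA hyperparameters: adapter rank, scaling, dropout, and the
# attention projections to adapt in Whisper's transformer blocks.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)

peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # only the small LoRA adapters train
```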

### Supported Models

| Task Name | Colab Notebook | Status |
|-------------|-------|-------|
| Fine-Tune Whisper | [Fine-Tune Whisper](https://colab.research.google.com/drive/1j_1AcPRk4s1uivVRSwrsOfodjPv55Jpc?usp=sharing) | ✅ |
| Fine-Tune Whisper Quantized | [LoRA](https://colab.research.google.com/drive/1ia9KvqEGOxARtJScPBY6ccF8l41-w_l5?usp=sharing) | ✅ |
| Fine-Tune Llama | [Coming soon..](#) | ✅ |

## Community


If you are interested in fine-tuning open-source LLMs, building scalable large models, prompt engineering, and other recent research discussions, please consider joining PromptsLab.


Join us on Discord

## Citation

```bibtex
@misc{LLMtuner2023,
title = {LLMTuner: Fine-Tune Large Models with best practices through a sleek, scikit-learn-inspired interface.},
author = {Pal, Ankit},
year = {2023},
howpublished = {\url{https://github.com/promptslab/LLMtuner}}
}

```

## 💁 Contributing

We welcome contributions to our open-source project, including new features, infrastructure improvements, and more comprehensive documentation.
Please see the [contributing guidelines](#).