Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
FineTune LLMs in few lines of code (Text2Text, Text2Speech, Speech2Text)
- Host: GitHub
- URL: https://github.com/promptslab/llmtuner
- Owner: promptslab
- License: apache-2.0
- Created: 2023-04-28T12:44:42.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-01-13T10:32:58.000Z (11 months ago)
- Last Synced: 2024-12-11T15:49:19.018Z (11 days ago)
- Topics: fine-tuning, fine-tuning-llm, finetune, finetune-gpt, finetune-llama, finetune-llm, finetune-llms, finetune-whisper, finetunechatgpt, finetuning, finetuning-large-language-models, finetuning-rl, llm, llm-framework, llm-inference, llm-training, llmops, llmtuner, whisper, whisper-finetune
- Language: Python
- Homepage:
- Size: 591 KB
- Stars: 232
- Watchers: 4
- Forks: 15
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
# LLMTuner
LLMTuner: Fine-tune Llama, Whisper, and other LLMs with best practices such as LoRA and QLoRA, through a sleek, scikit-learn-inspired interface.
## Installation
### With pip
This repository is tested on Python 3.7+. You can install LLMTuner with pip:
```bash
pip3 install git+https://github.com/promptslab/LLMTuner.git
```

## Quick tour
To fine-tune large models, we provide the `Tuner` API.
```python
from llmtuner import Tuner, Dataset, Model, Deployment

# Initialize the Whisper model with parameter-efficient fine-tuning
model = Model("openai/whisper-small", use_peft=True)

# Create a dataset instance for the audio files
dataset = Dataset('/path/to/audio_folder')

# Set up the tuner with the model and dataset for fine-tuning
tuner = Tuner(model, dataset)

# Fine-tune the model
trained_model = tuner.fit()

# Inference with the fine-tuned model
tuner.inference('sample.wav')

# Launch an interactive UI for the fine-tuned model
tuner.launch_ui('Model Demo UI')

# Set up deployment for the fine-tuned model
deploy = Deployment('aws')  # Options: 'fastapi', 'aws', 'gcp', etc.

# Launch the model deployment
deploy.launch()
```
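The `Tuner` workflow above follows scikit-learn's estimator pattern: construct an object, call `fit`, then run predictions on it. As a rough, self-contained sketch of that design choice (every class and method body here is an illustrative stand-in, not llmtuner internals):

```python
class MiniTuner:
    """Toy illustration of a scikit-learn-style fine-tuning wrapper.

    This is NOT the real llmtuner implementation; it only shows the
    construct -> fit -> inference flow the Tuner API exposes.
    """

    def __init__(self, model, dataset):
        self.model = model        # e.g. a model identifier or wrapper
        self.dataset = dataset    # e.g. a path to training data
        self.fitted = False

    def fit(self):
        # A real implementation would run the training loop here.
        self.fitted = True
        return self.model

    def inference(self, sample):
        if not self.fitted:
            raise RuntimeError("call fit() before inference()")
        # A real implementation would run the model on the sample.
        return f"transcription of {sample}"


tuner = MiniTuner("openai/whisper-small", "/path/to/audio_folder")
tuner.fit()
print(tuner.inference("sample.wav"))  # transcription of sample.wav
```

Keeping training state inside the object is what lets `inference` and `launch_ui` work on the same handle right after `fit` returns.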
## Features 🤖

- 🏋️‍♂️ Effortless fine-tuning: Fine-tune state-of-the-art LLMs such as Whisper and Llama with minimal code
- ⚡️ Built-in utilities for parameter-efficient techniques such as LoRA and QLoRA
- ⚡️ Interactive UI: Launch web app demos for your fine-tuned models with one click
- 🏎️ Simplified inference: Fast inference without separate code
- 🌐 Deployment readiness (coming soon): Deploy your models to AWS, GCP, etc. with minimal effort, ready to share with the world
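The LoRA and QLoRA utilities mentioned above rest on one idea: freeze the pretrained weight matrix and train only a low-rank update. A minimal NumPy sketch of that arithmetic (not llmtuner code; QLoRA additionally quantizes the frozen weights, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4      # weight shape (d x k) and LoRA rank r
alpha = 8                # LoRA scaling hyperparameter

W = rng.standard_normal((d, k))   # frozen pretrained weight, never updated
A = rng.standard_normal((r, k))   # trainable, Gaussian init
B = np.zeros((d, r))              # trainable, zero init -> adapter starts as a no-op

def adapted_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B get gradients.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((1, k))
# At initialization B is zero, so the adapted model matches the base model.
assert np.allclose(adapted_forward(x), x @ W.T)

full_params = W.size            # 4096 parameters if fine-tuning W directly
lora_params = A.size + B.size   # 512 trainable parameters instead
print(full_params, lora_params)  # 4096 512
```

At rank 4 the adapter trains roughly 12% as many parameters as full fine-tuning of this layer; the gap widens as the weight matrices grow.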
### Supported Models
| Task Name | Colab Notebook | Status |
|-------------|-------|-------|
| Fine-Tune Whisper | [Fine-Tune Whisper](https://colab.research.google.com/drive/1j_1AcPRk4s1uivVRSwrsOfodjPv55Jpc?usp=sharing) | ✅ |
| Fine-Tune Whisper Quantized | [LoRA](https://colab.research.google.com/drive/1ia9KvqEGOxARtJScPBY6ccF8l41-w_l5?usp=sharing) | ✅ |
| Fine-Tune Llama | [Coming soon..](#) | ✅ |
## Community
If you are interested in fine-tuning open-source LLMs, building scalable large models, prompt engineering, and the latest research discussions, please consider joining the PromptsLab community.
```bibtex
@misc{LLMtuner2023,
title = {LLMTuner: Fine-Tune Large Models with best practices through a sleek, scikit-learn-inspired interface.},
author = {Pal, Ankit},
year = {2023},
howpublished = {\url{https://github.com/promptslab/LLMtuner}}
}
```
## 💁 Contributing
We welcome any contributions to our open source project, including new features, improvements to infrastructure, and more comprehensive documentation.
Please see the [contributing guidelines](#).