Flan-T5 model fine-tuning with LoRA and LangChain
https://github.com/mltrev23/flan-t5-fine-tune
- Host: GitHub
- URL: https://github.com/mltrev23/flan-t5-fine-tune
- Owner: mltrev23
- Created: 2024-07-02T13:32:38.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-08-20T12:51:15.000Z (8 months ago)
- Last Synced: 2025-03-22T01:04:34.569Z (about 1 month ago)
- Topics: ai, fine-tune, flan-t5, langchain, lora, model-fine-tune, modeling, python
- Language: Python
- Homepage:
- Size: 34.2 KB
- Stars: 8
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Fine-tuning Flan-T5
This project fine-tunes the Flan-T5 language model using Hugging Face's Transformers library. By following the steps below, you can fine-tune the model and use it for inference.
---
## Prerequisites
Before getting started, make sure you have the following:
- Hugging Face API token (HF token)
- Python installed on your system
- CUDA-enabled GPU (for training)

---
## Setup
1. Clone this repository to your local machine.
```bash
git clone https://github.com/mltrev23/flan-t5-fine-tune
```

2. Install the required packages using the following command:
```bash
pip install -r requirements.txt
```
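3. (Optional) Log in to the Hugging Face Hub if your scripts download gated models or push checkpoints. A minimal sketch using the `huggingface_hub` client (the token value below is a placeholder, not a real credential):
```python
from huggingface_hub import login

# Authenticate with your Hugging Face access token (placeholder value)
login(token="hf_your_token_here")
```

---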
## Fine-tuning Flan-T5
There are four methods of fine-tuning available:
1. **Additive Fine-Tuning**: Adds new trainable layers while keeping the original weights frozen (a LoRA sketch follows this list).
```shell
python additive-fine-tuning.py
```

2. **Full Fine-Tuning**: Trains all weights of the model.
```shell
python full-fine-tuning.py
```

3. **Selective Fine-Tuning**: Chooses specific layers of the original model to train.
```shell
python selective-fine-tuning.py
```

4. **Template Fine-Tuning**: Uses a predefined template for training. The template can be modified as needed.
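Since the project's topics advertise LoRA, a parameter-efficient variant of additive fine-tuning, here is a minimal sketch of wrapping Flan-T5 with LoRA adapters via the `peft` library. The hyperparameters and target modules below are illustrative assumptions, not values taken from this repo's scripts:
```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load the base Flan-T5 checkpoint (assumed; the repo may use a different size)
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Wrap the model with LoRA adapters; only the adapter weights are trained
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # adapter rank (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention projections (common choice, assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```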
---
### Custom Data Ingestion
To ingest your own data for fine-tuning, modify the code in your scripts to load your dataset. Here’s an example of loading a text dataset:
```python
from transformers import LineByLineTextDataset

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path='datasets/bittensor.txt', block_size=128)
```
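Note that recent Transformers releases mark `LineByLineTextDataset` as deprecated in favor of the `datasets` library. An equivalent approach, sketched here under the assumption of the same text file and a `google/flan-t5-base` tokenizer:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")  # assumed base checkpoint

# Each line of the text file becomes one example
raw = load_dataset("text", data_files="datasets/bittensor.txt")
dataset = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
```

---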
## Inference
To perform inference using the fine-tuned Flan-T5 model, you can use the following scripts:
1. **Generate Using Forward Pass**: This script demonstrates generating text using a forward pass approach.
```shell
python generate-using-forward.py
```

2. **Main Inference Script**: This script provides a straightforward way to generate outputs from the model.
```shell
python main.py
```

### Example Inference
You can modify the input text in the `main.py` file to test the model:
```python
input_text = ["What is FAANG?"]
```

This will generate an output based on the input provided.
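For reference, end-to-end seq2seq generation with a fine-tuned checkpoint typically looks like the following sketch. The checkpoint path is a placeholder, and the repo's `main.py` may differ in detail:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder path; point this at your fine-tuned checkpoint directory
model_dir = "path/to/fine-tuned-flan-t5"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

# Tokenize the example prompt from main.py and generate a response
inputs = tokenizer(["What is FAANG?"], return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```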
---
## Happy Fine-Tuning!
Feel free to modify the scripts as needed for your specific tasks and datasets. If you encounter any issues, check the Hugging Face documentation or seek help from the community.