
# FineTuning Large Language Models
![image](https://github.com/user-attachments/assets/b37062ba-6425-4263-a8f0-ffa0be4a697c)




### What is Fine-tuning?

Fine-tuning is a machine learning technique in which a pre-trained model is further trained on a new dataset, usually smaller and domain-specific, to adapt it to a particular task. In this process the model retains the knowledge it learned during its initial training and applies it to the new task, typically requiring fewer resources and less training time than training a model from scratch.

Fine-tuning is popular in NLP, computer vision, and other AI fields, especially when using large-scale models like **BERT**, **GPT**, **T5**, or **ResNet**, which are pre-trained on general datasets.

### Key Steps in Fine-tuning
1. **Load Pre-trained Model**: Start with a model pre-trained on a large, diverse dataset.
2. **Adapt Architecture**: Adjust the model's layers or output to match the specific task (e.g., for classification or generation).
3. **Train on New Dataset**: Train the model on a new, smaller dataset specific to your task, often using a smaller learning rate to avoid overfitting or disrupting the pre-trained weights (a minimal code sketch of these steps follows this list).

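Below is a minimal, hedged sketch of these three steps using Hugging Face Transformers. The base model (`bert-base-uncased`), the dataset (`imdb`), and the hyperparameters are illustrative assumptions, not settings taken from the projects in this repository.

```python
# Sketch: load a pre-trained model, adapt its head, and fine-tune on a small dataset.
# Model name, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# 1. Load a pre-trained model and its tokenizer.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 2. Adapt the architecture: attach a fresh 2-class classification head.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# 3. Train on a small task-specific dataset with a low learning rate.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,                 # small LR so the pre-trained weights are not disrupted
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
)
trainer.train()
```
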
---

### Challenges in Fine-tuning

1. **Overfitting**: When fine-tuning on a small dataset, there’s a risk of the model overfitting and losing its generalization capabilities.
- **Solution**: Use techniques like data augmentation, early stopping, and regularization. You can also freeze some pre-trained layers and fine-tune only the last few layers to prevent overfitting (see the freezing sketch after this list).

2. **Catastrophic Forgetting**: The model may "forget" the general knowledge it learned during pre-training when fine-tuned on a small, task-specific dataset.
- **Solution**: Use a lower learning rate or freeze parts of the model (e.g., lower layers) to preserve the pre-trained knowledge.

3. **Limited Training Data**: Fine-tuning often involves working with smaller datasets, which may not be sufficient to adapt the model effectively.
- **Solution**: Use data augmentation, transfer learning (by leveraging pre-trained models), and regularization techniques. Additionally, combining multiple small datasets can help.

4. **Domain Mismatch**: If there is a large difference between the domain of the pre-trained model and the target task (e.g., fine-tuning a model trained on English for use in a different language), performance might degrade.
- **Solution**: Gradual unfreezing, where you unfreeze and fine-tune the model’s deeper layers a few at a time, can help the model adapt to the new domain.

5. **Hyperparameter Tuning**: Finding the right hyperparameters (e.g., learning rate, batch size, weight decay) can be challenging during fine-tuning.
- **Solution**: Use grid search, random search, or more sophisticated approaches like Bayesian optimization to find the best hyperparameters. Start with lower learning rates since pre-trained models are sensitive to large updates.

6. **Computational Resources**: Fine-tuning large models, especially transformer-based ones, can require significant computational resources in terms of memory and processing power.
- **Solution**: Use techniques like **[`Low-Rank Adaptation (LoRA)`](https://github.com/zeyadusf/topics-in-nlp-llm/tree/main/PEFT%20(Parameter-Efficient%20Fine-Tuning)/LoRA)** or other methods of **[`Parameter-Efficient Fine-Tuning (PEFT)`](https://github.com/zeyadusf/topics-in-nlp-llm/tree/main/PEFT%20(Parameter-Efficient%20Fine-Tuning))**, which reduce the number of trainable parameters and memory usage, or opt for 4-bit or 8-bit quantization to reduce model size (a LoRA-with-quantization sketch follows this list).

7. **Evaluation and Validation**: Properly evaluating a fine-tuned model on new data can be difficult if the dataset is unbalanced or there are no standard metrics for the task.
- **Solution**: Use cross-validation, domain-specific evaluation metrics (e.g., BLEU, ROUGE for text, F1 for classification), and create robust validation sets.

8. **Bias in Pre-trained Models**: The pre-trained models might carry biases from the data they were initially trained on, which can impact performance on new tasks.
- **Solution**: Bias mitigation techniques, like re-sampling the training data or fine-tuning on more representative data, can help reduce the impact of unwanted biases.
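
To make the freezing advice in challenges 1, 2, and 4 concrete, here is a small sketch that keeps most of a BERT encoder frozen and trains only the top two layers plus the classification head; the model name and the layer cutoff are assumptions chosen for illustration.

```python
# Sketch: freeze all pre-trained weights except the last two encoder layers and the head.
# The model name and the choice of layers 10-11 are illustrative assumptions.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

for name, param in model.named_parameters():
    # Keep the classifier head and the two highest encoder layers trainable; freeze the rest.
    param.requires_grad = (
        name.startswith("classifier")
        or ".layer.10." in name
        or ".layer.11." in name
    )

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")
```

For gradual unfreezing, the same loop can be re-run between training phases with a lower layer cutoff each time.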

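For challenge 6, the sketch below combines 4-bit quantization (via `bitsandbytes`) with LoRA adapters from the PEFT library; the base model and the LoRA hyperparameters are assumptions for illustration, not the exact settings used in the projects listed below.

```python
# Sketch: LoRA fine-tuning on top of a 4-bit quantized base model (requires bitsandbytes, peft, accelerate).
# Base model and LoRA hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load weights in 4-bit to cut memory usage
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # illustrative base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                   # rank of the low-rank update matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of the full model
```
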
---





# Projects


| # | Project Name | Model Name | Task | GitHub | Kaggle | Hugging Face | Space | Notes |
|---|--------------|------------|------|--------|--------|--------------|-------|-------|
| 1 | DAIGT | DeBERTa | Classification | DAIGT \| Catch the AI | DAIGT \| DeBERTa | deberta-DAIGT-MODELS | Detection-of-AI-Generated-Text | Part of my Graduation Project (Catch The AI) |
| 2 | DAIGT | RoBERTa | Classification | DAIGT \| Catch the AI | DAIGT \| RoBERTa | roberta-DAIGT-kaggle | Detection-of-AI-Generated-Text | Part of my Graduation Project (Catch The AI) |
| 3 | DAIGT | BERT | Classification | DAIGT \| Catch the AI | DAIGT \| BERT | bert-DAIGT-MODELS | Detection-of-AI-Generated-Text | Part of my Graduation Project (Catch The AI) |
| 4 | DAIGT | DistilBERT | Classification | DAIGT \| Catch the AI | DAIGT \| DistilBERT | distilbert-DAIGT-MODELS | Detection-of-AI-Generated-Text | Part of my Graduation Project (Catch The AI) |
| 5 | Summarization-by-Finetuning-FlanT5-LoRA | FlanT5 | Summarization | Summarization-by-Finetuning-FlanT5-LoRA | Summarization by Finetuning FlanT5-LoRA | FlanT5Summarization-samsum | Summarization by Flan-T5-Large with PEFT | Uses PEFT and LoRA |
| 6 | Finetune Llama2 | Llama2 | Text Generation | FineTune-Llama2 | FineTune-Llama2 | llama2-miniguanaco | --- | ... |
| 7 | Text 2 Pandas | T5 base | Text2Text Generation | Text2Pandas | Text2Pandas \| T5 | text2pandas-T5 | Text2Pandas | Take a look at the repo. |
| 8 | ... | ... | ... | ... | ... | ... | ... | ... |



### Related Repositories

- **[`LLMs from Scratch`](https://github.com/zeyadusf/LLMs-from-Scratch)**
- **[`Topics in NLP and LLMs`](https://github.com/zeyadusf/topics-in-nlp-llm)**


## 📞 Contact

- GitHub: [zeyadusf](https://github.com/zeyadusf)
- Other platforms: **zeyadusf** / **Zeyad Usf**