Finetune LLMs using LoRA in Colab on Custom Datasets
- Host: GitHub
- URL: https://github.com/ambidextrous9/finetune-llms-using-lora-in-colab-on-custom-datasets
- Owner: ambideXtrous9
- Created: 2023-12-14T19:27:31.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-12-17T15:05:27.000Z (almost 2 years ago)
- Last Synced: 2025-01-11T21:32:56.393Z (9 months ago)
- Language: Jupyter Notebook
- Size: 1.21 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Finetune-LLMs-using-LoRA-in-Colab-on-Custom-Datasets

With LoRA only a small fraction of the model's weights are updated; for the model fine-tuned here, roughly 0.33% of the parameters are trainable:

`trainable params: 9,437,184 || all params: 2,859,194,368 || trainable%: 0.33006444422319176`
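This line is the output of PEFT's `print_trainable_parameters()` after wrapping the base model with a LoRA adapter. A minimal sketch of that setup is below; the base checkpoint, rank, and target modules are assumptions for illustration rather than the exact settings used in the notebook, so the reported counts will differ:

```python
# Minimal LoRA setup with Hugging Face Transformers + PEFT (illustrative values).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "microsoft/phi-2"  # assumed base model (~2.8B params), not confirmed by the repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                     # rank of the low-rank update matrices
    lora_alpha=32,            # scaling applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],  # attention projections (assumed)
)

# Wrap the base model; only the injected LoRA matrices remain trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# -> trainable params: ... || all params: ... || trainable%: ...
```

The trainable percentage depends directly on the chosen rank `r` and on which modules receive adapters, which is why it stays well below 1% of the full model.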
- [Causal LLMs and Seq2Seq Architectures](https://heidloff.net/article/causal-llm-seq2seq/#sequence-to-sequence)
- [Understanding Causal LLM’s, Masked LLM’s, and Seq2Seq: A Guide to Language Model Training Approaches](https://medium.com/@tom_21755/understanding-causal-llms-masked-llm-s-and-seq2seq-a-guide-to-language-model-training-d4457bbd07fa)
- ["Compute Metrics" with Huggingface Question Answering](https://stackoverflow.com/questions/75744031/why-do-we-need-to-write-a-function-to-compute-metrics-with-huggingface-questio) (see the sketch below)