https://github.com/sccsmartcode/deep-learning-03-llm-finetuning
A scalable and modular framework for fine-tuning large language models (LLMs) using LoRA and QLoRA. It supports 4-bit/8-bit quantization via bitsandbytes, builds on Hugging Face Transformers, and covers instruction-tuning workflows across a variety of tasks and datasets. Designed for reproducibility, extensibility, and efficient experimentation.
- Host: GitHub
- URL: https://github.com/sccsmartcode/deep-learning-03-llm-finetuning
- Owner: SCCSMARTCODE
- License: MIT
- Created: 2025-05-17T23:36:38.000Z (9 months ago)
- Default Branch: main
- Last Pushed: 2025-05-20T22:44:21.000Z (9 months ago)
- Last Synced: 2025-06-08T15:01:51.518Z (8 months ago)
- Topics: bitsandbytes, finetuning, huggingface, llama2, llm, peft, qlora, transformers
- Language: Jupyter Notebook
- Homepage:
- Size: 66.4 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Deep-Learning-03-LLM-FineTuning
A scalable and modular framework for fine-tuning large language models (LLMs) using LoRA and QLoRA. It supports 4-bit/8-bit quantization via bitsandbytes, builds on Hugging Face Transformers, and covers instruction-tuning workflows across a variety of tasks and datasets. Designed for reproducibility, extensibility, and efficient experimentation.
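
The repository itself ships as Jupyter notebooks; the snippet below is only a minimal, hypothetical sketch of the QLoRA recipe the description and topics point at (Transformers + PEFT + bitsandbytes), not code taken from the repo. The model id, LoRA rank, and target modules are illustrative assumptions.

```python
# Hypothetical QLoRA setup sketch: load a base model in 4-bit NF4 via bitsandbytes,
# then attach a LoRA adapter with PEFT. Values below are illustrative, not the repo's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumed base model (repo topics mention llama2)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # QLoRA: base weights quantized to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for k-bit training (casts norms, enables input grads)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                  # illustrative adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # typical LLaMA attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the LoRA adapter weights are trainable
```

From here, instruction tuning would proceed with a standard `transformers.Trainer` (or a similar supervised fine-tuning loop) on an instruction dataset: the 4-bit base model stays frozen while only the small LoRA adapter is updated, which is what keeps memory use low enough for consumer GPUs.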