https://github.com/rbhatia46/llama2-finetune-finance-alpaca-colab
Fine-tuning a LLaMA 2 model on Finance Alpaca using 4-/8-bit quantization, easily feasible on Colab.
- Host: GitHub
- URL: https://github.com/rbhatia46/llama2-finetune-finance-alpaca-colab
- Owner: rbhatia46
- Created: 2023-08-11T09:51:02.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-08-11T10:03:15.000Z (about 2 years ago)
- Last Synced: 2025-03-31T01:11:07.615Z (6 months ago)
- Language: Python
- Size: 2.93 KB
- Stars: 3
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# llama2-finetune-finance-alpaca-colab
Fine-tuning a LLaMA 2 model on Finance Alpaca using 4-/8-bit quantization, easily feasible on Colab. You will need access to LLaMA 2 via Hugging Face; replace the token placeholder in the code with your own Hugging Face access token, as in the sketch below.
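A minimal sketch of the authentication step, assuming the standard gated checkpoint id `meta-llama/Llama-2-7b-hf`; the token string is a placeholder for your own:

```python
# Sketch: authenticate with a Hugging Face access token, then load the
# gated LLaMA-2 weights. The token below is a placeholder.
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_...")  # replace with your own access token

model_name = "meta-llama/Llama-2-7b-hf"  # gated repo; requires approved access
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```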
The code incorporates quantization so that training runs on limited infrastructure: a Tesla T4 (the free Colab tier) handles 4-bit quantization comfortably, and even 8-bit, depending on the input context length.
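As an illustration of what 4-bit loading typically looks like, here is a sketch of the common QLoRA-style recipe with `transformers`, `bitsandbytes`, and `peft`; the exact arguments in this repo's code may differ:

```python
# Sketch of 4-bit (QLoRA-style) loading; this is the common recipe for
# fitting a 7B model on a 16 GB T4, not necessarily this repo's exact code.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # use load_in_8bit=True for 8-bit instead
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bfloat16 support
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Train only small LoRA adapters on top of the frozen, quantized base weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```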
This can be generalized to any other dataset, and to any model architecture apart from LLaMA (anything on the Hugging Face Hub); see the sketch below.
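For instance, swapping datasets is a one-line change. The sketch below assumes the `gbharti/finance-alpaca` dataset id and the Alpaca-style `instruction`/`input`/`output` fields; any Hub dataset with a similar schema works the same way:

```python
# Sketch: the same pipeline works for any Hub dataset or causal-LM checkpoint.
# The dataset id and prompt template here are illustrative, not fixed by this repo.
from datasets import load_dataset

dataset = load_dataset("gbharti/finance-alpaca", split="train")

# Flatten each Alpaca-style record into a single training text.
def to_text(example):
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n" + example["input"]
    return {"text": f"### Instruction:\n{prompt}\n\n### Response:\n{example['output']}"}

dataset = dataset.map(to_text)
```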