https://github.com/ajlearner46/deduplicate-flanv2-finetune-llama3-
Perform deduplication on the FLAN v2 dataset & fine-tune LLaMa3 using this dataset
- Host: GitHub
- URL: https://github.com/ajlearner46/deduplicate-flanv2-finetune-llama3-
- Owner: AJlearner46
- Created: 2024-06-15T06:09:46.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-06-19T06:34:01.000Z (over 1 year ago)
- Last Synced: 2025-02-24T09:36:42.132Z (8 months ago)
- Topics: cosine-similarity, deduplication, fine-tuning, flan-t5, gguf, llama3, llm, qlora, unsloth
- Language: Jupyter Notebook
- Homepage:
- Size: 27.3 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Deduplicate-flanv2-finetune-LLaMa3-
### FLAN v2 CoT Deduplicated Dataset
### Data Preprocessing
- Remove instructions with fewer than 100 tokens in the 'targets' field.
- Deduplicate the dataset using cosine similarity with a threshold of 0.95 (a minimal sketch follows this list).
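
A minimal sketch of these two steps, assuming TF-IDF vectors for the cosine-similarity computation (the notebook may instead use embeddings derived from bert-base-uncased); the data and helper names are illustrative, not the repository's code:

```python
# Illustrative preprocessing: drop short targets, then deduplicate at
# cosine similarity >= 0.95. TF-IDF is an assumption, not the repo's code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def deduplicate(texts, threshold=0.95):
    """Keep the first occurrence in each group of near-duplicate texts."""
    sims = cosine_similarity(TfidfVectorizer().fit_transform(texts))
    keep, dropped = [], set()
    for i in range(len(texts)):
        if i in dropped:
            continue
        keep.append(i)
        for j in range(i + 1, len(texts)):  # flag later near-duplicates of i
            if sims[i, j] >= threshold:
                dropped.add(j)
    return [texts[i] for i in keep]         # note: full matrix is O(n^2)

rows = [
    {"inputs": "q1", "targets": "2 plus 2 equals 4, so the answer is 4. " * 5},
    {"inputs": "q2", "targets": "2 plus 2 equals 4 so the answer is 4! " * 5},
    {"inputs": "q3", "targets": "too short"},
]
rows = [r for r in rows if len(r["targets"]) >= 100]  # char-length proxy for the filter
print(deduplicate([r["targets"] for r in rows]))      # one near-duplicate removed
```

Computing the full pairwise similarity matrix is quadratic in dataset size, so a real run over FLAN v2 CoT would likely batch or block this step.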
### Finetune
- Fine-tuned the LLaMa3-8b model with this dataset and quantized it (a sketch follows).
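
Given the repo's topics (qlora, unsloth, gguf), the fine-tuning step plausibly looks like the following Unsloth/QLoRA sketch; the base checkpoint, hyperparameters, and column names are assumptions, and the API shown is trl/unsloth as of mid-2024:

```python
# Hedged sketch of the fine-tune + quantize step via Unsloth QLoRA.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # 4-bit base model for QLoRA
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16, lora_alpha=16, lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("ayushrupapara/flanv2_cot_dedepulicated", split="train")
# Concatenate prompt and answer into one training string (column names assumed).
dataset = dataset.map(lambda r: {"text": r["inputs"] + "\n" + r["targets"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Unsloth helper that merges the LoRA adapter and writes GGUF quantized weights.
model.save_pretrained_gguf("llama3_flanv2_cot", tokenizer, quantization_method="q4_k_m")
```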
### Hugging Face links
- Dataset: https://huggingface.co/datasets/ayushrupapara/flanv2_cot_dedepulicated
- Model: https://huggingface.co/ayushrupapara/llama3_8b_flanv2_cot
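
A quick way to pull the published artifacts, assuming the model repo hosts standard transformers weights (if only GGUF files were uploaded, a llama.cpp-based loader would be needed instead):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Deduplicated FLAN v2 CoT data (repo name from the link above; split assumed).
dataset = load_dataset("ayushrupapara/flanv2_cot_dedepulicated", split="train")
print(dataset[0])

# Fine-tuned model; works only if transformers-format weights are in the repo.
tokenizer = AutoTokenizer.from_pretrained("ayushrupapara/llama3_8b_flanv2_cot")
model = AutoModelForCausalLM.from_pretrained("ayushrupapara/llama3_8b_flanv2_cot")
```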
### Acknowledgments
- The original dataset is SirNeural/flan_v2 on Hugging Face.
- Tokenizer used: bert-base-uncased from Hugging Face.
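
Assuming the 100-token length filter above is measured with this tokenizer (the README does not state how the tokenizer is used), the check could look like:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

def long_enough(target_text, min_tokens=100):
    # add_special_tokens=False so [CLS]/[SEP] don't inflate the count
    return len(tok(target_text, add_special_tokens=False).input_ids) >= min_tokens

print(long_enough("a short answer"))  # -> False
```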