Projects in Awesome Lists tagged with low-rank-adaptation
A curated list of projects in awesome lists tagged with low-rank-adaptation.
https://github.com/serp-ai/llama-8bit-lora
Repository for Chat LLaMA: training a LoRA for the LLaMA (1 or 2) models on Hugging Face with 8-bit or 4-bit quantization. For research use only.
chat-llama chatllama large-language-model large-language-models llm lora low-rank-adaptation
Last synced: 11 Apr 2025
https://github.com/chongjie-si/subspace-tuning
A generalized framework for subspace tuning methods in parameter efficient fine-tuning.
adapter commonsense-reasoning glue llama llama2-7b llama3-8b lora lora-dash low-rank-adaptation natural-language-generation natural-language-processing natural-language-understanding parameter-efficient-fine-tuning pretrained-models soft-prompt-tuning subject-driven-generation subspace-tuning
Last synced: 05 Apr 2025
https://github.com/hkproj/pytorch-lora
LoRA (Low-Rank Adaptation of Large Language Models) implemented in PyTorch.
lora low-rank-adaptation paper-implementations pytorch
Last synced: 06 May 2025
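The core idea behind a paper implementation like the one above can be sketched in a few lines (a minimal NumPy sketch, not this repo's actual code; the shapes, zero-init of B, and the `alpha / r` scaling follow the LoRA paper):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass of a linear layer with a LoRA update.

    W: frozen base weight, shape (d_out, d_in)
    A: trainable down-projection, shape (r, d_in), random init
    B: trainable up-projection, shape (d_out, r), zero init
    """
    r = A.shape[0]
    # y = x W^T + (alpha / r) * x (B A)^T
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
x = rng.normal(size=(3, d_in))
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))  # zero init: the adapter starts as a no-op

y = lora_forward(x, W, A, B)
# With B initialized to zero, the output equals the frozen base layer's.
assert np.allclose(y, x @ W.T)
```

Because B starts at zero, training begins exactly at the pretrained model; only A and B (2·r·d parameters instead of d²) receive gradients.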
https://github.com/dvgodoy/llm-visuals
Over 60 figures and diagrams of LLMs, quantization, low-rank adapters (LoRA), and chat templates, free to use in your blog posts, slides, presentations, or papers.
bf16 chat-template data-types fine-tuning fine-tuning-llm hugging-face llm llms lora low-rank-adaptation quantization sft supervised-learning
Last synced: 07 May 2025
https://github.com/samadpls/sentimentfinetuning
An efficiently fine-tuned large language model (LLM) for sentiment analysis on the IMDB dataset.
finetuning huggingface llm low-rank-adaptation opensource sentiment-analysis transformer
Last synced: 06 Apr 2025
https://github.com/monk1337/nanopeft
A simple, neat implementation of various LoRA methods for training/fine-tuning Transformer-based models (e.g., BERT, GPTs). [For research purposes]
huggingface llama llm lora low-rank-adaptation mistral peft qlora quantization
Last synced: 17 Mar 2025
https://github.com/kreasof-ai/homunculus-project
A long-term project building a custom AI architecture, incorporating cutting-edge machine-learning techniques such as Flash-Attention, Grouped-Query Attention, ZeRO-Infinity, BitNet, etc.
bitnet deep-learning flash-attention jupyter-notebook large-language-models low-rank-adaptation machine-learning python pytorch pytorch-lightning transformer vision-transformer
Last synced: 17 Dec 2024
https://github.com/masaok/machine-learning-notes
Machine Learning Notes 2025 (LoRA, etc.)
clustering low-rank-adaptation machine-learning machine-learning-algorithms ml pattern-search reinforcement-learning
Last synced: 29 Jan 2025
https://github.com/work-nobu/ohanashigpt
OhanashiGPT is an application that generates personalized children's stories based on parameters like age and preferences. It narrates these stories using an AI-generated voice that mimics a parent, trained on their audio samples. The app also creates illustrations to accompany each story, providing a unique and engaging experience for children.
ai audio-generation data-science image-generation large-language-models llama3 llamacpp lora low-rank-adaptation stable-diffusion text-generation xtts
Last synced: 24 Jan 2025
https://github.com/abhiram-kandiyana/error-explainer
Explaining programming errors using open-source LLMs.
code-llama few-shot-prompting finetuning-llms low-rank-adaptation text-alignment
Last synced: 26 Mar 2025
https://github.com/ruvenguna94/dialogue-summary-peft-fine-tuning
This notebook fine-tunes the FLAN-T5 model for dialogue summarization, comparing full fine-tuning with Parameter-Efficient Fine-Tuning (PEFT). It evaluates performance using ROUGE metrics, demonstrating PEFT's efficiency while achieving competitive results.
dialogue-summarization fine-tuning flan-t5 generative-ai hugging-face lora low-rank-adaptation natural-language-processing nlp parameter-efficient-fine-tuning peft pytorch rouge
Last synced: 06 Apr 2025
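The efficiency gap that PEFT comparisons like the one above demonstrate is easy to quantify per weight matrix (a back-of-the-envelope sketch; the 512×512 layer size is illustrative, not FLAN-T5's actual configuration):

```python
def lora_param_counts(d_out, d_in, r):
    """Trainable parameters for one weight matrix: full fine-tuning vs. LoRA."""
    full = d_out * d_in        # update every entry of W
    lora = r * (d_out + d_in)  # only the factors B (d_out x r) and A (r x d_in)
    return full, lora

full, lora = lora_param_counts(512, 512, r=8)
print(full, lora, f"{lora / full:.1%}")  # 262144 8192 3.1%
```

At rank 8 the adapter trains about 3% of the parameters of a full update for this layer, which is why PEFT can stay competitive on ROUGE while touching far less memory.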
https://github.com/satyampurwar/large-language-models
Unlocking the Power of Generative AI: In-Context Learning, Instruction Fine-Tuning and Reinforcement Learning Fine-Tuning.
bert conda-environment encoder-decoder-model encoder-model few-shot-prompting flan-t5 generative-ai instruction-fine-tuning kl-divergence large-language-models low-rank-adaptation megacmd memory-management model-quantization peft-fine-tuning-llm prompt-engineering proximal-policy-optimization reinforcement-learning-from-ai-feedback reinforcement-learning-from-human-feedback storage-management
Last synced: 21 Feb 2025
https://github.com/pavansomisetty21/fine-tuning-mlp-model-with-lora-and-dora
Fine-tuning an MLP with LoRA and DoRA on the CIFAR-10 dataset.
cifar-10 cifar10 dora finetuning lora low-rank-adaptation mlp
Last synced: 27 Feb 2025
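DoRA, the second method this repo covers, extends LoRA by decomposing the weight into a trainable magnitude vector and a normalized direction matrix (a minimal NumPy sketch of the reparameterization, following the DoRA paper rather than this repo's code):

```python
import numpy as np

def dora_weight(W0, A, B, m, eps=1e-8):
    """DoRA: W' = m * (W0 + B A) / ||W0 + B A||_col.

    W0: frozen base weight (d_out, d_in)
    A (r, d_in), B (d_out, r): LoRA factors applied to the direction
    m: trainable magnitude vector (d_in,), initialized to W0's column norms
    """
    V = W0 + B @ A                        # LoRA-updated direction
    col_norm = np.linalg.norm(V, axis=0)  # per-column L2 norm
    return m * V / (col_norm + eps)

rng = np.random.default_rng(0)
d_out, d_in, r = 4, 6, 2
W0 = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))        # zero init, as in LoRA
m = np.linalg.norm(W0, axis=0)  # magnitude init: ||W0||_col

# With B = 0, the reparameterized weight reduces to W0.
assert np.allclose(dora_weight(W0, A, B, m), W0, atol=1e-6)
```

Separating magnitude from direction lets the adapter change a column's scale and its orientation independently, which is DoRA's stated advantage over plain LoRA.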
https://github.com/pinsaraperera/fine-tuning-bert-with-low-rank-adaptation
A BERT Base (110M-parameter) model fine-tuned with Low-Rank Adaptation (LoRA) on a dataset of URLs labeled as either phishing or legitimate.
bert-fine-tuning low-rank-adaptation phishing-detection tokenizer
Last synced: 16 Apr 2025
https://github.com/architj6/llama2-finetuning
🦙 Llama2-FineTuning: fine-tune Llama 2 on custom datasets using LoRA and QLoRA techniques.
bitsandbytes fine-tuning fine-tuning-llama2 fine-tuning-llm google-colab huggingface large-language-models llama2 lora low-rank-adaptation nlp peft pytorch qlora quantization supervised-fine-tuning text-generation transformer-reinforcement-learning transformers
Last synced: 19 Apr 2025
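QLoRA, used by the repo above, combines quantized frozen base weights with LoRA adapters trained in higher precision. The quantization side can be illustrated with a simple symmetric absmax scheme (a sketch only; QLoRA itself uses the NF4 data type with block-wise scaling via bitsandbytes, not plain absmax int4):

```python
import numpy as np

def absmax_quantize(w, bits=4):
    """Symmetric absmax quantization to signed ints in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (bits - 1) - 1            # 7 for 4-bit
    scale = np.abs(w).max() / qmax        # one scale per tensor (QLoRA: per block)
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for the forward pass."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=16).astype(np.float32)
q, scale = absmax_quantize(w)
w_hat = dequantize(q, scale)

# Round-to-nearest bounds the error by half a quantization step.
assert np.all(np.abs(w - w_hat) <= scale / 2 + 1e-6)
```

The frozen weights are stored in this compressed form and dequantized on the fly, while the LoRA factors stay in 16-bit precision and carry all the gradient updates.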
https://github.com/reshalfahsi/qa-gpt2-lora
Question answering with GPT-2, parameter-efficiently fine-tuned (PEFT) using LoRA.
gpt-2 huggingface lora low-rank-adaptation nlp peft question-answering squad-dataset
Last synced: 01 Apr 2025