awesome-llm-finetuning
Resources to aid your learning of LLM finetuning.
https://github.com/ushakrishnan/awesome-llm-finetuning
Is (efficient) LLM finetuning the key to production-ready solutions?
- Generative AI — LLMOps Architecture Patterns
- Ludwig - Declarative deep learning framework built for scale and efficiency. Ludwig is a low-code framework for building custom AI models like LLMs and other deep neural networks. Learn when and how to use Ludwig!
- Fine-tuning OpenLLaMA-7B with QLoRA for instruction following - With the availability of powerful base LLMs (e.g. LLaMA, Falcon, MPT, etc.) and instruction-tuning datasets, along with the development of LoRA and QLoRA, instruction fine-tuning a base model is increasingly accessible to more people and organizations.
- Personal Copilot: Train Your Own Coding Assistant (training code: LLM-Workshop/tree/main/personal_copilot/training) - In this blog post we show how we created HugCoder, a code LLM fine-tuned on the code contents from the public repositories of the huggingface GitHub organization.
- **(Beginner)** RAG vs Finetuning
- Optimizing LLMs - A Step-by-Step Guide to Fine-Tuning with PEFT and QLoRA (a practical guide to fine-tuning LLMs using QLoRA).
- A guide to parameter efficient fine-tuning (PEFT) - How to use parameter-efficient methods to fine-tune LLMs on downstream tasks (a minimal QLoRA sketch in this spirit appears after this list).
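Several entries above (the OpenLLaMA-7B QLoRA write-up, the PEFT/QLoRA step-by-step guide, and the PEFT overview) follow the same basic recipe: load the base model in 4-bit, attach LoRA adapters, and train only the adapters. The sketch below is a minimal, hedged illustration of that recipe using the Hugging Face transformers, peft, bitsandbytes, and datasets libraries; the base model, dataset, and hyperparameters are placeholders, not values taken from any of the articles.

```python
# Minimal QLoRA fine-tuning sketch (assumes transformers, peft, bitsandbytes,
# and datasets are installed; model and dataset names are placeholders).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "openlm-research/open_llama_7b"  # placeholder base model

# Load the frozen base weights in 4-bit NF4 precision, the core idea of QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; only these are updated during training.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# A small slice of an instruction dataset (placeholder) for a quick training pass.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1%]")
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qlora-openllama-7b", per_device_train_batch_size=1,
        gradient_accumulation_steps=8, num_train_epochs=1,
        learning_rate=2e-4, logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```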
Videos
- Efficient Fine-Tuning for Llama-v2-7b on a Single GPU - How anyone can efficiently fine-tune Llama 2 on a single GPU for their own projects.
- Efficiently Build Custom LLMs on Your Data with Open-source Ludwig
- Introduction to Ludwig - multi-model sample
- **(Beginner)** Build Your Own LLM in Less Than 10 Lines of YAML (Predibase) - How declarative ML simplifies model building and training; how to use off-the-shelf pretrained LLMs with Ludwig, the open-source declarative ML framework from Uber; and how to rapidly fine-tune an LLM on your data in less than 10 lines of code with Ludwig using parameter-efficient methods, DeepSpeed, and Ray (a minimal Ludwig config sketch appears after this list).
- **(Beginner)** To Fine Tune or Not Fine Tune? That is the question - Wondering about fine-tuning models? Curious about when it's the right move for customers? Look no further; we've got you covered!
- Ludwig: A Toolbox for Training and Testing Deep Learning Models without Writing Code
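To make the Ludwig entries above concrete, here is a minimal sketch of a declarative fine-tuning run through Ludwig's Python API (ludwig.api.LudwigModel), mirroring the "less than 10 lines of YAML" idea from the Predibase video. The base model, column names, dataset path, and epoch count are illustrative, and the config keys assume a recent Ludwig release (0.8 or later); check the Ludwig docs for your version.

```python
# Minimal declarative fine-tuning sketch with Ludwig's Python API.
# Config keys and values are illustrative placeholders.
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",                 # placeholder base model
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    "adapter": {"type": "lora"},                               # parameter-efficient fine-tuning
    "quantization": {"bits": 4},                               # QLoRA-style 4-bit loading
    "trainer": {"type": "finetune", "epochs": 1},
}

model = LudwigModel(config)
# train() accepts a pandas DataFrame or a path to a CSV/Parquet file
# whose columns match the feature names in the config.
train_stats, _, _ = model.train(dataset="instructions.csv")
```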
Whitepapers
- LoRA: Low-Rank Adaptation of Large Language Models (a minimal sketch of the low-rank update appears after this list).
- QLORA: Efficient Finetuning of Quantized LLMs - An efficient finetuning approach that reduces memory usage enough to finetune a 65B-parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance.
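The LoRA paper listed above freezes the pretrained weight matrix W and trains only a low-rank update delta_W = B A with rank r much smaller than the layer width, scaled by alpha/r. The following is a minimal PyTorch sketch of that idea (not the paper's reference implementation); shapes, initialization scale, and hyperparameters are illustrative.

```python
# Minimal LoRA linear layer: the pretrained weight stays frozen and only the
# rank-r factors A and B are trained, so the effective weight is
# W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                      # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))    # zero init => no change at start
        self.scaling = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 2 * 8 * 4096 trainable vs 4096 * 4096 frozen
```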