Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-llm-finetuning
Resources to aid your learning of LLM finetuning.
https://github.com/ushakrishnan/awesome-llm-finetuning
Last synced: 2 days ago
Is (efficient) LLM Finetuning the key to production-ready solutions?
- Generative AI — LLMOps Architecture Patterns
- Continuous Delivery for Machine Learning
Code Samples
- Parameter-Efficient Fine-Tuning (PEFT) - fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs, since fully fine-tuning large-scale PLMs is prohibitively costly (see the sketch after this list)
- QLoRA: Efficient Finetuning of Quantized LLMs - an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance
- Ludwig - low-code framework for building custom LLMs, neural networks, and other AI models
- Curated list of the best LLMOps tools for developers
- **(Beginner)** Ludwig 0.8: Hands On Webinar - efficient fine-tuning techniques make training LLMs with Ludwig fast and easy
- Fine-tuning LLMs with PEFT and LoRA - Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware
- **(Beginner)** Fine-tuning OpenLLaMA-7B with QLoRA for instruction following - Tried and tested sample - With the availability of powerful base LLMs (e.g. LLaMA, Falcon, MPT, etc.) and instruction tuning datasets, along with the development of LoRA and QLoRA, instruction fine-tuning a base model is increasingly accessible to more people/organizations.
- Awesome Production Machine Learning - This repository contains a curated list of awesome open source libraries that will help you deploy, monitor, version, scale and secure your production machine learning
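
Below is a minimal sketch of the PEFT/LoRA pattern the entries above describe, using the Hugging Face `transformers` and `peft` libraries. The base model name, rank, and target modules are illustrative placeholders rather than values taken from any of the linked samples.

```python
# Minimal PEFT/LoRA sketch (model name and hyperparameters are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "facebook/opt-350m"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA injects small trainable rank-decomposition matrices into attention
# projections; the pretrained base weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # model-dependent
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

From here training proceeds with a standard `transformers` Trainer or a plain PyTorch loop; only the adapter weights are updated and saved, which is what keeps compute and storage costs low.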
Reading Materials
- **(Beginner)** RAG vs Finetuning
- **(Beginner)** RAG, Fine-tuning or Both? - A Complete Framework for Choosing the Right Strategy
- Parameter-Efficient Fine-Tuning (PEFT) - fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs, since fully fine-tuning large-scale PLMs is prohibitively costly
- Ludwig - Declarative deep learning framework - [Code sample](https://colab.research.google.com/drive/1c3AO8l_H6V_x37RwQ8V7M6A-RmcBf2tG?usp=sharing) - Built for scale and efficiency, Ludwig is a low-code framework for building custom AI models like LLMs and other deep neural networks. Learn when and how to use Ludwig!
- Fine-tuning OpenLLaMA-7B with QLoRA for instruction following - With the availability of powerful base LLMs (e.g. LLaMA, Falcon, MPT, etc.) and instruction tuning datasets, along with the development of LoRA and QLoRA, instruction fine-tuning a base model is increasingly accessible to more people/organizations.
- Personal Copilot: Train Your Own Coding Assistant - (training code: .../LLM-Workshop/tree/main/personal_copilot/training) - In this blog post we show how we created HugCoder, a code LLM fine-tuned on the code contents from the public repositories of the huggingface GitHub organization.
- Optimizing LLMs - A Step-by-Step Guide to Fine-Tuning with PEFT and QLoRA (A Practical Guide to Fine-Tuning LLM using QLoRA); see the 4-bit setup sketch after this list
- A guide to parameter efficient fine-tuning (PEFT) - fine-tune LLMs on downstream tasks.
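
Several of the guides above cover the PEFT + QLoRA recipe; the sketch below shows the usual setup of loading a base model in 4-bit NF4 with `bitsandbytes` and attaching LoRA adapters. The model name and hyperparameters are placeholders, not values taken from any specific article.

```python
# QLoRA-style setup sketch: 4-bit quantized base model + LoRA adapters.
# Model name and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model, prepare_model_for_kbit_training

base_model = "openlm-research/open_llama_7b"  # placeholder; any causal LM works

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4, as proposed in the QLoRA paper
    bnb_4bit_use_double_quant=True,      # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

# Prepare the quantized model for k-bit training, then attach LoRA adapters;
# only the adapters are trained in 16-bit while the base stays in 4-bit.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(
    model,
    LoraConfig(task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32, lora_dropout=0.05),
)
model.print_trainable_parameters()
```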
Videos
- Efficient Fine-Tuning for Llama-v2-7b on a Single GPU - how attendees can fine-tune Llama 2 for their own projects.
- Efficiently Build Custom LLMs on Your Data with Open-source Ludwig
- Introduction to Ludwig - multi-model sample
- **(Beginner)** Build Your Own LLM in Less Than 10 Lines of YAML (Predibase) - How declarative ML simplifies model building and training; how to use off-the-shelf pretrained LLMs with Ludwig - the open-source declarative ML framework from Uber; how to rapidly fine-tune an LLM on your data in less than 10 lines of code with Ludwig using parameter-efficient methods, DeepSpeed and Ray (see the config sketch after this list)
- **(Beginner)** To Fine Tune or Not Fine Tune? That is the question - Thinking about fine-tuning models? Curious about when it's the right move for customers? Look no further; we've got you covered!
- Ludwig: A Toolbox for Training and Testing Deep Learning Models without Writing Code
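
To make the "few lines of declarative config" idea from the Ludwig videos concrete, here is a sketch of an LLM fine-tuning config passed to `LudwigModel` as a Python dict. The schema approximates Ludwig 0.8's LLM fine-tuning examples and should be checked against the current Ludwig docs; the base model and dataset path are placeholders.

```python
# Sketch of declarative LLM fine-tuning with Ludwig (schema approximated from
# Ludwig 0.8 examples; verify field names against the Ludwig documentation).
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "openlm-research/open_llama_7b",           # placeholder base model
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    "adapter": {"type": "lora"},                              # parameter-efficient fine-tuning
    "quantization": {"bits": 4},                              # QLoRA-style 4-bit base weights
    "trainer": {"type": "finetune", "epochs": 1, "batch_size": 1},
}

model = LudwigModel(config=config)
# Train on a local instruction-tuning file with 'instruction' and 'response' columns.
results = model.train(dataset="instruction_data.csv")        # placeholder dataset path
```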
Whitepapers
- LoRA: Low-Rank Adaptation of Large Language Models (see the low-rank update sketch below)
- QLoRA: Efficient Finetuning of Quantized LLMs - an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance
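
The core idea of the LoRA paper is that the fine-tuning weight update is parameterized as a low-rank product, h = W0 x + B A x, with B initialized to zero so training starts from the pretrained behavior. The NumPy sketch below illustrates the update and the parameter savings; the dimensions are arbitrary examples, not values from the paper.

```python
# LoRA's low-rank update, sketched with NumPy (dimensions are arbitrary examples).
import numpy as np

d, k, r = 4096, 4096, 8           # weight shape d x k, LoRA rank r
W0 = np.random.randn(d, k)        # frozen pretrained weight
A = np.random.randn(r, k) * 0.01  # trainable, small random init
B = np.zeros((d, r))              # trainable, zero init => the update starts at 0

x = np.random.randn(k)
h = W0 @ x + B @ (A @ x)          # forward pass: h = W0 x + B A x

full_params = d * k               # parameters trained by full fine-tuning
lora_params = r * (d + k)         # parameters LoRA actually trains
print(f"full: {full_params:,}  lora: {lora_params:,}  ratio: {lora_params / full_params:.4%}")
```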
Keywords
llm (2), machine-learning (2), pytorch (2), awesome-list (2), mlops (2), deep-learning (2), mistral (1), machinelearning (1), llm-training (1), llama2 (1), llama (1), learning (1), fine-tuning (1), deeplearning (1), deep (1), data-science (1), data-centric (1), computer-vision (1), transformers (1), python (1), parameter-efficient-learning (1), lora (1), diffusion (1), responsible-ai (1), production-ml (1), production-machine-learning (1), privacy-preserving-ml (1), privacy-preserving-machine-learning (1), privacy-preserving (1), ml-ops (1), ml-operations (1), machine-learning-operations (1), large-scale-ml (1), large-scale-machine-learning (1), interpretability (1), explainability (1), data-mining (1), awesome (1), llmops (1), ai-development-tools (1), neural-network (1), natural-language-processing (1), natural-language (1), ml (1), adapter (1)