awesome-LLMs-finetuning
A collection of resources for fine-tuning Large Language Models (LLMs).
https://github.com/pdaicode/awesome-LLMs-finetuning
1. LLM Performance & Concepts
- AlpacaEval Leaderboard - An Automatic Evaluator for Instruction-following Language Models
- Open Ko-LLM Leaderboard - Objectively evaluates the performance of Korean Large Language Models (LLMs).
- Yet Another LLM Leaderboard - Leaderboard made with LLM AutoEval using the Nous benchmark suite.
- Open LLM Leaderboard - Aims to track, rank, and evaluate LLMs and chatbots as they are released.
- Chatbot Arena Leaderboard - A benchmark platform for large language models (LLMs) that features anonymous, randomized battles in a crowdsourced manner.
- OpenCompass 2.0 LLM Leaderboard - OpenCompass is an LLM evaluation platform supporting a wide range of models (InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets (a minimal scoring sketch follows this list).
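Most of these leaderboards score multiple-choice benchmarks by comparing the log-likelihood a model assigns to each candidate answer. The sketch below illustrates that idea with Hugging Face transformers; it is a simplified illustration, and the model name and toy question are placeholders rather than anything taken from the leaderboards above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM from the Hub works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

question = "The capital of France is"
choices = [" Paris", " Berlin", " Madrid"]

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` after `prompt`."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # The token at position i is predicted from positions < i, so shift by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    per_token = log_probs[torch.arange(targets.shape[0]), targets]
    cont_len = full_ids.shape[1] - prompt_len  # assumes the prompt tokenizes identically on its own
    return per_token[-cont_len:].sum().item()

scores = {c: continuation_logprob(question, c) for c in choices}
print(max(scores, key=scores.get))  # the model's preferred answer
```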
Kaggle & Colab Notebooks
Courses & Lectures
2. LLM Backbones
Blogs
- InternVL
- InternLM2
- Qwen
- Vicuna-13B - An open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- OPT - Open Pre-trained Transformer Language Models by Meta AI, a series of open-sourced large causal language models with performance similar to GPT-3.
- Gemma - A transformer-based large language model developed by Google AI (2B, 7B).
- Chinchilla
- Mistral AI
- Adept
- Fuyu
- PanGu-α - A 200B-parameter autoregressive pretrained Chinese language model developed by Huawei Noah's Ark Lab, MindSpore Team and Peng Cheng Laboratory.
- PaLM - A transformer-based large language model (Pathways Language Model) developed by Google AI (a loading sketch follows this list).
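Many of the open backbones listed above are published as Hugging Face checkpoints and load with a few lines of transformers code. A minimal sketch, using the Mistral 7B checkpoint ID purely as an example (any other causal LM hub ID loads the same way):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # example ID; Qwen, InternLM2, Gemma, etc. load the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to fit on a single GPU
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Fine-tuning a large language model means"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```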
Multi-Modal LLMs
3. LLM and Applications
4. Fine-Tuning
Papers
Frameworks
- Ollama
- LlamaIndex
- Petals - Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (7768 stars)
- LLaMA-Factory - Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM3). (5532 stars)
- H2O LLM Studio - No-code GUI for fine-tuning LLMs. Documentation: [https://h2oai.github.io/h2o-llmstudio/](https://h2oai.github.io/h2o-llmstudio/) (2880 stars)
- Phoenix - Evaluate, troubleshoot, and fine-tune your LLM, CV, and NLP models in a notebook. (1596 stars)
- LLM-Adapters - Code for the paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models". (769 stars)
- Platypus - Code for fine-tuning Platypus family LLMs using LoRA. (589 stars)
- xtuner - A toolkit for efficiently fine-tuning LLMs (InternLM, Llama, Baichuan, QWen, ChatGLM2). (540 stars)
- DB-GPT-Hub - Models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance, especially in Text-to-SQL; achieved higher execution accuracy than GPT-4 in the Spider evaluation using a 13B LLM with this project. (422 stars)
- LLM-Finetuning-Hub - Repository that contains LLM fine-tuning and deployment scripts along with our research findings. :star: 416
- Finetune_LLMs - Repo for fine-tuning causal LLMs. :star: 391
- llmware - Enterprise-grade LLM-based development framework, tools, and fine-tuned models. :star: 289
- LLM-Kit
- h2o-wizardlm - Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning. :star: 228
- hcgf - Humanable Chat Generative-model Fine-tuning (LLM fine-tuning). :star: 196
- llm_qlora - Fine-tuning LLMs using QLoRA (see the QLoRA sketch after this list). :star: 136
- awesome-llm-human-preference-datasets - A curated list of human preference datasets for LLM fine-tuning, RLHF, and eval. :star: 124
- llm_finetuning - Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes). :star: 114
- lit-gpt - Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed. (3469 stars)
- MFTCoder - Multi-task fine-tuning framework for Code LLMs; the industry's first high-accuracy, high-efficiency fine-tuning framework for large-model code capabilities, with multi-task, multi-model, and multi-training-algorithm support. :star: 337
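Several of the frameworks above (for example LLaMA-Factory, xtuner, and llm_qlora) build on the same parameter-efficient recipe: load the base model in low precision and train small LoRA adapters on top of it. The sketch below shows a minimal QLoRA-style setup with transformers, peft, and bitsandbytes; it is not taken from any specific framework listed here, the base model ID is a placeholder, and the `target_modules` names vary by architecture.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder; gated model, requires Hub access

# Load the frozen base model in 4-bit NF4 to cut memory roughly 4x vs. fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters to the attention projections.
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names differ between architectures
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
# From here the model can be handed to a standard Trainer / SFT loop.
```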
This repo is based on the following resources
Frameworks
5. Tools & Software
Frameworks
- LLaMA Efficient Tuning - Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon).
- H2O LLM Studio - No-code GUI for fine-tuning LLMs.
- PEFT - Parameter-Efficient Fine-Tuning (PEFT) methods for efficient adaptation of pre-trained language models to downstream applications (see the adapter-loading sketch after this list).
- ChatGPT-like model - Run a ChatGPT-like model locally on your device.
- Petals - Run BLOOM-176B collaboratively, allowing you to load a small part of the model and team up with others for inference or fine-tuning. 🌸
- NVIDIA NeMo - A toolkit for building state-of-the-art conversational AI models, specifically designed for Linux. 🚀
- H2O LLM Studio - A no-code GUI tool for fine-tuning large language models on Windows. 🎛️
- Ludwig AI - A low-code framework for building custom LLMs and other deep neural networks. Easily train state-of-the-art LLMs with a declarative YAML configuration file. 🤖
- bert4torch - A PyTorch implementation of transformers that loads open-source large model weights for inference and fine-tuning. 🔥
- Alpaca.cpp - Run a ChatGPT-like model locally on your device. A combination of the LLaMA foundation model and an open reproduction of Stanford Alpaca for instruction-tuned fine-tuning. 🦙
- promptfoo
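After fine-tuning with a PEFT-based tool like those above, the resulting LoRA adapter is usually loaded on top of the base model (and optionally merged into it) for inference. A minimal sketch, assuming a hypothetical adapter directory produced by a training run; the model ID and adapter path are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model
adapter_path = "./my-lora-adapter"          # placeholder: output directory of a fine-tuning run

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()  # fold the LoRA weights into the base model for plain inference

inputs = tokenizer("Summarize: fine-tuning adapts a pretrained LLM to a task.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```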
Keywords
llm (16), large-language-models (8), gpt (8), fine-tuning (8), llama (7), llama2 (6), chatbot (6), nlp (4), rlhf (4), chatgpt (3), pretrained-models (3), llava (3), falcon (3), llm-training (3), llama3 (3), datasets (3), qwen (2), phi3 (2), qlora (2), peft (2), multimodal (2), agents (2), finetuning (2), mistral (2), chatglm2 (2), lora (2), instruction-tuning (2), gpt-4 (2), chatglm (2), ai (2), flash-attention (2), agent (2), chinese (2), foundation-models (2), vision-language-model (2), deep-learning (2), machine-learning (2), mixtral (2), bloom (1), llms (1), distributed-systems (1), golang (1), go (1), gemma2 (1), gemma (1), vector-database (1), ollama (1), rag (1), phi4 (1), multi-agents (1)