Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/SunLemuria/OpenGPTAndBeyond
Open efforts to implement ChatGPT-like models and beyond.
- Host: GitHub
- URL: https://github.com/SunLemuria/OpenGPTAndBeyond
- Owner: SunLemuria
- Created: 2023-03-31T13:48:45.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-23T09:17:34.000Z (4 months ago)
- Last Synced: 2024-10-26T20:32:16.713Z (15 days ago)
- Topics: alpaca, chatbot, chatglm, chatgpt, large-language-models, llm, nlp, openai, opensource
- Homepage:
- Size: 241 KB
- Stars: 104
- Watchers: 5
- Forks: 14
- Open Issues: 0
- Metadata Files:
    - Readme: README.md
README
# ChatGPT: Open Source and Beyond
Simplified Chinese | English
The road to implementing, and going beyond, open-source ChatGPT-like models.
Ever since the LLaMA weights were accidentally leaked and Stanford Alpaca achieved impressive results by instruction-tuning LLaMA on data constructed from the GPT-3 API via self-instruct, the open-source community has grown increasingly confident that ChatGPT-level large language models are within reach.
This repo records this process of reproduction and beyond, providing an overview for the community.
It covers: related technical progress, base models, domain models, training, inference, techniques, data, multilingual, multi-modal, and more.
# 目录
- [Base Models](#base-models)
- [Domain Models](#domain-models)
- [General Domain Instruction Models](#general-domain-instruction-models)
- [Model Merging](#model-merging)
- [Alternatives To Transformer](#alternatives-to-transformer)
- [Multi-Modal](#multi-modal)
- [MoE](#moe)
- [Data](#data)
- [Pretrain Data](#pretrain-data)
- [Instruction Data](#instruction-data)
- [Synthetic Data Generation](#synthetic-data-generation)
- [Evaluation](#evaluation)
- [Benchmark](#benchmark)
- [LeaderBoard](#leaderboard)
- [Framework/ToolKit/Platform](#frameworktoolkitplatform)
- [Alignment](#alignment)
- [Multi-Language](#multi-language)
- [vocabulary expansion](#vocabulary-expansion)
- [Efficient Training/Fine-Tuning](#efficient-trainingfine-tuning)
- [Low-Cost Inference](#low-cost-inference)
- [quantization](#quantization)
- [projects](#projects)
- [Prompt Compression](#prompt-compression)
- [Prompting](#prompting)
- [Safety](#safety)
- [Truthfulness](#truthfulness)
- [Exceeding Context Window](#exceeding-context-window)
- [Knowledge Editing](#knowledge-editing)
- [Implementations](#implementations)
- [External Knowledge](#external-knowledge)
- [AI Search Engines](#ai搜索引擎)
- [Chat with Docs](#chat-with-docs)
- [Content Parsing](#内容解析)
- [Vector DataBase](#vector-database)
- [External Tools](#external-tools)
- [Using Existing Tools](#using-existing-tools)
- [Make New Tools](#make-new-tools)
- [Agent](#agent)
- [LLMs as XXX](#llms-as-xxx)
- [Similar Collections](#similar-collections)

# Base Models
| contributor | model/project | license | language | main feature |
| ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Meta | [LLaMA/LLaMA2](https://github.com/facebookresearch/llama) | | multi | LLaMA-13B outperforms GPT-3 (175B) and LLaMA-65B is competitive with PaLM-540B. Base model for most follow-up works. |
| HuggingFace-BigScience | [BLOOM](https://huggingface.co/bigscience/bloom) | | multi | an autoregressive Large Language Model (LLM) trained by HuggingFace BigScience. |
| HuggingFace-BigScience | [BLOOMZ](https://huggingface.co/bigscience/bloomz) | | multi | instruction-finetuned versions of the BLOOM & mT5 pretrained multilingual language models, trained on a crosslingual task mixture. |
| EleutherAI | [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6b) | | en | transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). |
| Meta | [OPT](https://huggingface.co/facebook/opt-66b) | | en | Open Pre-trained Transformer Language Models; the aim of this suite of OPT models is to enable reproducible and responsible research at scale, and to bring more voices to the table in studying the impact of these LLMs. |
| [Cerebras Systems](https://www.cerebras.net/) | [Cerebras-GPT](https://huggingface.co/cerebras/Cerebras-GPT-13B) | | en | pretrained GPT-3-like LLM, commercially available, efficiently trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer, trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter), which is compute-optimal. |
| EleutherAI | [pythia](https://github.com/EleutherAI/pythia) | | en | combines interpretability analysis and scaling laws to understand how knowledge develops and evolves during training in autoregressive transformers. |
| Stability-AI | [StableLM](https://github.com/Stability-AI/StableLM) | | en | Stability AI Language Models |
| FDU | [MOSS](https://github.com/OpenLMLab/MOSS) | | en/zh | An open-source tool-augmented conversational language model from Fudan University. |
| ssymmetry & FDU | [BBT-2](https://bbt.ssymmetry.com/) | | zh | 12B open-source LM. |
| @mlfoundations | [OpenFlamingo](https://github.com/mlfoundations/open_flamingo) | | en | An open-source framework for training large multimodal models. |
| EleutherAI | [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b) | | en | Its architecture intentionally resembles that of GPT-3, and is almost identical to that of [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B). |
| UCB | [OpenLLaMA](https://github.com/openlm-research/open_llama) | Apache-2.0 | en | An Open Reproduction of LLaMA. |
| MosaicML | [MPT](https://github.com/mosaicml/llm-foundry) | Apache-2.0 | en | MPT-7B is a GPT-style model, and the first in the MosaicML Foundation Series of models. Trained on 1T tokens of a MosaicML-curated dataset, MPT-7B is open-source, commercially usable, and equivalent to LLaMA-7B on evaluation metrics. |
| TogetherComputer | [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) | Apache-2.0 | en | A 2.8B parameter pretrained language model, pretrained on [RedPajama-Data-1T](https://huggingface.co/models?dataset=dataset:togethercomputer/RedPajama-Data-1T), together with an [Instruction-tuned Version](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1) and a [Chat Version](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1). |
| Lightning-AI | [Lit-LLaMA](https://github.com/Lightning-AI/lit-llama) | Apache-2.0 | - | Independent implementation of [LLaMA](https://github.com/facebookresearch/llama) that is fully open source under the **Apache 2.0 license**. |
| @conceptofmind | [PaLM](https://github.com/conceptofmind/PaLM) | MIT License | en | An open-source implementation of Google PaLM models. |
| [TII](https://www.tii.ae/) | [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) | [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b/blob/main/LICENSE.txt) | en | a 7B parameter causal decoder-only model built by [TII](https://www.tii.ae/) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. |
| [TII](https://www.tii.ae/) | [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | [TII Falcon LLM License](https://huggingface.co/tiiuae/falcon-7b/blob/main/LICENSE.txt) | multi | a 40B parameter causal decoder-only model built by [TII](https://www.tii.ae/) and trained on 1,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. |
| TigerResearch | [TigerBot](https://github.com/TigerResearch/TigerBot) | Apache-2.0 | en/zh | a multi-language and multitask LLM. |
| BAAI | [Aquila](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila) / [Aquila2](https://github.com/FlagAI-Open/Aquila2) | [BAAI_Aquila_Model_License](https://github.com/FlagAI-Open/FlagAI/blob/master/BAAI_Aquila_Model_License.pdf) | en/zh | The Aquila language model inherits the architectural design advantages of GPT-3 and LLaMA, replacing a batch of underlying operator implementations with more efficient ones and redesigning the tokenizer for Chinese-English bilingual support. |
| OpenBMB | [CPM-Bee](https://github.com/OpenBMB/CPM-Bee) | [通用模型许可协议-来源说明-宣传限制-商业授权](https://github.com/OpenBMB/General-Model-License/blob/main/%E9%80%9A%E7%94%A8%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE-%E6%9D%A5%E6%BA%90%E8%AF%B4%E6%98%8E-%E5%AE%A3%E4%BC%A0%E9%99%90%E5%88%B6-%E5%95%86%E4%B8%9A%E6%8E%88%E6%9D%83.md) | en/zh | **CPM-Bee** is a fully open-source, commercially usable Chinese-English bilingual base model with ten billion parameters, pre-trained on an extensive corpus of trillion-scale tokens. |
| Baichuan | [baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) | Apache-2.0 | en/zh | It has achieved the best performance among models of the same size on standard Chinese and English authoritative benchmarks (C-EVAL, MMLU, etc.). |
| Tencent | [lyraChatGLM](https://huggingface.co/TMElyralab/lyraChatGLM) | MIT License | en/zh | To the best of our knowledge, it is the **first accelerated version of ChatGLM-6B**. The inference speed of lyraChatGLM has achieved **300x** acceleration over the early original version. We are still working hard to further improve the performance. |
| SalesForce | [XGen](https://github.com/salesforce/xgen) | Apache-2.0 | multi | Salesforce open-source LLMs with 8k sequence length |
| Shanghai AI Lab | [InternLM](https://github.com/InternLM/InternLM) | Apache-2.0 | en/zh | InternLM has open-sourced a 7 billion parameter base model and a chat model tailored for practical scenarios. The model leverages trillions of high-quality tokens for training to establish a powerful knowledge base, supports an 8k context window length (enabling longer input sequences and stronger reasoning capabilities), and provides a versatile toolset for users to flexibly build their own workflows. |
| xverse-ai | [XVERSE](https://github.com/xverse-ai) | Apache-2.0 | multi | Multilingual LLMs developed by XVERSE Technology Inc. |
| Writer | [palmyra](https://huggingface.co/Writer/palmyra-base) | Apache-2.0 | en | extremely powerful while being extremely fast; this model excels at many nuanced tasks such as sentiment classification and summarization. |
| Mistral AI | [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) | Apache-2.0 | en | Mistral 7B is a 7.3B parameter model that: 1. outperforms Llama 2 13B on all benchmarks; 2. outperforms Llama 1 34B on many benchmarks; 3. approaches CodeLlama 7B performance on code, while remaining good at English tasks; 4. uses grouped-query attention (GQA) for faster inference; 5. uses sliding window attention (SWA) to handle longer sequences at smaller cost. |
| SkyworkAI | [Skywork](https://github.com/SkyworkAI/Skywork) | - | en/zh | On major evaluation benchmarks, Skywork-13B is at the forefront of Chinese open-source models and is optimal among models of the same parameter scale; it can be used commercially without application; a 600GB (150 billion tokens) Chinese dataset has also been open-sourced. |
| [01.AI](https://01.ai/) | [Yi](https://github.com/01-ai/Yi) | - | en/zh | The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). |
| IEIT Systems | [Yuan-2.0](https://github.com/IEIT-Yuan/Yuan-2.0) | - | en/zh | In this work, Localized Filtering-based Attention (LFA) is introduced to incorporate prior knowledge of local dependencies of natural language into attention. Based on LFA, we develop and release Yuan 2.0, a large language model with parameters ranging from 2.1 billion to 102.6 billion. A data filtering and generation method is presented to build high-quality pretraining and fine-tuning datasets. A distributed training method with non-uniform pipeline parallelism, data parallelism, and optimizer parallelism is proposed, which greatly reduces the bandwidth requirements of intra-node communication and achieves good performance in large-scale distributed training. Yuan 2.0 models display impressive ability in code generation, math problem-solving, and chat compared with existing models. |
| Nanbeige | [Nanbeige](https://github.com/Nanbeige/Nanbeige) | Apache-2.0 | en/zh | Nanbeige-16B is a 16 billion parameter language model developed by Nanbeige LLM Lab. It uses 2.5T Tokens for pre-training. The training data includes a large amount of high-quality internet corpus, various books, code, etc. It has achieved good results on various authoritative evaluation data sets. This release includes the Base, Chat, Base-32k and Chat-32k. |
| deepseek-ai | [deepseek-LLM](https://github.com/deepseek-ai/deepseek-LLM) | MIT License | en/zh | an advanced language model comprising 67 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. |
| LLM360 | [LLM360](https://github.com/LLM360) | - | - | Most open-source LLM releases include model weights and evaluation results. However, additional information is often needed to genuinely understand a model's behavior—and this information is not typically available to most researchers. Hence, we commit to releasing all of the intermediate checkpoints (up to 360!) collected during training, all of the training data (and its mapping to checkpoints), all collected metrics (e.g., loss, gradient norm, evaluation results), and all source code for preprocessing data and model training. These additional artifacts can help researchers and practitioners to have a deeper look into LLM’s construction process and conduct research such as analyzing model dynamics. We hope that LLM360 can help make advanced LLMs more transparent, foster research in smaller-scale labs, and improve reproducibility in AI research. |
| FDU, etc. | [CT-LLM](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM) | - | zh/en | focusing on the Chinese language. Starting from scratch, CT-LLM primarily uses Chinese data from a 1,200 billion token corpus, including 800 billion Chinese, 300 billion English, and 100 billion code tokens. By open-sourcing CT-LLM's training process, including data processing and the Massive Appropriate Pretraining Chinese Corpus (MAP-CC), and introducing the Chinese Hard Case Benchmark (CHC-Bench), we encourage further research and innovation, aiming for more inclusive and adaptable language models. |
| TigerLab | [MAP-NEO](https://github.com/multimodal-art-projection/MAP-NEO) | - | zh/en | the first large model to open-source its entire pipeline, from data processing through the training process to the model weights. |
| DataComp | [DCLM](https://github.com/mlfoundations/dclm) | - | - | provides tools and guidance for processing raw data, tokenization, data shuffling, model training, and performance evaluation; the baseline 7B model performs strongly. |
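
Most of the base models above ship as Hugging Face checkpoints, so a quick way to try one is the standard `transformers` loading path. The sketch below is illustrative only: the model id, dtype/device settings, and generation parameters are example choices, and individual entries differ in license and exact repository name.

```python
# A minimal sketch (not from any one project above) of loading a base model
# with Hugging Face transformers. The model id and generation settings are
# example choices; check each entry's license before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"  # example checkpoint from the table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # spread layers across available devices
    trust_remote_code=True,  # some checkpoints ship custom modeling code
)

prompt = "Open-source large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```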
# Domain Models

| contributor | model | domain | language | base model | main feature |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------ | --------------- | -------- | ------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UT Southwestern/UIUC/OSU/HDU | [ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor) | medical | en | LLaMA | Maybe the first domain-specific chat model tuned on LLaMA. |
| Cambridge | [Visual Med-Alpaca](https://github.com/cambridgeltl/visual-med-alpaca) | biomedical | en | LLaMA-7B | a multi-modal foundation model designed specifically for the biomedical domain. |
| HIT | [BenTsao](https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese) / [ChatGLM-Med](https://github.com/SCIR-HI/Med-ChatGLM) | medical | zh | LLaMA/ChatGLM | fine-tuned with a Chinese medical knowledge dataset generated using the GPT-3.5 API. |
| ShanghaiTech, etc. | [DoctorGLM](https://github.com/xionghonglin/DoctorGLM) | medical | en/zh | ChatGLM-6B | Chinese medical consultation model fine-tuned on ChatGLM-6B. |
| THU AIR | [BioMedGPT-1.6B](https://github.com/BioFM/OpenBioMed) | biomedical | en/zh | - | a pre-trained multi-modal molecular foundation model with 1.6B parameters that associates 2D molecular graphs with texts. |
| @LiuHC0428 | [LawGPT_zh](https://github.com/LiuHC0428/LAW-GPT) | legal | zh | ChatGLM-6B | a general model in Chinese legal domain, trained on data generated via Reliable-Self-Instruction. |
| SJTU | [MedicalGPT-zh](https://github.com/MediaBrain-SJTU/MedicalGPT-zh) | medical | zh | ChatGLM-6B | a general model in the Chinese medical domain, trained on diverse data generated via self-instruct. |
| SJTU | [PMC-LLaMA](https://github.com/chaoyi-wu/PMC-LLaMA) | medical | zh | LLaMA | continues training LLaMA on medical papers. |
| HuggingFace | [StarCoder](https://github.com/bigcode-project/starcoder) | code generation | en | - | a language model (LM) trained on source code and natural language text. Its training data incorporates more than 80 different programming languages as well as text extracted from GitHub issues and commits and from notebooks. |
| @CogStack | [NHS-LLM](https://github.com/CogStack/opengpt#nhs-llm) | medical | en | not clear | A conversational model for healthcare trained using [OpenGPT](https://github.com/CogStack/opengpt). |
| @pengxiao-song | [LaWGPT](https://github.com/pengxiao-song/LaWGPT) | legal | zh | LLaMA/ChatGLM | expand the vocab with Chinese legal terminologies, instruction fine-tuned on data generated using self-instruct. |
| Duxiaoman | [XuanYuan](https://github.com/Duxiaoman-DI/XuanYuan) | finance | zh | BLOOM-176B | A large Chinese financial chat model with hundreds of billions of parameters. |
| CUHK | [HuatuoGPT](https://github.com/FreedomIntelligence/HuatuoGPT) | medical | zh | not clear | HuatuoGPT, a large language model (LLM) trained on a vast Chinese medical corpus. Our objective with HuatuoGPT is to construct a more professional ‘ChatGPT’ for medical consultation scenarios. |
| PKU | [Lawyer LLaMA](https://github.com/AndrewZhe/lawyer-llama) | legal | zh | LLaMA | continued pretraining on Chinese legal data, instruction-tuned on legal exams and legal consulting QA pairs. |
| THU | [LexiLaw](https://github.com/CSHaitao/LexiLaw) | legal | zh | ChatGLM-6B | trained on a mixture of general data ([BELLE](https://github.com/LianjiaTech/BELLE) 1.5M) and legal data |
| THU, etc. | [taoli](https://github.com/blcuicall/taoli) | education | zh | LLaMA | A large model for international Chinese education. It extends specific vocabulary on the base model and uses the domain's proprietary dataset for instruction fine-tuning. |
| NUS | [Goat](https://github.com/liutiedong/goat) | arithmetic | en | LLaMA | a fine-tuned LLaMA model that significantly outperforms GPT-4 on a range of arithmetic tasks. Fine-tuned on a synthetically generated dataset, Goat achieves state-of-the-art performance on the BIG-bench arithmetic sub-task. |
| CU/NYU | [FinGPT](https://github.com/AI4Finance-Foundation/FinGPT) | finance | en | - | an end-to-end open-source framework for financial large language models (FinLLMs). |
| microsoft | [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder) | code generation | en | StarCoder | trained with **78k** evolved code instructions; surpasses **Claude-Plus (+6.8)**, **Bard (+15.3)** and **InstructCodeT5+ (+22.3)** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). |
| UCAS | [Cornucopia](https://github.com/jerry1993-tech/Cornucopia-LLaMA-Fin-Chinese) | finance | zh | LLaMA | finetunes LLaMA on Chinese financial knowledge. |
| PKU | [ChatLaw](https://github.com/PKU-YuanGroup/ChatLaw) | legal | zh | [Ziya](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1) / [Anima](https://github.com/lyogavin/Anima) | Chinese legal domain model. |
| @michael-wzhu | [ChatMed](https://github.com/michael-wzhu/ChatMed) | medical | zh | LLaMA | Chinese medical LLM based on LLaMA-7B. |
| SCUT | [SoulChat](https://github.com/scutcyr/SoulChat) | mental health | zh | ChatGLM-6B | Chinese dialogue LLM in mental health domain, based on ChatGLM-6B. |
| @shibing624 | [MedicalGPT](https://github.com/shibing624/MedicalGPT) | medical | zh | ChatGLM-6B | Training Your Own Medical GPT Model with ChatGPT Training Pipeline. |
| BJTU | [TransGPT](https://github.com/DUOMO/TransGPT) | transportation | zh | LLaMA-7B | Chinese transportation model. |
| BAAI | [AquilaCode](https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila/Aquila-code) | code generation | multi | Aquila | AquilaCode-multi is a multi-language model that supports high-accuracy code generation for various programming languages, including Python/C++/Java/Javascript/Go, etc. It has achieved impressive results in HumanEval (Python) evaluation, with Pass@1, Pass@10, and Pass@100 scores of 26/45.7/71.6, respectively. In the HumanEval-X multi-language code generation evaluation, it significantly outperforms other open-source models with similar parameters (as of July 19, 2023). AquilaCode-py, on the other hand, is a single-language Python version of the model that focuses on Python code generation. It has also demonstrated excellent performance in HumanEval evaluation, with Pass@1, Pass@10, and Pass@100 scores of 28.8/50.6/76.9 (as of July 19, 2023). |
| Meta | [CodeLLaMA](https://github.com/facebookresearch/codellama) | code generation | multi | LLaMA-2 | a family of large language models for code based on [Llama 2](https://github.com/facebookresearch/llama), providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. |
| UNSW, etc | [Darwin](https://github.com/MasterAI-EAM/Darwin) | natural science | en | LLaMA-7B | the first open-source LLM for natural science, mainly in physics, chemistry and material science. |
| alibaba | [EcomGPT](https://github.com/Alibaba-NLP/EcomGPT) | e-commerce | en/zh | BLOOMZ | An Instruction-tuned Large Language Model for E-commerce. |
| TIGER-AI-Lab | [MAmmoTH](https://github.com/TIGER-AI-Lab/MAmmoTH) | math | en | LLaMA2/CodeLLaMA | a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, a meticulously curated instruction tuning dataset that is lightweight yet generalizable. MathInstruct is compiled from 13 math rationale datasets, six of which are newly curated by this work. It uniquely focuses on the hybrid use of chain-of-thought (CoT) and program-of-thought (PoT) rationales, and ensures extensive coverage of diverse mathematical fields. |
| SJTU | [abel](https://github.com/GAIR-NLP/abel) | math | en | LLaMA2 | We propose **Parental Oversight**, a **babysitting strategy** for supervised fine-tuning. `Parental Oversight` is not limited to any specific data processing method; instead, it defines the data processing philosophy that should guide supervised fine-tuning in the era of Generative AI (GAI). |
| FDU | [DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM) | legal | zh | Baichuan-13B | FudanDISC has released DISC-LawLLM, a Chinese intelligent legal system driven by a large language model. The system can provide various legal services for different user groups. In addition, DISC-Law-Eval is constructed to evaluate the large legal language model from both objective and subjective aspects. The model has obvious advantages compared with the existing large legal models. The team also made available a high-quality supervised fine-tuning (SFT) dataset of 300,000 examples, DISC-Law-SFT. |
| HKU, etc. | [ChatPsychiatrist](https://github.com/EmoCareAI/ChatPsychiatrist) | mental health | en | LLaMA-7B | This repo open-sources the Instruct-tuned LLaMA-7B model that has been fine-tuned with counseling domain instruction data. To construct the 8K-size instruct-tuning dataset, real-world counseling dialogue examples were collected and GPT-4 was employed as an extractor and filter. In addition, a comprehensive set of metrics, specifically tailored to the LLM+Counseling domain, is introduced by incorporating counseling domain evaluation criteria. These metrics enable the assessment of performance in generating language content that involves multi-dimensional counseling skills. |
| CAS | [StarWhisper](https://wisemodel.cn/models/LiYuYang/StarWhisper) | astronomical | zh | - | StarWhisper, a large astronomical model, significantly improves the reasoning logic and integrity of the model through fine-tuning on an astrophysical corpus labeled by experts, logical long-text training, and direct preference optimization. In the CG-Eval jointly published by the Keguei AI Research Institute and LanguageX AI Lab, it reached second place overall, just below GPT-4, and its mathematical reasoning and astronomical capabilities are close to or exceed GPT-3.5 Turbo. |
| ZhiPuAI | [FinGLM](https://github.com/MetaGLM/FinGLM) | finance | zh | ChatGLM | solutions for SMP2023-ELMFT (the Evaluation of Large Model of Finance Technology). |
| PKU, etc | [CodeShell](https://github.com/WisdomShell/codeshell) | code generation | en/zh | - | CodeShell is a code large language model (LLM) developed jointly by the [Knowledge Computing Lab at Peking University](http://se.pku.edu.cn/kcl/) and the AI team of Sichuan Tianfu Bank. CodeShell has 7 billion parameters, was trained on 500 billion tokens, and has a context window length of 8192. On authoritative code evaluation benchmarks (HumanEval and MBPP), CodeShell achieves the best performance for models of its scale. |
| FDU | [DISC-FinLLM](https://github.com/FudanDISC/DISC-FinLLM) | finance | zh | Baichuan-13B-Chat | DISC-FinLLM is a large language model in the financial field. It is a multi-expert intelligent financial system composed of four modules for different financial scenarios: financial consulting, financial text analysis, financial calculation, and financial knowledge retrieval and question answering. |
| Deepseek | [Deepseek Coder](https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct) | code generation | en/zh | - | Deepseek Coder comprises a series of code language models trained on 87% code and 13% natural language in English and Chinese, with each model pre-trained on 2T tokens. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks. |
| microsoft | [MathOctopus](https://github.com/microsoft/MathOctopus) | math | multi | LLaMA2 | This work pioneers exploring and building powerful Multilingual Math Reasoning (xMR) LLMs. To accomplish this, we make the following contributions: 1. **MGSM8KInstruct**, the first multilingual math reasoning instruction dataset, encompassing ten distinct languages, thus addressing the issue of training data scarcity in xMR tasks; 2. **MSVAMP**, an out-of-domain xMR test dataset, to conduct a more exhaustive and comprehensive evaluation of the model's multilingual mathematical capabilities; 3. **MathOctopus**, our effective Multilingual Math Reasoning LLMs, trained with different strategies, which notably outperform conventional open-source LLMs and exhibit superiority over ChatGPT in few-shot scenarios. |
| ITREC | [Zh-MT-LLM](https://github.com/ITRECLab/Zh-MT-LLM) | maritime | en/zh | ChatGLM3-6b | The training data combines the maritime-domain dataset Zh-mt-sft, organized into three main segments, with 300k general conversation examples from [moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data). Zh-mt-sft specifically contains CrimeKgAssitant-1.8w, Zh-law-qa, and Zh-law-court for maritime laws and regulations Q&A, Zh-edu-qa and Zh-edu-qb for maritime education and training, and Zh-mt-qa for maritime specialized knowledge Q&A. |
| @SmartFlowAI | [EmoLLM](https://github.com/SmartFlowAI/EmoLLM) | mental health | zh | - | **EmoLLM** is a series of mental health large models, instruction fine-tuned from `LLM`s, that support an **understand the user / support the user / help the user** counseling pipeline. |

some medical models: [here](https://mp.weixin.qq.com/s/c6aPU2FALAaa4LWKQ8W1uA)
some domain LLMs: [Awesome-Domain-LLM](https://github.com/luban-agi/Awesome-Domain-LLM)
healthcare models: [Awesome-Healthcare-Foundation-Models](https://github.com/Jianing-Qiu/Awesome-Healthcare-Foundation-Models)
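
Many of the domain models above follow the same recipe: collect or synthesize domain Q&A pairs (often with a stronger teacher model) and run supervised fine-tuning on Alpaca-style instruction records. A minimal sketch of that record format is shown below; the field names and prompt template follow the common instruction/input/output convention and are illustrative, not the exact format of any one project.

```python
# A toy sketch of an Alpaca-style instruction record and how it becomes a
# prompt/completion pair for supervised fine-tuning. Field names and the
# template are the common convention, not any specific project's format.
import json

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

record = {
    "instruction": "Explain when a patient with mild chest pain should see a doctor.",
    "input": "",
    "output": "If the pain persists, worsens, or is accompanied by shortness of breath...",
}

def to_training_example(rec: dict) -> dict:
    """Turn one instruction record into a prompt/completion pair for SFT."""
    return {
        "prompt": PROMPT_TEMPLATE.format(instruction=rec["instruction"]),
        "completion": rec["output"],
    }

print(json.dumps(to_training_example(record), indent=2, ensure_ascii=False))
```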
# General Domain Instruction Models
| contributor | model/project | language | base model | main feature |
| :-------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------ | -------- | :-------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Stanford | [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | en | LLaMA/OPT | uses 52K instruction-following examples generated with Self-Instruct techniques to fine-tune 7B LLaMA; the resulting model, Alpaca, behaves similarly to the `text-davinci-003` model on the Self-Instruct instruction-following evaluation suite. Alpaca has inspired many follow-up models. |
| LianJiaTech | [BELLE](https://github.com/LianjiaTech/BELLE) | en/zh | BLOOMZ-7B1-mt | maybe the first Chinese model to follow Alpaca. |
| THU | [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) | en/zh | - | well-known Chinese model. |
| Databricks | [Dolly](https://github.com/databrickslabs/dolly) | en | GPT-J 6B | uses Alpaca data to fine-tune a 2-year-old model, GPT-J, which exhibits surprisingly high-quality instruction-following behavior not characteristic of the foundation model on which it is based. |
| @tloen | [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) | en | LLaMA-7B | trained within hours on a single RTX 4090, reproducing the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) results using [low-rank adaptation (LoRA)](https://arxiv.org/pdf/2106.09685.pdf), and can run on a Raspberry Pi. |
| ColossalAI | [Coati7B]() | en/zh | LLaMA-7B | a large language model developed by the ColossalChat project |
| Shanghai AI Lab | [LLaMA-Adapter](https://github.com/ZrrSkywalker/LLaMA-Adapter) | en | LLaMA-7B | Fine-tuning LLaMA to follow instructions within 1 Hour and 1.2M Parameters |
| AetherCortex | [Llama-X](https://github.com/AetherCortex/Llama-X) | en | LLaMA | Open Academic Research on Improving LLaMA to SOTA LLM. |
| TogetherComputer | [OpenChatKit](https://github.com/togethercomputer/OpenChatKit) | en | GPT-NeoX-20B | OpenChatKit provides a powerful, open-source base to create both specialized and general-purpose chatbots for various applications. The kit includes an instruction-tuned language model, a moderation model, and an extensible retrieval system for including up-to-date responses from custom repositories. |
| nomic-ai | [GPT4All](https://github.com/nomic-ai/gpt4all) | en | LLaMA | trained on a massive collection of clean assistant data including code, stories and dialogue |
| @ymcui | [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | en/zh | LLaMA-7B/13B | **expands the Chinese vocabulary** based on the original LLaMA and uses Chinese data for secondary pre-training, further enhancing basic Chinese semantic understanding. Additionally, the project uses Chinese instruction data for fine-tuning on the basis of the Chinese LLaMA, significantly improving the model's understanding and execution of instructions. |
| UC Berkeley/Stanford/CMU | [Vicuna](https://github.com/lm-sys/FastChat) | en | LLaMA-13B | Impressing GPT-4 with 90% ChatGPT Quality. |
| UCSD/SYSU | [baize](https://github.com/project-baize/baize) | en/zh | LLaMA | fine-tuned with [LoRA](https://github.com/microsoft/LoRA). It uses 100k dialogs generated by letting ChatGPT chat with itself. Alpaca's data is also used to improve its performance. |
| UC Berkeley | [Koala](https://github.com/young-geng/EasyLM) | en | LLaMA | Rather than maximizing *quantity* by scraping as much web data as possible, the team focuses on collecting a small *high-quality* dataset. |
| @imClumsyPanda | [langchain-ChatGLM](https://github.com/imClumsyPanda/langchain-ChatGLM) | en/zh | ChatGLM-6B | local knowledge based ChatGLM with langchain. |
| @yangjianxin1 | [Firefly](https://github.com/yangjianxin1/Firefly) | zh | bloom-1b4-zh/bloom-2b6-zh | Instruction tuning on a Chinese dataset. Vocabulary pruning, ZeRO, and tensor parallelism are used to effectively reduce memory consumption and improve training efficiency. |
| microsoft | [GPT-4-LLM](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) | en/zh | LLaMA | aims to share data generated by GPT-4 for building instruction-following LLMs with supervised learning and reinforcement learning. |
| Hugging Face | [StackLLaMA](https://huggingface.co/trl-lib/llama-7b-se-rl-peft) | en | LLaMA | trained on StackExchange data; the main goal is to serve as a tutorial and walkthrough on how to train a model with RLHF, not primarily model performance. |
| Nebuly | [ChatLLaMA](https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllam) | en | - | a library that allows you to create hyper-personalized ChatGPT-like assistants using your own data and the least amount of compute possible. |
| @juncongmoo | [ChatLLaMA](https://github.com/juncongmoo/chatllama) | en | LLaMA | LLaMA-based RLHF model, runnable in a single GPU. |
| @juncongmoo | [minichatgpt](https://github.com/juncongmoo/minichatgpt) | en | GPT/OPT ... | To Train ChatGPT In 5 Minutes with ColossalAI. |
| @LC1332 | [Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM) | zh | LLaMA/ChatGLM | Instruction fine-tuned Chinese Language Models, with colab provided! |
| @Facico | [Chinese-Vicuna](https://github.com/Facico/Chinese-Vicuna) | zh | LLaMA | A Chinese Instruction-following LLaMA-based Model, fine-tuned with Lora, cpp inference supported, colab provided. |
| @yanqiangmiffy | [InstructGLM](https://github.com/yanqiangmiffy/InstructGLM) | en/zh | ChatGLM-6B | ChatGLM based instruction-following model, fine-tuned on a variety of data sources, supports deepspeed accelerating and LoRA. |
| alibaba | [Wombat](https://github.com/GanjinZero/RRHF) | en | LLaMA | a novel learning paradigm called RRHF is proposed as an alternative to RLHF; it scores responses generated by different sampling policies and learns to align them with human preferences through a ranking loss. The performance is comparable to RLHF, with fewer models used in the process. |
| @WuJunde | [alpaca-glassoff](https://github.com/WuJunde/alpaca-glassoff) | en | LLaMA | a mini image-capable chat AI that can run on your own laptop, based on [stanford-alpaca](https://github.com/tatsu-lab/stanford_alpaca) and [alpaca-lora](https://github.com/tloen/alpaca-lora). |
| @JosephusCheung | [Guanaco](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) | multi | LLaMA-7B | A Multilingual Instruction-Following Language Model. |
| @FreedomIntelligence | [LLM Zoo](https://github.com/FreedomIntelligence/LLMZoo) | multi | BLOOMZ/LLaMA | a project that provides data, models, and an evaluation benchmark for large language models. Models released: Phoenix, Chimera. |
| SZU | [Linly](https://github.com/CVI-SZU/Linly) | en/zh | LLaMA | **expands the Chinese vocabulary**, fully fine-tuned models, the largest LLaMA-based Chinese models, aggregation of Chinese instruction data, reproducible details. |
| @lamini-ai | [lamini](https://github.com/lamini-ai/lamini/) | multi | - | data generator for generating instructions to train instruction-following LLMs. |
| Stability-AI | [StableVicuna](https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot) | en | LLaMA | a further instruction fine tuned and RLHF trained version of Vicuna v0 13b, with better performance than Vicuna. |
| Hugging Face | [HuggingChat](https://huggingface.co/chat/) | en | LLaMA | seems to be the first openly accessible platform that appears similar to ChatGPT. |
| microsoft | [WizardLM](https://github.com/nlpxucan/WizardLM) | en | LLaMA | trained with 70k evolved instructions; [Evol-Instruct](https://github.com/nlpxucan/evol-instruct) is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, to improve the performance of LLMs. |
| FDU | [OpenChineseLLaMA](https://github.com/OpenLMLab/OpenChineseLLaMA) | en/zh | LLaMA-7B | further pretrains LLaMA on Chinese data, improving LLaMA's performance on Chinese tasks. |
| @chenfeng357 | [open-Chinese-ChatLLaMA](https://github.com/chenfeng357/open-Chinese-ChatLLaMA) | en/zh | LLaMA | The complete training code of the open-source Chinese-Llama model, including the full process from pre-training through instruction tuning and RLHF. |
| @FSoft-AI4Code | [CodeCapybara](https://github.com/FSoft-AI4Code/CodeCapybara) | en | LLaMA | Open Source LLaMA Model that Follow Instruction-Tuning for Code Generation. |
| @mbzuai-nlp | [LaMini-LM](https://github.com/mbzuai-nlp/LaMini-LM) | en | LLaMA/Flan-T5 ... | A Diverse Herd of Distilled Models from Large-Scale Instructions. |
| NTU | [Panda](https://github.com/dandelionsllm/pandallm) | en/zh | LLaMA | further pretraining on Chinese data, full-size of LLaMA models. |
| IBM/CMU/MIT | [Dromedary](https://github.com/IBM/Dromedary) | en | LLaMA-65B | Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. |
| @melodysdreamj | [WizardVicunaLM](https://github.com/melodysdreamj/WizardVicunaLM) | multi | Vicuna | Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method, achieving approximately 7% performance improvement over Vicuna. |
| sambanovasystems | [BLOOMChat](https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1) | multi | BLOOM | BLOOMChat is a 176 billion parameter multilingual chat model. It is instruction tuned from [BLOOM (176B)](https://huggingface.co/bigscience/bloom) on assistant-style conversation datasets and supports conversation, question answering and generative answers in multiple languages. |
| [TII](https://www.tii.ae/) | [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) | en | [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) | a 7B parameter causal decoder-only model built by [TII](https://www.tii.ae/) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. |
| [TII](https://www.tii.ae/) | [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) | multi | [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) | a 40B parameter causal decoder-only model built by [TII](https://www.tii.ae/) based on [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) and finetuned on a mixture of [Baize](https://github.com/project-baize/baize-chatbot) data. |
| USTC, etc. | [ExpertLLaMA](https://github.com/OFA-Sys/ExpertLLaMA) | en | LLaMA | uses In-Context Learning to automatically write customized expert identities and finds the quality quite satisfying. The corresponding expert identity is then prepended to each instruction to produce augmented instruction-following data. The overall framework is called **ExpertPrompting**; find more details in the [paper](https://arxiv.org/abs/2305.14688). |
| ZJU | [CaMA](https://github.com/zjunlp/CaMA) | en/zh | LLaMA | further pretrained on Chinese corpus without expansion of the vocabulary; optimized for Information Extraction (IE) tasks. The pre-training script is available, which includes transformation, construction, and loading of large-scale corpora, as well as the LoRA instruction fine-tuning script. |
| THU | [UltraChat](https://github.com/thunlp/UltraChat) | en | LLaMA | First, the UltraChat dataset provides a rich resource for the training of chatbots. Second, by fine-tuning the LLaMA model, the researchers successfully created a dialogue model, UltraLLaMA, with superior performance. |
| RUC | [YuLan-Chat](https://github.com/RUC-GSAI/YuLan-Chat) | en/zh | LLaMA | developed based on fine-tuning LLaMA with high-quality English and Chinese instructions. |
| AI2 | [Tülu](https://github.com/allenai/open-instruct) | en | LLaMA/Pythia/OPT | a suite of LLaMa models fully-finetuned on a strong mix of datasets. |
| KAIST | [SelFee](https://github.com/kaistAI/SelFee) | en | LLaMA | Iterative Self-Revising LLM Empowered by Self-Feedback Generation. |
| @lyogavin | [Anima](https://github.com/lyogavin/Anima) | en/zh | LLaMA | trained based on QLoRA's [33B guanaco](https://huggingface.co/timdettmers/guanaco-33b), finetuned for 10000 steps. |
| THU | [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) | en/zh | - | ChatGLM2-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B). It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing stronger performance, longer context, more efficient inference, and a more open license. |
| OpenChat | [OpenChat](https://github.com/imoneoi/openchat) | en | LLaMA, etc. | a series of open-source language models fine-tuned on a small, yet diverse and high-quality dataset of multi-round conversations. Specifically, only ~6K GPT-4 conversations directly filtered from the ~90K ShareGPT conversations are used. Despite the small size of the dataset, OpenChat has demonstrated remarkable performance. |
| CAS | [BayLing](https://github.com/ictnlp/BayLing) | multi | LLaMA | BayLing is an English/Chinese LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction. |
| stabilityai | [FreeWilly](https://huggingface.co/stabilityai/FreeWilly1-Delta-SafeTensor)/[FreeWilly2](https://huggingface.co/stabilityai/FreeWilly2) | en | LLaMA/LLaMA2 | `FreeWilly` is a LLaMA-65B model fine-tuned on an [Orca](https://arxiv.org/pdf/2306.02707.pdf)-style dataset. `FreeWilly2` is a Llama2-70B model fine-tuned on an [Orca](https://arxiv.org/pdf/2306.02707.pdf)-style dataset. `FreeWilly2` outperforms Llama2 70B on the [Hugging Face Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). |
| alibaba | [Qwen-7B](https://github.com/QwenLM/Qwen-7B) | en/zh | - | 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. |
| ZJU | [KnowLM](https://github.com/zjunlp/KnowLM) | en/zh | LLaMA | With the rapid development of deep learning technology, large language models such as ChatGPT have made substantial strides in the realm of natural language processing. However, these expansive models still encounter several challenges in acquiring and comprehending knowledge, including the difficulty of updating knowledge and potential knowledge discrepancies and biases, collectively known as **knowledge fallacies**. The KnowLM project endeavors to tackle these issues by launching an open-source large-scale knowledgeable language model framework and releasing corresponding models. |
| NEU | [TechGPT](https://github.com/neukg/TechGPT) | en/zh | LLaMA | TechGPT mainly strengthens three types of tasks: various information extraction tasks such as relation triple extraction, with "knowledge graph construction" as the core; various intelligent question-and-answer tasks centered on "reading comprehension"; and various sequence generation tasks such as keyword generation, with "text understanding" as the core. |
| @MiuLab | [Taiwan-LLaMa](https://github.com/MiuLab/Taiwan-LLaMa) | en/zh | LLaMA2 | Traditional Chinese LLMs for Taiwan. |
| Xwin-LM | [Xwin-LM](https://github.com/Xwin-LM/Xwin-LM) | en | LLaMA2 | Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), reject sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. |
| wenge-research | [YaYi](https://github.com/wenge-research/YaYi) | en/zh | LLaMA/LLaMA2 | [YaYi](https://www.wenge.com/yayi/index.html) was fine-tuned on millions of artificially constructed high-quality domain data. The training data covers five key domains: media publicity, public opinion analysis, public safety, financial risk control, and urban governance, encompassing over a hundred natural language instruction tasks. |
| HuggingFace | [zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) | en | Mistral | Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). |
| Cohere | [Command-R](https://huggingface.co/CohereForAI/c4ai-command-r-v01) / [Command R+](https://huggingface.co/CohereForAI/c4ai-command-r-plus) | multi | - | Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities. |
| XAI | [grok](https://github.com/xai-org/grok-1) | en | - | 314B MoE; context length: 8192 |
| databricks | [dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | - | - | a *fine-grained* mixture-of-experts (MoE) architecture with 132B total parameters, of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. |
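
Several entries above (Alpaca-LoRA, baize, Chinese-Vicuna, Firefly, Anima) rely on LoRA/QLoRA-style parameter-efficient fine-tuning rather than full fine-tuning. The sketch below shows the general pattern with the `peft` library; the base checkpoint, rank, and target module names are example values and vary by model architecture.

```python
# A minimal sketch of wrapping a causal LM with LoRA adapters via peft.
# The base model id, rank, and target_modules are example values; LLaMA-family
# models typically expose q_proj/v_proj, other architectures differ.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
# `model` can now be passed to a standard training loop; only the small
# adapter matrices receive gradients, which is why a single consumer GPU
# (or even QLoRA on 4-bit weights) is often enough.
```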
# Model Merging

| contributor | model/method | main feature | released model / results |
| ----------- | ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| FuseAI | [FuseChat](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) | Firstly, it undertakes pairwise knowledge fusion for source LLMs to derive multiple target LLMs of identical structure and size via lightweight fine-tuning. Then, these target LLMs are merged within the parameter space, wherein we propose a novel method VaRM for determining the merging weights based on the variation ratio of parameter matrices before and after fine-tuning. | a fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs at 7B and 34B scales like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo), [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). |
| arcee-ai | [mergekit](https://github.com/arcee-ai/mergekit) | Tools for merging pretrained large language models. | |
| SakanaAI | [EvoLLM](https://github.com/SakanaAI/evolutionary-model-merge) | Evolutionary Optimization of Model Merging Recipes. | |
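
The tools above operate in parameter space: checkpoints with identical architecture are combined weight by weight, with different schemes for choosing the mixing coefficients (FuseChat's VaRM, mergekit's merge methods, evolutionary search in EvoLLM). The toy sketch below shows only the simplest case, plain linear interpolation of state dicts, as an illustration of the idea rather than any of these tools' actual algorithms.

```python
# A toy illustration of parameter-space merging: weighted averaging of
# same-architecture checkpoints. Real tools (e.g. mergekit) offer many more
# merge methods, but the underlying object is the same: the state dict.
import torch

def merge_state_dicts(state_dicts, weights):
    """Linear interpolation of tensors that share keys and shapes."""
    assert abs(sum(weights) - 1.0) < 1e-6, "mixing weights should sum to 1"
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Two tiny random "models" with identical architecture stand in for real LLMs.
model_a = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.randn(4)}
model_b = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.randn(4)}

merged = merge_state_dicts([model_a, model_b], weights=[0.6, 0.4])
print({name: tuple(t.shape) for name, t in merged.items()})
```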
# Alternatives To Transformer

(maybe successors?)
| contributor | method | main feature |
| ---------------- | ------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| BlinkDL | [RWKV-LM](https://github.com/BlinkDL/RWKV-LM) | RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best of RNN and transformer: great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. |
| msra | [RetNet](https://arxiv.org/abs/2307.08621) | simultaneously achieves training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. Then we propose the retention mechanism for sequence modeling, which supports three computation paradigms, i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost **O(1)** inference, which improves decoding throughput, latency, and GPU memory without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while recurrently summarizing the chunks. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. The intriguing properties make RetNet a strong successor to Transformer for large language models. |
| stanford | [Backpack](https://backpackmodels.science) | A [Backpack](https://arxiv.org/abs/2305.16765) is a drop-in replacement for a Transformer that provides new tools for **interpretability-through-control** while still enabling strong language models. Backpacks decompose the predictive meaning of words into components non-contextually, and aggregate them by a weighted sum, allowing for precise, predictable interventions. |
| stanford, etc. | [Monarch Mixer (M2)](https://github.com/HazyResearch/m2) | The basic idea is to replace the major elements of a Transformer with Monarch matrices, a class of structured matrices that generalize the FFT and are sub-quadratic, hardware-efficient, and expressive. In Monarch Mixer, layers built up from Monarch matrices do both mixing across the sequence (replacing the Attention operation) and mixing across the model dimension (replacing the dense MLP). |
| CMU, etc. | [Mamba](https://github.com/state-spaces/mamba) | Mamba is a new state space model architecture showing promising performance on information-dense data such as language modeling, where previous subquadratic models fall short of Transformers. It is based on the line of progress on [structured state space models](https://github.com/state-spaces/s4), with an efficient hardware-aware design and implementation in the spirit of [FlashAttention](https://github.com/Dao-AILab/flash-attention). |
| TogetherComputer | [StripedHyena](https://github.com/togethercomputer/stripedhyena) | StripedHyena is the **first alternative model competitive with the best open-source Transformers** of similar sizes in short- and long-context evaluations. StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, different from traditional decoder-only Transformers. 1. Constant memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form), or as truncated filters. 2. Low latency, faster decoding and higher throughput than Transformers. 3. Improvements to training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2. 4. Trained on sequences of up to 32k, allowing it to process longer prompts. |
| microsoft | [bGPT](https://github.com/sanderwood/bgpt) | bGPT supports generative modelling via next byte prediction on any type of data and can perform any task executable on a computer, showcasing the capability to simulate all activities within the digital world, with its potential only limited by computational resources and our imagination. |
| DeepMind | [Griffin-Jax](https://github.com/simudt/Griffin-Jax) | Jax + Flax implementation of [Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models](https://arxiv.org/abs/2402.19427); not official code (official code is not released yet). The RG-LRU layer is a novel gated linear recurrent layer, around which a new recurrent block is designed to replace MQA. Two new models are built using this recurrent block: Hawk, a model which interleaves MLPs with recurrent blocks, and Griffin, a hybrid model which interleaves MLPs with a mixture of recurrent blocks and local attention. Griffin-3B outperforms Mamba-3B, and Griffin-7B and Griffin-14B achieve performance competitive with Llama-2, despite being trained on nearly 7 times fewer tokens; Griffin can extrapolate on sequences significantly longer than those seen during training. |
| AI21 | [Jamba](https://huggingface.co/ai21labs/Jamba-v0.1) | Jamba is the first production-scale Mamba implementation. It’s a pretrained, mixture-of-experts (MoE) generative text model, with 12B active parameters and a total of 52B parameters across all experts. It supports a 256K context length, and can fit up to 140K tokens on a single 80GB GPU. |
| Meta | [Megalodon](https://github.com/XuezheMax/megalodon) | Megalodon inherits the architecture of Mega (exponential moving average with gated attention), and further introduces multiple technical components to improve its capability and stability, including complex exponential moving average (CEMA), a timestep normalization layer, a normalized attention mechanism and pre-norm with a two-hop residual configuration. In a controlled head-to-head comparison with Llama2, Megalodon achieves better efficiency than the Transformer at the scale of 7 billion parameters and 2 trillion training tokens. |

# MoE
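The models in the table below activate only a small subset of expert MLPs for each token, selected by a learned gate. A rough, illustrative sketch of top-2 gating (all names and sizes here are made up, not taken from any listed model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Minimal sparse MoE layer: a gate picks 2 of n_experts MLPs per token."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                      # x: [tokens, d_model]
        scores = self.gate(x)                  # [tokens, n_experts]
        top_w, top_idx = scores.topk(2, dim=-1)
        top_w = F.softmax(top_w, dim=-1)       # renormalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for k in range(2):                     # loop form for clarity, not efficiency
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e
                if mask.any():
                    out[mask] += top_w[mask, k:k + 1] * expert(x[mask])
        return out

print(Top2MoE()(torch.randn(4, 512)).shape)    # torch.Size([4, 512])
```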
| contributor | model/project | main feature |
| --------------------- | --------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| mistralai | [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) | The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks tested. |
| Shanghai AI Lab, etc. | [LLaMA-MoE](https://github.com/pjlab-sys4nlp/llama-moe) | A small and affordable MoE model based on [LLaMA](https://github.com/facebookresearch/llama) and [SlimPajama](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama). The number of activated model parameters is only 3.0~3.5B, which is friendly for deployment and research usage. |
| NUS, etc. | [OpenMoE](https://github.com/XueFuzhao/OpenMoE) | A family of open-sourced Mixture-of-Experts (MoE) Large Language Models. |
| Snowflake | [Arctic](https://github.com/Snowflake-Labs/snowflake-arctic) | Arctic uses a unique Dense-MoE Hybrid transformer architecture. It combines a 10B dense transformer model with a residual 128x3.66B MoE MLP resulting in 480B total and 17B active parameters chosen using a top-2 gating. |

# Multi-Modal
| contributor | project | language | base model | main feature |
| ----------- | -------------------------------------------------- | -------- | --------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| BaihaiAI | [IDPChat](https://github.com/BaihaiAI/IDPChat) | en/zh | LLaMA-13B
Stable Diffusion | Open Chinese multi-modal model, single GPU runnable, easy to deploy, UI provided. |
| KAUST | [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4) | en/zh | LLaMA | MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer (a minimal sketch of this projection idea follows the table),
and yields many emerging vision-language capabilities similar to those demonstrated in GPT-4. |
| MSR, etc. | [LLaVA](https://github.com/haotian-liu/LLaVA) | en | LLaMA | visual instruction tuning is proposed, towards building large language and vision models with GPT-4 level capabilities. |
| NUS/THU | [VPGTrans](https://github.com/VPGTrans/VPGTrans) | en | LLaMA/OPT/
Flan-T5/BLIP-2
... | transferring VPG across LLMs to build VL-LLMs at significantly lower cost. The GPU hours
can be reduced by over 10 times and the training data can be reduced to around 10%.
Two novel VL-LLMs are released via VPGTrans, including **[VL-LLaMA](https://github.com/VPGTrans/VPGTrans#vl-llama)** and **[VL-Vicuna](https://github.com/VPGTrans/VPGTrans#vl-vicuna)**.
**VL-LLaMA** is a multimodal version of LLaMA, built by transferring BLIP-2 OPT-6.7B to LLaMA via VPGTrans.
**VL-Vicuna** is a GPT-4-like multimodal chatbot, based on the Vicuna LLM. |
| CAS, etc. | [X-LLM](https://github.com/phellonchen/X-LLM) | en/zh | ChatGLM-6B | X-LLM converts multi-modalities (images, speech, videos) into foreign languages using X2L interfaces and feeds them into
a large language model (ChatGLM) to build a multimodal LLM, achieving impressive multimodal chat capabilities. |
| NTU | [Otter](https://github.com/Luodian/Otter) | en | OpenFlamingo | a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo),
trained on MIMIC-IT and showcasing improved instruction-following ability and in-context learning.
Furthermore, it optimizes OpenFlamingo's implementation, reducing the required
training resources from 1x A100 GPU to 4x RTX-3090 GPUs. |
| XMU | [LaVIN](https://github.com/luogen1996/LaVIN) | en | LLaMA | proposes a novel and affordable solution for vision-language instruction tuning, namely Mixture-of-Modality Adaptation (MMA).
Particularly, MMA is an end-to-end optimization regime, which connects the image encoder and LLM via lightweight adapters.
Meanwhile, a novel routing algorithm is proposed in MMA, which helps the model automatically shift the reasoning path
for single- and multi-modal instructions. |
| USTC | [Woodpecker](https://github.com/BradyFU/Woodpecker) | - | - | the first work to correct hallucination in multimodal large language models. |
| hpcaitech | [Open-Sora](https://github.com/hpcaitech/Open-Sora) | - | - | open-source alternative to OpenAI's Sora. |

see also: [awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models)
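Several entries in the table above (e.g. MiniGPT-4, LLaVA) connect a frozen vision encoder to a frozen LLM through a small trainable projection. A minimal sketch of that idea, with made-up dimensions and random tensors standing in for the frozen models:

```python
import torch
import torch.nn as nn

d_vision, d_llm = 1024, 4096                 # illustrative sizes, not from any specific model

# the only trainable piece: map visual features into the LLM's embedding space
projector = nn.Linear(d_vision, d_llm)

image_feats = torch.randn(1, 32, d_vision)   # stand-in for frozen vision encoder output (32 patch tokens)
text_embeds = torch.randn(1, 16, d_llm)      # stand-in for frozen LLM embeddings of the text prompt

visual_tokens = projector(image_feats)                        # [1, 32, d_llm]
llm_inputs = torch.cat([visual_tokens, text_embeds], dim=1)   # sequence fed to the frozen LLM
print(llm_inputs.shape)                                       # torch.Size([1, 48, 4096])
```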
# Data
## Pretrain Data
| contributor | data/project | language | main feature |
| ---------------- | ----------------------------------------------------------------- | -------- | ---------------------------------------------------------- |
| TogetherComputer | [RedPajama-Data](https://github.com/togethercomputer/RedPajama-Data) | en | An open-source recipe to reproduce the LLaMA training dataset. |
| @goldsmith | [Wikipedia](https://github.com/goldsmith/Wikipedia) | multi | A Pythonic wrapper for the Wikipedia API. |

## Instruction Data
see [Alpaca-CoT data collection](https://github.com/PhoebusSi/Alpaca-CoT/blob/main/CN_README.md#3-%E6%95%B0%E6%8D%AE%E9%9B%86%E5%90%88-data-collection)
| contributor | data | language | main feature |
| ----------- | ------------------------------------------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------- |
| salesforce | [DialogStudio](https://github.com/salesforce/DialogStudio) | en | DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI. |

## Synthetic Data Generation
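A minimal sketch of the self-instruct-style bootstrap used by several methods below; `llm` is a hypothetical stand-in for any completion function (an API call or a local model):

```python
import random

def llm(prompt: str) -> str:
    """Placeholder for a real completion call; replace with an API or a local model."""
    return "Summarize the following paragraph in one sentence."

seed_tasks = [
    "Write a short poem about autumn.",
    "Explain recursion to a child.",
    "Translate 'good morning' into French.",
]

pool = list(seed_tasks)
for _ in range(5):                                   # a few bootstrap rounds
    examples = "\n".join(f"- {t}" for t in random.sample(pool, k=min(3, len(pool))))
    prompt = (
        "Here are some task instructions:\n"
        f"{examples}\n"
        "Come up with one new, different task instruction."
    )
    candidate = llm(prompt).strip()
    if candidate and candidate not in pool:          # real pipelines also filter by ROUGE similarity
        pool.append(candidate)

print(len(pool), "instructions in the pool")
```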
| contributor | method | main feature |
| ----------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UW, etc. | [self-instruct](https://github.com/yizhongw/self-instruct) | using the model's own generations to create a large collection of instructional data. |
| @LiuHC0428 | [Reliable-Self-Instruction](https://github.com/LiuHC0428/LAW-GPT#%E7%9F%A5%E8%AF%86%E9%97%AE%E7%AD%94) | uses ChatGPT to generate questions and corresponding answers based on a given text. |
| PKU | [Evol-Instruct](https://github.com/nlpxucan/evol-instruct) | a novel method, proposed in [WizardLM](https://github.com/nlpxucan/WizardLM), that uses LLMs instead of humans to automatically mass-produce open-domain
instructions at various difficulty levels and across a range of skills, to improve the performance of LLMs. |
| KAUST, etc. | [CAMEL](https://github.com/lightaime/camel) | a novel communicative agent framework named *role-playing* is proposed, which involves using *inception prompting* to guide chat agents
toward task completion while maintaining consistency with human intentions.
*role-playing* can be used to generate conversational data in a specific task/domain. |
| @chatarena | [ChatArena](https://github.com/chatarena/chatarena) | a library that provides multi-agent language game environments and facilitates research about autonomous LLM agents and their social interactions.
it provides a flexible framework to define multiple players, environments and the interactions between them, based on a Markov decision process. |

# Evaluation
| contributor | method | main feature |
| ---------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| - | human evaluation | - |
| OpenAI | GPT-4/ChatGPT | - |
| PKU/CMU/MSRA ... | [PandaLM](https://github.com/WeOpenML/PandaLM) | Reproducible and Automated Language Model Assessment. |
| UCB | [Chatbot Arena](https://github.com/lm-sys/FastChat) | Chat with two anonymous models side-by-side and vote for which one is better,
then use the Elo rating system to calculate the relative performance of the models (a minimal Elo sketch is given at the end of the LeaderBoard subsection below). |
| Stanford | [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval) | GPT-4/Claude evaluation on the [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm/tree/main) dataset. |
| clueai | [SuperCLUElyb](https://www.superclueai.com/) | Chinese version of [Chatbot Arena](https://github.com/lm-sys/FastChat) developed by clueai. |
| SJTU, etc. | [Auto-J](https://github.com/GAIR-NLP/auto-j) | a new open-source generative judge that can effectively evaluate how well different LLMs align with human preference. |
| CMU | [CodeBERTScore](https://github.com/neulab/code-bert-score) | an automatic metric for code generation, based on [BERTScore](https://arxiv.org/abs/1904.09675).
Like BERTScore, CodeBERTScore leverages the pre-trained contextual embeddings from a model such as CodeBERT and matches words in candidate and reference sentences by cosine similarity.
Unlike BERTScore, CodeBERTScore also encodes natural language input or other context along with the generated code, but does not use that context to compute cosine similarities. |

## Benchmark
[The current state of large-model evaluation in China](https://mp.weixin.qq.com/s/ppRDj0tBJgcpGGx5JbzZIA)
| contributor | benchmark | main feature |
| ----------- | ---------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| princeton | [SWE-bench](https://github.com/princeton-nlp/SWE-bench) | a benchmark for evaluating large language models on real world software issues collected from GitHub. Given a *codebase* and an *issue*,
a language model is tasked with generating a *patch* that resolves the described problem. |
| microsoft | [AGIEval](https://github.com/ruixiangcui/AGIEval) | a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. |
| clueai | [SuperCLUE-Agent](https://github.com/CLUEbenchmark/SuperCLUE-Agent) | Agent evaluation benchmark based on Chinese native tasks. |
| bytedance | [GPT-Fathom](https://github.com/GPT-Fathom/GPT-Fathom) | GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as OpenAI's earlier models on 20+ curated benchmarks under aligned settings. |

## LeaderBoard
[opencompass](https://opencompass.org.cn/leaderboard-llm), huggingface
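Arena-style evaluations and leaderboards above (e.g. Chatbot Arena, SuperCLUElyb) rank models from pairwise human votes, typically with Elo-style ratings. A minimal sketch of a single Elo update with a fixed K-factor (real leaderboards use more robust variants, such as Bradley-Terry fits):

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """score_a is 1.0 if model A wins, 0.0 if it loses, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

print(elo_update(1000.0, 1000.0, 1.0))   # the winner gains exactly what the loser drops
```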
# Framework/ToolKit/Platform
| contributor | project | main feature |
| --------------- | ------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| CAS | [Alpaca-CoT](https://github.com/PhoebusSi/Alpaca-CoT) | extends CoT data to Alpaca to boost its reasoning ability;
aims at building an instruction fine-tuning (IFT) platform with extensive instruction collection (especially the CoT datasets)
and a unified interface for various large language models. |
| @hiyouga | [ChatGLM-Efficient-Tuning](https://github.com/hiyouga/ChatGLM-Efficient-Tuning) | efficient fine-tuning ChatGLM-6B with PEFT. |
| @hiyouga | [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning) | Fine-tuning LLaMA with PEFT (PT+SFT+RLHF with QLoRA). |
| @jianzhnie | [Efficient-Tuning-LLMs](https://github.com/jianzhnie/Efficient-Tuning-LLMs) | Efficient Finetuning of QLoRA LLMs. |
| ColossalAI | [ColossalChat](https://github.com/hpcaitech/ColossalAI/blob/main/applications/Chat/README.md) | An open-source low cost solution for cloning [ChatGPT](https://openai.com/blog/chatgpt/) with a complete RLHF pipeline. |
| microsoft | [deepspeed-chat](https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat) | Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. |
| LAION-AI | [Open Assistant](https://github.com/LAION-AI/Open-Assistant) | a project meant to give everyone access to a great chat based large language model. |
| HKUST | [LMFlow](https://github.com/OptimalScale/LMFlow) | an extensible, convenient, and efficient toolbox for finetuning large machine learning models,
designed to be user-friendly, speedy and reliable, and accessible to the entire community. |
| UCB | [EasyLM](https://github.com/young-geng/EasyLM) | EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
EasyLM can scale up LLM training to hundreds of TPU/GPU accelerators by leveraging JAX's pjit functionality. |
| @CogStack | [OpenGPT](https://github.com/CogStack/opengpt) | A framework for creating grounded instruction based datasets and training conversational domain expert Large Language Models (LLMs). |
| HugAILab | [HugNLP](https://github.com/HugAILab/HugNLP) | a unified and comprehensive NLP library based on HuggingFace Transformer. |
| ProjectD-AI | [LLaMA-Megatron-DeepSpeed](https://github.com/ProjectD-AI/LLaMA-Megatron-DeepSpeed) | Ongoing research training transformer language models at scale, including: BERT & GPT-2. |
| @PanQiWei | [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) | An easy-to-use LLMs quantization package with user-friendly apis, based on GPTQ algorithm. |
| alibaba | [swift](https://github.com/modelscope/swift) | SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) is an extensible framework designed to facilitate lightweight model fine-tuning and inference.
It integrates implementations of various efficient fine-tuning methods, embracing approaches that are parameter-efficient, memory-efficient, and time-efficient. |
| alibaba | [Megatron-LLaMA](https://github.com/alibaba/Megatron-LLaMA) | to facilitate the training of LLaMA-based models and reduce the cost of occupying hardware resources,
Alibaba decided to release its internally optimized Megatron-LLaMA training framework to the community. |
| @OpenLLMAI | [OpenRLHF](https://github.com/OpenLLMAI/OpenRLHF) | OpenRLHF aims to develop a **high-performance RLHF training framework** based on Ray and DeepSpeed.
OpenRLHF is the **simplest** high-performance RLHF library that supports 34B-model RLHF training on a single DGX A100 ([script](https://github.com/OpenLLMAI/OpenRLHF/blob/main/examples/scripts/train_ppo_llama_ray_34b.sh)).
The key idea of OpenRLHF is to distribute the Actor Model, Reward Model, Reference Model, and the Critic Model onto separate GPUs using Ray,
while placing the Adam Optimizer on the CPU. This enables full-scale fine-tuning of 7B models across multiple 24GB RTX4090 GPUs
(or 34B models with multiple A100 80G), with high training efficiency thanks to the ability to use a large generate batch size with Adam Offload and Ray.
**Its PPO performance with the 13B Llama-2 models is 4 times that of DeepSpeedChat.** |
| @zejunwang1 | [LLMTuner](https://github.com/zejunwang1/LLMTuner) | LLMTuner is an LLM instruction tuning tool that supports LoRA, QLoRA and full-parameter fine-tuning. During training, FlashAttention and xFormers attention
can be used to improve training efficiency, and combined with techniques such as LoRA, DeepSpeed ZeRO, gradient checkpointing and 4-bit quantization, it effectively
reduces GPU memory usage, making it possible to fine-tune 7B/13B/34B models on a single card (A100/A40/A30/RTX3090/V100). |
| Shanghai AI Lab | [XTuner](https://github.com/InternLM/xtuner) | A toolkit for efficiently fine-tuning LLM (InternLM, Llama, Baichuan, QWen, ChatGLM2). |
| alibaba | [MFTCoder](https://github.com/codefuse-ai/MFTCoder) | **CodeFuse-MFTCoder** is an open-source project of CodeFuse for multitasking Code-LLMs (large language models for code tasks),
which includes models, datasets, training codebases and inference guides. |
| facebook | [llama-recipes](https://github.com/facebookresearch/llama-recipes) | Examples and recipes for Llama 2 model. |
| microsoft | [MS-AMP](https://github.com/Azure/MS-AMP) | The FP8-LM framework is highly optimized and uses the FP8 format throughout the forward and backward passes, which greatly reduces the system's computing, memory and communication overhead. |

# Alignment
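The table below collects alignment methods. As one concrete example, the DPO objective (stanford row) needs only the summed log-probabilities of the preferred (chosen) and dispreferred (rejected) responses under the policy and a frozen reference model; a minimal sketch of the per-batch loss:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss from summed token log-probs of each response (one value per example)."""
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# toy numbers: the policy already prefers the chosen response slightly more than the reference does
print(dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.8]),
               torch.tensor([-12.9]), torch.tensor([-15.1])))
```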
| contributor | method | used in | main feature |
| ----------- | ---------------------------------------------------------------------------------------- | --------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| - | [IFT](https://arxiv.org/pdf/2109.01652.pdf) | [ChatGPT](https://openai.com/blog/chatgpt/) | Instruction Fine-Tuning. |
| - | [RLHF](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback) | [ChatGPT](https://openai.com/blog/chatgpt/) | RL from Human Feedback. |
| Anthropic | [RLAIF](https://arxiv.org/abs/2212.08073) | [Claude](https://www.anthropic.com/index/introducing-claude) | RL from AI Feedback. |
| alibaba | [RRHF](https://arxiv.org/pdf/2304.05302v1.pdf) | [Wombat](https://github.com/GanjinZero/RRHF) | a novel learning paradigm called RRHF is proposed as an alternative to RLHF; it scores responses generated by
different sampling policies and learns to align them with human preferences through a ranking loss. Its performance
is comparable to RLHF, with fewer models used in the process. |
| HKUST | [RAFT](https://optimalscale.github.io/LMFlow/examples/raft.html) | - | RAFT is a new alignment algorithm, which is more efficient than conventional (PPO-based) RLHF. |
| IBM/CMU/MIT | [SELF-ALIGN](https://arxiv.org/abs/2305.03047) | [Dromedary](https://github.com/IBM/Dromedary) | combines principle-driven reasoning and the generative power of LLMs for the self-alignment of AI agents with minimal human supervision. |
| PKU | [CVA](https://github.com/PKU-Alignment/safe-rlhf#constrained-value-alignment-via-safe-rlhf) | [Beaver](https://github.com/PKU-Alignment/safe-rlhf) | Constrained Value Alignment via Safe RLHF. |
| tencent | [RLTF](https://github.com/Zyq-scut/RLTF) | - | Reinforcement Learning from Unit Test Feedback. |
| stanford | [DPO](https://arxiv.org/abs/2305.18290) | - | implicitly optimizes the same objective as existing RLHF algorithms (reward maximization with a KL-divergence constraint) but is simple to implement and straightforward to train. Intuitively,
the DPO update increases the relative log probability of preferred to dispreferred responses, but it incorporates a dynamic, per-example importance weight that prevents the model degeneration that we find occurs with a naive probability ratio objective. |
| THU | [BPO](https://github.com/thu-coai/BPO) | - | The central idea behind BPO is to create an automatic prompt optimizer that rewrites human prompts, which are usually less organized or ambiguous, to prompts that better deliver human intent.
Consequently, these prompts could be more LLM-preferred and hence yield better human-preferred responses. Empirical results demonstrate that BPO-aligned ChatGPT yields a 22% increase in win rate against its original version, and 10% for GPT-4. |
| AI2, etc. | [URIAL](https://github.com/Re-Align/urial) | - | URIAL (**U**ntuned LLMs with **R**estyled **I**n-context **AL**ignment) is a simple, *tuning-free* alignment method. URIAL achieves effective alignment purely through in-context learning (ICL), requiring as few as three constant stylistic examples and a system prompt to reach performance comparable with SFT/RLHF. |
| openai | [weak-to-strong](https://github.com/openai/weak-to-strong) | - | when strong pretrained models are naively finetuned on labels generated by a weak model, they consistently perform better than their weak supervisors. |

# Multi-Language
## vocabulary expansion
according to the official [FAQ](https://github.com/facebookresearch/llama/blob/main/FAQ.md#4-other-languages) in the LLaMA repo, there are not many tokens for languages other than those written in the Latin script, so one line of effort is to expand the vocabulary; some works are shown in the table below:
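As a minimal sketch of the mechanics with Hugging Face `transformers` (the token list below is purely illustrative; real projects add tens of thousands of tokens trained with SentencePiece and then continue pre-training so the new embeddings become meaningful):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"       # any causal LM checkpoint works; try "gpt2" for a quick test

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

new_tokens = ["你好", "世界", "大模型"]   # illustrative only; real lists come from a tokenizer trained on the target language
num_added = tokenizer.add_tokens(new_tokens)

# grow the input (and tied output) embedding matrix; the new rows start randomly initialized
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, vocab size is now {len(tokenizer)}")
```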
| contributor | model/project | language | base model | main feature |
| -------------- | ------------------------------------------------------------------ | -------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| @ymcui | [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca) | zh | LLaMA | expands LLaMA's vocabulary with Chinese tokens, continues pre-training on Chinese corpora, and releases instruction-tuned Chinese Alpaca variants. |
| SZU | [Linly](https://github.com/CVI-SZU/Linly) | en/zh | LLaMA | full-size LLaMA, further pretrained on Chinese corpora. |
| @Neutralzz | [BiLLa](https://github.com/Neutralzz/BiLLa) | en/zh | LLaMA-7B | further pretrained on [Wudao](https://www.sciencedirect.com/science/article/pii/S2666651021000152), [PILE](https://arxiv.org/abs/2101.00027), and [WMT](https://www.statmt.org/wmt22/translation-task.html). |
| @pengxiao-song | [LaWGPT](https://github.com/pengxiao-song/LaWGPT) | zh | LLaMA/ChatGLM | expand the vocab with Chinese legal terminologies, instruction fine-tuned on data generated using self-instruct. |
| IDEA | [Ziya](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1) | en/zh | LLaMA | large-scale pre-trained model based on LLaMA with 13 billion parameters.
The LLaMA tokenizer is optimized for Chinese, and the LLaMA-13B model is incrementally trained on 110 billion additional tokens,
which significantly improves Chinese understanding and generation ability. |
| OpenBuddy | [OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) | multi | LLaMA/Falcon ... | Built upon TII's Falcon model and Facebook's LLaMA model, OpenBuddy is fine-tuned to include an extended vocabulary,
additional common characters, and enhanced token embeddings. By leveraging these improvements and multi-turn dialogue datasets,
OpenBuddy offers a robust model capable of answering questions and performing translation tasks across various languages. |
| FDU | [CuteGPT](https://github.com/Abbey4799/CuteGPT) | en/zh | LLaMA | CuteGPT expands the Chinese vocabulary and performs pre-training on the Llama model, improving its ability to understand Chinese.
Subsequently, it is fine-tuned with conversational instructions to enhance the model's ability to understand instructions. |
| FlagAlpha | [FlagAlpha](https://github.com/FlagAlpha/Llama2-Chinese) | en/zh | LLaMA/LLaMA2 | based on large-scale Chinese data, and starting from pre-training, the Chinese abilities of the models are continuously and iteratively upgraded. |

# Efficient Training/Fine-Tuning
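Most methods in the table below train only a small number of extra parameters on top of a frozen model. As a minimal sketch of the LoRA idea (first row), the frozen weight `W` is augmented with a trainable low-rank update `B @ A` scaled by `alpha / r`:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (illustrative, not a library API)."""
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)  # frozen W
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # trainable
        self.B = nn.Parameter(torch.zeros(d_out, r))          # trainable, zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return x @ self.weight.T + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(512, 512)
print(layer(torch.randn(4, 512)).shape)   # torch.Size([4, 512])
print(sum(p.numel() for p in layer.parameters() if p.requires_grad), "trainable parameters")
```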
| contributor | method | main feature |
| --------------- | ------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| microsoft | [LoRA](https://arxiv.org/abs/2106.09685) | Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices
into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. |
| stanford | [Prefix Tuning](https://aclanthology.org/2021.acl-long.353/) | a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen
and instead optimizes a sequence of continuous task-specific vectors, which we call the prefix. |
| THU | [P-Tuning](https://arxiv.org/abs/2103.10385) | P-tuning leverages few continuous free parameters to serve as prompts fed as the input to the pre-trained language models.
We then optimize the continuous prompts using gradient descent as an alternative to discrete prompt searching. |
| THU, etc. | [P-Tuning v2](https://arxiv.org/pdf/2110.07602.pdf) | a novel empirical finding that properly optimized prompt tuning can be comparable to fine-tuning universally across various model scales and NLU tasks.
Technically, P-tuning v2 is not conceptually novel. It can be viewed as an optimized and adapted implementation of Deep Prompt Tuning. |
| Google | [Prompt Tuning](https://arxiv.org/abs/2104.08691) | a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks.
Prompt Tuning can be seen as a simplification of "prefix tuning". |
| microsoft, etc. | [AdaLoRA](https://arxiv.org/abs/2303.10512) | adaptively allocates the parameter budget among weight matrices according to their importance score.
In particular, AdaLoRA parameterizes the incremental updates in the form of singular value decomposition. |
| UW | [QLoRA](https://github.com/artidoro/qlora) | an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving
full 16-bit finetuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). |
| FDU | [LOMO](https://github.com/OpenLMLab/LOMO) | a new optimizer, **LO**w-Memory **O**ptimization (**LOMO**), which fuses the gradient computation and the parameter update in one step to reduce memory usage,
which enables the full parameter fine-tuning of a 7B model on a single RTX 3090, or a 65B model on a single machine with 8×RTX 3090, each with 24GB memory. |
| MBZUAI, etc. | [GLoRA](https://github.com/Arnav0400/ViT-Slim/tree/master/GLoRA) | Enhancing Low-Rank Adaptation (LoRA), GLoRA employs a generalized prompt module to optimize pre-trained model weights and adjust intermediate activations,
providing more flexibility and capability across diverse tasks and datasets. |
| UMass Lowell | [ReLoRA](https://github.com/Guitaricet/relora) | ReLoRA performs a high-rank update and achieves performance similar to regular neural network training.
The components of ReLoRA include initial full-rank training of the neural network, LoRA training, restarts, a jagged learning rate schedule, and partial optimizer resets. |
| Huawei | [QA-LoRA](https://arxiv.org/pdf/2309.14717.pdf) | equips the original LoRA with two-fold abilities:
(i) during fine-tuning, the LLM’s weights are quantized (e.g., into INT4) to reduce time and memory usage;
(ii) after fine-tuning, the LLM and auxiliary weights are naturally integrated into a quantized model without loss of accuracy. |
| UMD, etc. | [NEFTune](https://github.com/neelsjain/NEFTune/tree/main) | we propose to add random noise to the embedding vectors of the training data during the forward pass of fine-tuning. We show that this simple trick
can improve the outcome of instruction fine-tuning, often by a large margin, with no additional compute or data overhead. |
| THU | [SoRA](https://github.com/TsinghuaC3I/SoRA) | sparse low-rank adaptation (SoRA) that enables dynamic adjustments to the intrinsic rank during the adaptation process. We achieve this through the incorporation of
a gate unit optimized with proximal gradient method in the training stage, controlling the cardinality of rank under the sparsity of the gate. In the subsequent inference stage,
we eliminate the parameter blocks corresponding to the zeroed-out ranks, to reduce each SoRA module back to a concise yet rank-optimal LoRA.
experimental results demonstrate that SoRA can outperform other baselines even with 70% retained parameters and 70% training time. |
| FDU, etc. | [O-LoRA](https://github.com/cmnfriend/O-LoRA) | O-LoRA mitigates catastrophic forgetting of past task knowledge by constraining the gradient updates of the current task to be orthogonal to the gradient subspace of the past tasks. |
| TUDB-Labs | [mLoRA](https://github.com/TUDB-Labs/multi-lora-fine-tune) | m-LoRA (a.k.a Multi-Lora Fine-Tune) is an open-source framework for fine-tuning Large Language Models (LLMs) using the efficient multiple LoRA/QLoRA methods. Key features of m-LoRA include:
1. Efficient LoRA/QLoRA: Optimizes the fine-tuning process, significantly reducing GPU memory usage by leveraging a shared frozen base model.
2. Multiple LoRA Adapters: Support for concurrent fine-tuning of multiple LoRA/QLoRA adapters.
3. LoRA-based Mix-of-Experts: Support for [MixLoRA](https://github.com/TUDB-Labs/multi-lora-fine-tune/blob/main/MixLoRA.md), which implements a Mix-of-Experts architecture based on multiple LoRA adapters for the frozen FFN layers. |

# Low-Cost Inference
## quantization
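Before the method-specific entries below, a minimal sketch of the basic idea behind weight quantization (symmetric per-tensor INT8 absmax); the methods in the table are considerably more sophisticated (grouping, outlier handling, non-uniform codebooks, etc.):

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor absmax quantization to INT8."""
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4, 4)
q, s = quantize_int8(w)
print((w - dequantize(q, s)).abs().max())   # small reconstruction error
```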
| contributor | algorithm | main feature |
| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UW, etc. | [SpQR](https://github.com/Vahe1994/SpQR) | a new compressed format and quantization technique which enables for the first time near-lossless compression of LLMs across model scales,
while reaching similar compression levels to previous methods. |
| THU | [Train_Transformers_with_INT4](https://github.com/xijiu9/Train_Transformers_with_INT4) | For forward propagation, we identify the challenge of outliers and propose a Hadamard quantizer to suppress the outliers.
For backpropagation, we leverage the structural sparsity of gradients by proposing bit splitting and leverage score sampling techniques to quantize gradients accurately. |
| INTEL | [neural-compressor](https://github.com/intel/neural-compressor) | aims to provide unified APIs for network compression technologies, such as low-precision quantization, sparsity, pruning, and knowledge distillation,
across different deep learning frameworks to pursue optimal inference performance. |
| INTEL | [intel-extension-for-transformers](https://github.com/intel/intel-extension-for-transformers) | Intel® Extension for Transformers is an innovative toolkit to accelerate Transformer-based models on Intel platforms, in particular effective on 4th-gen Intel Xeon Scalable processors (Sapphire Rapids). |
| UCB | [KVQuant](https://github.com/SqueezeAILab/KVQuant/) | **Per-channel, Pre-RoPE** Key quantization to better match the outlier channels in Keys; Non-Uniform Quantization (**NUQ**) to better represent the non-uniform activations; **Dense-and-Sparse Quantization** to mitigate the impacts of numerical outliers on quantization difficulty; **Q-Norm** to mitigate distribution shift at ultra-low precisions (e.g. 2-bit); KVQuant enables serving the **LLaMA-7B model with 1M context length on a single A100-80GB GPU**, or even the **LLaMA-7B model with 10M context length on an 8-GPU system** 🔥 |

## projects
| contributor | project | main feature |
| -------------------------------- | ------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| @ggerganov | [llama.cpp](https://github.com/ggerganov/llama.cpp) | C/C++ implementation of LLaMA and several other models, with quantization. |
| @NouamaneTazi | [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp) | C++ implementation for BLOOM inference. |
| @mlc-ai | [MLC LLM](https://github.com/mlc-ai/mlc-llm) | a universal solution that allows any language model to be deployed natively on a diverse set of hardware backends and native applications,
plus a productive framework for everyone to further optimize model performance for their own use cases. |
| alibaba | [ChatGLM-MNN](https://github.com/wangzhaode/ChatGLM-MNN) | converts the ChatGLM-6B model to MNN and performs inference using C++. |
| Jittor | [JittorLLMs](https://github.com/Jittor/JittorLLMs) | significantly reduces hardware costs (by 80%); claimed to be the lowest-cost deployment library; supports multiple platforms. |
| OpenBMB | [BMInf](https://github.com/OpenBMB/BMInf) | BMInf supports running models with more than 10 billion parameters on a single NVIDIA GTX 1060 GPU in its minimum requirements.
In cases where the GPU memory supports the large model inference (such as V100 or A100),
BMInf still has a significant performance improvement over the existing PyTorch implementation. |
| hpcaitech | [EnergonAI](https://github.com/hpcaitech/EnergonAI) | With tensor parallel operations, a pipeline parallel wrapper, distributed checkpoint loading, and customized CUDA kernels,
EnergonAI can enable efficient parallel inference for large-scale models. |
| MegEngine | [InferLLM](https://github.com/MegEngine/InferLLM) | a lightweight LLM inference framework that mainly references and borrows from [the llama.cpp project](https://github.com/ggerganov/llama.cpp).
llama.cpp puts almost all core code and kernels in a single file and uses a large number of macros, making it difficult for developers to read and modify. |
| @saharNooby | [rwkv.cpp](https://github.com/saharNooby/rwkv.cpp) | a port of [BlinkDL/RWKV-LM](https://github.com/BlinkDL/RWKV-LM) to [ggerganov/ggml](https://github.com/ggerganov/ggml). |
| FMInference | [FlexGen](https://github.com/FMInference/FlexGen) | FlexGen is a high-throughput generation engine for running large language models with limited GPU memory.
FlexGen allows **high-throughput** generation by IO-efficient offloading, compression, and **large effective batch sizes**. |
| huggingface
bigcode-project | [starcoder.cpp](https://github.com/bigcode-project/starcoder.cpp) | C++ implementation of 💫 StarCoder inference using the [ggml](https://github.com/ggerganov/ggml) library. |
| CMU | [SpecInfer](https://github.com/flexflow/FlexFlow/tree/inference) | SpecInfer is an open-source distributed multi-GPU system that accelerates generative LLM inference with **speculative inference** and **token tree verification**.
A key insight behind SpecInfer is to combine various collectively boost-tuned small speculative models (SSMs) to jointly predict the LLM’s outputs. |
| @ztxz16 | [fastllm](https://github.com/ztxz16/fastllm) | full-platform pure c++ llm acceleration library, supports moss, chatglm, baichuan models, runs smoothly on mobile phones. |
| UCB | [vllm](https://github.com/vllm-project/vllm) | a fast and easy-to-use library for LLM inference and serving; fast thanks to efficient management of attention key and value memory with **PagedAttention**. |
| @abacaj | [mpt-30B-inference](https://github.com/abacaj/mpt-30B-inference) | Run inference on the MPT-30B model using your CPU. This inference code uses a [ggml](https://github.com/ggerganov/ggml) quantized model. |
| Shanghai AI Lab | [lmdeploy](https://github.com/InternLM/lmdeploy) | a toolkit for compressing, deploying, and serving LLM. |
| @turboderp | [ExLlama](https://github.com/turboderp/exllama) / [ExLlamaV2](https://github.com/turboderp/exllamav2) | A fast inference library for running LLMs locally on modern consumer-class GPUs |
| PyTorch | [ExecuTorch](https://github.com/pytorch/executorch) | End-to-end solution for enabling on-device AI across mobile and edge devices for PyTorch models. |
| Xorbitsai | [Xinference](https://github.com/xorbitsai/inference) | a powerful and versatile library designed to serve language, speech recognition, and multimodal models.
With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. |
| NVIDIA | [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) | TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build [TensorRT](https://developer.nvidia.com/tensorrt) engines that contain
state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes
that execute those TensorRT engines. It also includes a [backend](https://github.com/triton-inference-server/tensorrtllm_backend) for integration with the [NVIDIA Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server); a production-quality system to serve LLMs.
Models built with TensorRT-LLM can be executed on a wide range of configurations going from a single GPU to
multiple nodes with multiple GPUs (using [Tensor Parallelism](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/parallelisms.html#tensor-parallelism) and/or [Pipeline Parallelism](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/parallelisms.html#pipeline-parallelism)). |
| @sabetAI | [Batched LoRAs](https://github.com/sabetAI/BLoRA) | Maximize GPU utilization by routing inference through multiple LoRAs in the same batch. |
| huggingface | [TGI](https://github.com/huggingface/text-generation-inference) | Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs,
including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and [more](https://huggingface.co/docs/text-generation-inference/supported_models). TGI implements many features, such as:
- Simple launcher to serve most popular LLMs
- Production ready (distributed tracing with Open Telemetry, Prometheus metrics)
- Tensor Parallelism for faster inference on multiple GPUs
- Token streaming using Server-Sent Events (SSE)
- Continuous batching of incoming requests for increased total throughput
- Optimized transformers code for inference using [Flash Attention](https://github.com/HazyResearch/flash-attention) and [Paged Attention](https://github.com/vllm-project/vllm) on the most popular architectures |
| microsoft | [DeepSpeed-MII](https://github.com/microsoft/DeepSpeed-MII) | Under the hood, MII is powered by [DeepSpeed-Inference](https://arxiv.org/abs/2207.00032). Based on model type, model size, batch size, and available hardware resources, MII automatically applies the appropriate set of
system optimizations from DeepSpeed-Inference to minimize latency and maximize throughput. It does so by using one of many pre-specified model injection policies that allow MII and
DeepSpeed-Inference to identify the underlying PyTorch model architecture and replace it with an optimized implementation. In doing so, MII makes the expansive set of
optimizations in DeepSpeed-Inference automatically available for thousands of popular models that it supports. |
| flexflow | [FlexFlow](https://github.com/flexflow/FlexFlow/) | A key technique that enables FlexFlow Serve to accelerate LLM serving is speculative inference, which combines various collectively boost-tuned small speculative models (SSMs)
to jointly predict the LLM’s outputs; the predictions are organized as a token tree, whose nodes each represent a candidate token sequence. The correctness of all candidate token sequences
represented by a token tree is verified against the LLM’s output in parallel using a novel tree-based parallel decoding mechanism. FlexFlow Serve uses an LLM as a token tree verifier instead of
an incremental decoder, which largely reduces the end-to-end inference latency and computational requirement for serving generative LLMs while provably preserving model quality. |
| BentoML | [BentoML](https://github.com/bentoml/BentoML) | an open platform for machine learning in production. It simplifies model packaging and model management, optimizes model serving workloads
to run at production scale, and accelerates the creation, deployment, and monitoring of prediction services. |
| @ModelTC | [LightLLM](https://github.com/ModelTC/lightllm) | a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. |
| @FasterDecoding | [Medusa](https://github.com/FasterDecoding/Medusa) | Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads. |
| UCB, etc. | [S-LoRA](https://github.com/S-LoRA/S-LoRA) | S-LoRA stores all adapters in the main memory and fetches the adapters used by the currently running queries to the GPU memory. To efficiently use the GPU memory and reduce fragmentation,
S-LoRA proposes Unified Paging. Unified Paging uses a unified memory pool to manage dynamic adapter weights with different ranks and KV cache tensors with varying sequence lengths.
Additionally, S-LoRA employs a novel tensor parallelism strategy and highly optimized custom CUDA kernels for heterogeneous batching of LoRA computation. Collectively, these features enable S-LoRA
to serve thousands of LoRA adapters on a single GPU or across multiple GPUs with a small overhead. Compared to state-of-the-art libraries such as HuggingFace PEFT and vLLM (with naive support of LoRA serving),
S-LoRA can improve the throughput by up to 4 times and increase the number of served adapters by several orders of magnitude. As a result, S-LoRA enables scalable serving of many task-specific fine-tuned models
and offers the potential for large-scale customized fine-tuning services. |
| @lyogavin | [AirLLM](https://github.com/lyogavin/Anima/tree/main/air_llm) | When executing at a certain layer, the corresponding layer will be loaded from the hard drive, and the calculation of that layer will be performed. Once the calculation is complete,
the memory of that layer can be completely released. This way, the GPU memory usage will only be approximately the size of one layer of transformer parameters. |
| UW, etc. | [Punica](https://github.com/punica-ai/punica) | We present Punica, a system to serve multiple LoRA models in a shared GPU cluster. Punica contains a new CUDA kernel design that allows batching of GPU operations for different LoRA models.
This allows a GPU to hold only a single copy of the underlying pre-trained model when serving multiple, different LoRA models, significantly enhancing GPU efficiency in terms of both memory and computation.
Our scheduler consolidates multi-tenant LoRA serving workloads in a shared GPU cluster. With a fixed-sized GPU cluster, our evaluations show that Punica achieves 12x higher throughput in serving multiple LoRA models
compared to state-of-the-art LLM serving systems while only adding 2ms latency per token. |
| alibaba | [MergeLM](https://github.com/yule-BUAA/MergeLM) | In this work, we uncover that Language Models (LMs), either encoder- or decoder-based, can **obtain new capabilities by assimilating the parameters of homologous models without the need for retraining or GPUs**.
1. We introduce a novel operation called **DARE** to directly set most of (90% or even 99%) the delta parameters to zeros without affecting the capabilities of SFT LMs.
2. We sparsify delta parameters of multiple SFT homologous models with DARE as a **general preprocessing technique** and subsequently merge them into a single model by parameter averaging. |
| ETH Zürich | [UltraFastBERT](https://github.com/pbelcak/UltraFastBERT) | a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference.
This is achieved by replacing feedforward networks with fast feedforward networks (FFFs). |
| UCB, etc. | [LookaheadDecoding](https://github.com/hao-ai-lab/LookaheadDecoding) | Lookahead decoding breaks the sequential dependency in autoregressive decoding by concurrently extracting and verifying n-grams directly with the LLM, utilizing the [Jacobi iteration method](https://en.wikipedia.org/wiki/Jacobi_method).
Lookahead decoding functions **without** the need for a draft model or a data store. It linearly decreases the number of decoding steps directly correlating with the log(FLOPs) used per decoding step. |
| Intel | [BigDL](https://github.com/intel-analytics/BigDL) | **[`bigdl-llm`](https://github.com/intel-analytics/BigDL/blob/main/python/llm)** is a library for running **LLM** (large language model) on Intel **XPU** (from *Laptop* to *GPU* to *Cloud* ) using **INT4/FP4/INT8/FP8** with very low latency (for any **PyTorch** model). |
| SenseTime, etc. | [LightLLM](https://github.com/ModelTC/lightllm) | 1. Tri-process asynchronous collaboration: tokenization, model inference, and detokenization are performed asynchronously, leading to a considerable improvement in GPU utilization.
2. [Token Attention](https://github.com/ModelTC/lightllm/blob/main/docs/TokenAttention.md): implements a token-wise KV cache memory management mechanism, allowing for zero memory waste during inference.
3. High-performance Router: collaborates with Token Attention to meticulously manage the GPU memory of each token, thereby optimizing system throughput. |
| THU, etc. | [SoT](https://github.com/imagination-research/sot/) | to guide LLMs to generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-ups across 12 LLMs, but it can also potentially improve the answer quality on several question categories. |
| ollama | [ollama](https://github.com/ollama/ollama?tab=readme-ov-file) | Docker-style interaction for local LLM inference. |
| alibaba | [RTP-LLM](https://github.com/alibaba/rtp-llm) | Alibaba's high-performance LLM inference engine for diverse applications. The project is mainly based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), and on this basis, we have integrated some kernel implementations from [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM). FasterTransformer and TensorRT-LLM have provided us with reliable performance guarantees. [Flash-Attention2](https://github.com/Dao-AILab/flash-attention) and [cutlass](https://github.com/NVIDIA/cutlass) have also provided a lot of help in our continuous performance optimization process. Our continuous batching and increment decoding draw on the implementation of [vllm](https://github.com/vllm-project/vllm); sampling draws on [transformers](https://github.com/huggingface/transformers), with speculative sampling integrating [Medusa](https://github.com/FasterDecoding/Medusa)'s implementation, and the multimodal part integrating implementations from [llava](https://github.com/haotian-liu/LLaVA) and [qwen-vl](https://github.com/QwenLM/Qwen-VL). |
| Tencent | [KsanaLLM (一念LLM)](https://github.com/pcg-mlp/KsanaLLM/tree/main) | KsanaLLM (一念LLM) is a high-performance, easy-to-use engine for LLM inference and serving.
High performance and throughput:
uses highly optimized CUDA kernels, including high-performance kernels from vLLM, TensorRT-LLM, FasterTransformer and other projects;
efficient GPU memory management for attention keys and values based on PagedAttention;
dynamic batching with finely tuned task scheduling and memory usage;
(experimental) prefix caching support. |

## Prompt Compression
| contributor | project | main feature |
| ----------- | ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| microsoft | [LLMLingua](https://github.com/microsoft/LLMLingua) | LLMLingua uses a well-trained small language model after alignment, such as GPT2-small or LLaMA-7B, to detect unimportant tokens in the prompt and enable inference with the compressed prompt in black-box LLMs, achieving up to 20x compression with minimal performance loss. |

# Prompting
[Prompt Engineering Guide](https://www.promptingguide.ai/)
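A minimal illustration of the difference between a direct prompt and a chain-of-thought (CoT) prompt, the first technique in the table below; the worked example follows the style of the CoT paper:

```python
direct_prompt = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?\n"
    "A:"
)

cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. How many tennis balls does he have?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.\n\n"
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?\n"
    "A:"  # with the worked demonstration, the model is nudged to reason step by step (23 - 20 + 6 = 9)
)

print(cot_prompt)
```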
| contributor | method | main feature |
| --------------- | --------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Google | [CoT](https://arxiv.org/pdf/2201.11903.pdf) | a technique that allows large language models (LLMs) to solve a problem as a series of intermediate steps before giving a final answer. |
| Princeton, etc. | ToT ([Yao et al. (2023)](https://arxiv.org/abs/2305.10601) and [Long (2023)](https://arxiv.org/abs/2305.08291)) | ToT maintains a tree of thoughts, where thoughts represent coherent language sequences that serve as intermediate steps toward solving a problem.
This approach enables an LM to self-evaluate the progress intermediate thoughts make towards solving a problem through a deliberate reasoning process. |
| SJTU, etc. | [GoT](https://arxiv.org/pdf/2305.16582.pdf) | we propose Graph-of-Thought (GoT) reasoning, which models human thought processes not only as a chain but also as a graph. By representing thought units as nodes
and connections between them as edges, our approach captures the non-sequential nature of human thinking and allows for a more realistic modeling of thought processes. |
| Princeton, etc. | [ReAct](https://github.com/ysymyth/ReAct) | LLMs are used to generate both *reasoning traces* and *task-specific actions* in an interleaved manner. |
| SJTU | [Meta-CoT](https://github.com/Anni-Zou/Meta-CoT) | **Meta-CoT** is a generalizable CoT prompting method in mixed-task scenarios where the type of input questions is unknown. It consists of three phases:
(i) *scenario identification* : categorizes the scenario of the input question;
(ii) *demonstration selection* : fetches the ICL demonstrations for the categorized scenario;
(iii) *answer derivation* : performs the answer inference by feeding the LLM with the prompt comprising the fetched ICL demonstrations and the input question. |
| UCLA | [RaR](https://uclaml.github.io/Rephrase-and-Respond/) | we present a method named 'Rephrase and Respond' (RaR), which allows LLMs to rephrase and expand questions posed by humans and provide responses in a single prompt.
Our experiments demonstrate that this method significantly improves the performance of different models across a wide range of tasks. |
| CAS, etc. | [EmotionPrompt](https://arxiv.org/pdf/2307.11760.pdf) | Our automatic experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts (which we call “EmotionPrompt” that combines the original prompt with emotional stimuli),
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks. |
| Meta | [S2A](https://arxiv.org/pdf/2311.11829.pdf) | S2A regenerates the input context to only include the relevant portions, before attending to the regenerated context to elicit the final response. |
| Google | [Step-Back Prompting](https://arxiv.org/pdf/2310.06117.pdf) | We present STEP-BACK PROMPTING, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide the reasoning steps, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of STEP-BACK PROMPTING with PaLM-2L models and observe substantial performance gains on a wide range of challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, STEP-BACK PROMPTING improves PaLM-2L performance on MMLU Physics and Chemistry by 7% and 11%, TimeQA by 27%, and MuSiQue by 7%. |

# Safety
# Safety
| contributor | method | main feature |
| ----------- | --------------------------------------------------------- | ----------------------------------------------------------------------- |
| thu-coai | [Safety-Prompts](https://github.com/thu-coai/Safety-Prompts) | Chinese safety prompts for evaluating and improving the safety of LLMs. |# Truthfulness
| contributor | method | main feature |
| ----------- | --------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Harvard | [ITI](https://github.com/likenneth/honest_llama) | ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads.
This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark.
On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5 to 65.1 (see the illustrative steering-hook sketch after this table). |
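ITI's core move is adding a learned direction to the outputs of a few attention heads at inference time. The sketch below shows a generic activation-steering forward hook in PyTorch; it illustrates the idea rather than reproducing the authors' code, and `model`, the layer indices, and the `directions` tensors are hypothetical placeholders that would come from trained probes.

```python
import torch

def make_steering_hook(direction: torch.Tensor, alpha: float = 5.0):
    """Return a forward hook that shifts a module's output along `direction`."""
    direction = direction / direction.norm()

    def hook(module, inputs, output):
        # output has shape (..., hidden_dim); add the scaled steering direction.
        return output + alpha * direction.to(dtype=output.dtype, device=output.device)

    return hook

# Hypothetical usage: steer the attention output projection of a few layers.
# for layer_idx in (12, 13, 14):
#     proj = model.model.layers[layer_idx].self_attn.o_proj
#     proj.register_forward_hook(make_steering_hook(directions[layer_idx]))
```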
# Exceeding Context Window
https://zhuanlan.zhihu.com/p/670280576
## Extending Context Window
| contributor | method | main feature |
| ---------------- | ----------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UW, etc. | [ALiBi](https://github.com/ofirpress/attention_with_linear_biases) | Instead of adding position embeddings at the bottom of the transformer stack,
ALiBi adds a linear bias to each attention score, allowing the model to be trained on,
for example, 1024 tokens, and then do inference on 2048 (or much more) tokens without any finetuning. |
| DeepPavlov, etc. | [RMT](https://arxiv.org/abs/2304.11062) | use a recurrent memory to extend the context length. |
| bytedance | [SCM](https://arxiv.org/abs/2304.11062) | unleash infinite-length input capacity for large-scale language models. |
| Meta | [Position Interpolation](https://arxiv.org/pdf/2306.15595.pdf) | extends the context window sizes of RoPE-based pretrained LLMs such as LLaMA models to up to 32768 with minimal fine-tuning (within 1000 steps).
Position Interpolation linearly down-scales the input position indices to match the original context window size, rather than extrapolating beyond
the trained context length which may lead to catastrophically high attention scores that completely ruin the self-attention mechanism. |
| UCB | [LongChat](https://github.com/DachengLi1/LongChat) | Instead of forcing the LLaMA model to adapt to position_ids > 2048, we condense position_ids > 2048 to be within 0 to 2048 (the same mechanism as [Position Interpolation](https://arxiv.org/pdf/2306.15595.pdf), surprisingly!); see the position-rescaling sketch after this table.
We observed that our LongChat-13B-16K model reliably retrieves the first topic, with comparable accuracy to gpt-3.5-turbo. |
| microsoft | [LongNet](https://github.com/microsoft/unilm/tree/master#revolutionizing-transformers-for-mllms-and-agi) | replaces the attention of vanilla Transformers with a novel component named **dilated attention**, successfully scaling the sequence length to 1 billion tokens. |
| IDEAS NCBR, etc. | [LongLLaMA](https://github.com/CStanKonrad/long_llama) | LongLLaMA is built upon the foundation of [OpenLLaMA](https://github.com/openlm-research/open_llama) and fine-tuned using the [Focused Transformer (FoT)](https://arxiv.org/abs/2307.03170) method, and is capable of handling long contexts of 256k tokens or even more. |
| Abacus.AI | [Giraffe](https://huggingface.co/abacusai/Giraffe-v2-13b-32k) | a range of experiments with different schemes for extending context length capabilities of Llama are conducted. |
| TogetherComputer | [Llama-2-7B-32K-Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct) | long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K), over high-quality instruction and chat data. |
| Jianlin Su | [ReRoPE](https://github.com/bojone/rerope) | set a window with size $w$; the interval between positions inside the window is **1**, while the interval outside the window is $\frac 1 k$. |
| CUHK/MIT | [longlora](https://github.com/dvlab-research/longlora) | an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs), with limited computation cost. |
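Position Interpolation (and LongChat's position-id condensation) amounts to linearly rescaling position indices so they never exceed the pre-training context window. A minimal sketch, assuming a RoPE model trained on 2048 tokens; real implementations apply this inside the rotary-embedding computation rather than as a standalone helper.

```python
import torch

def interpolate_position_ids(seq_len: int, trained_ctx: int = 2048) -> torch.Tensor:
    """Condense position ids for a long input into the trained range.

    Each position m is rescaled to m * trained_ctx / seq_len, so RoPE only ever
    sees positions it was trained on; values stay float because RoPE accepts
    non-integer positions.
    """
    positions = torch.arange(seq_len, dtype=torch.float32)
    if seq_len <= trained_ctx:
        return positions
    return positions * (trained_ctx / seq_len)

# An 8192-token input is mapped onto the original 0..2048 range:
# interpolate_position_ids(8192)[-1] == 2047.75
```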
## Without Extending Context Window
| contributor | method | main feature |
| ------------ | ------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| MIT/Meta/CMU | [StreamingLLM](https://github.com/mit-han-lab/streaming-llm)/
[SwiftInfer](https://github.com/hpcaitech/SwiftInfer) | deploy LLMs for **infinite-length inputs** without sacrificing efficiency and performance.
an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence length without any fine-tuning.
We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more.
In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment.
In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup.
SwiftInfer: implements StreamingLLM on top of [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM); see the cache-eviction sketch after this table. |
| UCB | [Ring Attention](https://browse.arxiv.org/pdf/2310.01889.pdf) | We present a distinct approach, Ring Attention, which leverages blockwise computation of self-attention to distribute long sequences across multiple devices while overlapping the communication of
key-value blocks with the computation of blockwise attention. Ring Attention enables training and inference of sequences that are up to device count times longer than those of prior memory-efficient Transformers,
effectively eliminating the memory constraints imposed by individual devices. Extensive experiments on language modeling tasks demonstrate the effectiveness of Ring Attention in allowing large sequence input size
and improving performance. |
| UCB | [MemGPT](https://github.com/cpacker/MemGPT) | a system that intelligently manages different memory tiers in LLMs in order to effectively provide extended context within the LLM's limited context window.
For example, MemGPT knows when to push critical information to a vector database and when to retrieve it later in the chat, enabling perpetual conversations. |
| FDU, etc. | [ScalingRoPE](https://github.com/OpenLMLab/scaling-rope) | we first observe that fine-tuning a RoPE-based LLM with either a smaller or larger base in pre-training context length could significantly enhance its extrapolation performance.
After that, we propose Scaling Laws of RoPE-based Extrapolation, a unified framework from the periodic perspective,
to describe the relationship between the extrapolation performance and base value as well as tuning context length. |
| THU | [InfLLM](https://github.com/thunlp/InfLLM) | training-free and memory-based: InfLLM stores distant contexts in additional memory units and employs an efficient mechanism to look up token-relevant units for attention computation. Even when the sequence length is scaled to 1,024K tokens, InfLLM still effectively captures long-distance dependencies. |
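StreamingLLM's recipe is an eviction policy: always keep the first few tokens (the attention sinks) plus a sliding window of recent tokens, so the KV cache stays bounded. The sketch below only tracks the kept positions; an actual implementation evicts the corresponding key/value tensors, and the sink and window sizes here are illustrative.

```python
from collections import deque

class SinkWindowCache:
    """Keep the first `n_sink` positions plus a sliding window of recent ones."""

    def __init__(self, n_sink: int = 4, window: int = 8):
        self.n_sink = n_sink
        self.sink: list[int] = []                       # attention-sink positions
        self.recent: deque[int] = deque(maxlen=window)  # sliding window

    def append(self, position: int) -> None:
        if len(self.sink) < self.n_sink:
            self.sink.append(position)
        else:
            self.recent.append(position)                # deque drops the oldest entry

    def kept_positions(self) -> list[int]:
        return self.sink + list(self.recent)

cache = SinkWindowCache()
for pos in range(20):
    cache.append(pos)
print(cache.kept_positions())  # [0, 1, 2, 3, 12, 13, 14, 15, 16, 17, 18, 19]
```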
# Knowledge Editing
Must-read Papers on Model Editing: [ModelEditingPapers](https://github.com/zjunlp/ModelEditingPapers)
| contributor | method | main feature |
| ----------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| MIT, etc. | [ROME](https://rome.baulab.info/) | First, we trace the causal effects of hidden state activations within GPT using causal mediation analysis to identify the specific modules that mediate recall of a fact about a subject.
Our analysis reveals that feedforward MLPs at a range of middle layers are decisive when processing the last token of the subject name.
Second, we test this finding in model weights by introducing a Rank-One Model Editing method (ROME) to alter the parameters that determine a feedforward layer’s behavior at the decisive token.
Despite the simplicity of the intervention, we find that ROME is similarly effective to other model-editing approaches on a standard zero-shot relation extraction benchmark. |
## Implementations
| contributor | project | main feature |
| ----------- | -------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| PKU | [FastEdit](https://github.com/hiyouga/FastEdit) | injecting **fresh** and **customized** knowledge into large language models efficiently using one single command. |
| ZJU | [EasyEdit](https://github.com/zjunlp/EasyEdit) | a Python package for editing Large Language Models (LLMs) such as `GPT-J`, `Llama`, `GPT-NEO`, `GPT2`, and `T5` (supporting models from **1B** to **65B**);
the objective is to alter the behavior of LLMs efficiently within a specific domain without negatively impacting performance across other inputs. It is designed to be easy to use and easy to extend. |
# External Knowledge
allowing the model to access external knowledge, such as the internet, KGs, and databases.
| contributor | project | main feature |
| -------------- | --------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| @jerryjliu | [LlamaIndex](https://github.com/jerryjliu/llama_index) | provides a central interface to connect your LLMs with external data (see the minimal retrieval-augmented prompting sketch after this table). |
| @imClumsyPanda | [langchain-ChatGLM](https://github.com/imClumsyPanda/langchain-ChatGLM) | local-knowledge-based ChatGLM built with [langchain](https://github.com/hwchase17/langchain). |
| @wenda-LLM | [wenda](https://github.com/wenda-LLM/wenda) | an LLM calling platform designed to find and design automatic execution actions for small models with plug-in
knowledge bases, achieving generation ability comparable to large models. |
| @csunny | [DB-GPT](https://github.com/csunny/DB-GPT) | build a complete private large model solution for all database-based scenarios. |
| THU, BAAI, ZJU | [ChatDB](https://github.com/huchenxucs/ChatDB) | a novel framework integrating symbolic memory with LLMs. ChatDB explores ways of augmenting LLMs with symbolic memory to handle contexts of arbitrary lengths.
Such a symbolic memory framework is instantiated as an LLM with a set of SQL databases. The LLM generates SQL instructions to manipulate the SQL databases
autonomously (including insertion, selection, update, and deletion), aiming to complete a complex task requiring multi-hop reasoning and long-term symbolic memory. |
| IDEA | [Ziya-Reader](https://modelscope.cn/models/Fengshenbang/Ziya-Reader-13B-v1.0/) | "Ziya-Reader-13B-v1.0" is a knowledge question-answering model. It can accurately answer questions given questions and knowledge documents,
and is suitable for both multi-document and single-document question-answering. The model has an 8k context window and, compared to models with longer windows,
achieves superior results in evaluations across multiple long-text tasks. The tasks include multi-document question-answering, synthetic tasks (document retrieval), and long-text summarization.
Additionally, the model also demonstrates excellent generalization capabilities, enabling it to be used for general question-answering.
Its performance on our general ability evaluation set surpassed that of Ziya-Llama-13B. |
| docker | [GenAI Stack](https://github.com/docker/genai-stack) | significantly simplifies the entire process by integrating Docker with the Neo4j graph database, LangChain model linking technology, and Ollama for running large language models (LLMs). |
| UW, etc. | [Self-RAG](https://github.com/AkariAsai/self-rag) | Unlike the widely-adopted Retrieval-Augmented Generation (RAG) approach, **Self-RAG** retrieves on demand (e.g., it can retrieve multiple times or completely skip retrieval) given diverse queries,
and criticizes its own generation from multiple fine-grained aspects by predicting **reflection tokens** as an integral part of generation. |
| RUC | [StructGPT](https://github.com/JBoRu/StructGPT) | Inspired by the studies on tool augmentation for LLMs, we develop an Iterative Reading-then-Reasoning (IRR) framework to solve question answering tasks based on structured data, called StructGPT.
In this framework, we construct the specialized interfaces to collect relevant evidence from structured data (i.e., reading), and let LLMs concentrate on the reasoning task based on the collected information (i.e., reasoning).
Specifically, we propose an invoking-linearization-generation procedure to support LLMs in reasoning on the structured data with the help of the interfaces. By iterating this procedure with provided interfaces,
our approach can gradually approach the target answers to a given query. Experiments conducted on three types of structured data show that StructGPT greatly improves the performance of LLMs,
under the few-shot and zero-shot settings. |
| BUPT | [ChatKBQA](https://github.com/LHRLAB/ChatKBQA) | A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned LLMs. |
| ZJU | [KnowPAT](https://github.com/zjukg/KnowPAT) | Knowledgeable Preference AlignmenT (KnowPAT) is a new pipeline to align LLMs with human knowledge preferences.
KnowPAT incorporates domain knowledge graphs to construct the preference set and designs a new alignment objective to fine-tune the LLMs. |
| NetEase | [QAnything](https://github.com/netease-youdao/QAnything) | a local knowledge base question-answering system designed to support a wide range of file formats and databases, allowing for offline installation and use. With `QAnything`, you can simply drop any locally stored file of any format and receive accurate, fast, and reliable answers. Currently supported formats include: **PDF(pdf)** , **Word(docx)** , **PPT(pptx)** , **XLS(xlsx)** , **Markdown(md)** , **Email(eml)** , **TXT(txt)** , **Image(jpg,jpeg,png)** , **CSV(csv)** ,**Web links(html)** and more formats coming soon… |
| InfiniFlow | [RAGFlow](https://github.com/infiniflow/ragflow/tree/main) | an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. |
| dify.ai | [Dify](https://github.com/langgenius/dify) | Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production. |
| sealos.io | [FastGPT](https://github.com/labring/FastGPT) | FastGPT is a knowledge-based Q&A system built on LLMs that offers out-of-the-box data processing and model invocation capabilities and allows workflow orchestration through Flow visualization. |
| vanna-ai | [vanna](https://github.com/vanna-ai/vanna) | text-to-SQL generation with LLMs, using RAG. |
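Most of the projects above follow the same retrieve-then-read pattern: embed the query, fetch the most similar chunks, and stuff them into the prompt. A minimal sketch, assuming a placeholder `embed` function (a real system would call an embedding model and a vector database such as the ones listed below); the prompt template is illustrative.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic within a process, semantically meaningless.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(docs, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    context = "\n\n".join(retrieve(query, docs))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
```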
## AI搜索引擎
| contributor | project | main feature |
| ----------------- | ----------------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| LeptonAI | [Search with Lepton](https://github.com/leptonai/search_with_lepton) | AI search demo; no API key required. |
| @rashadphz | [Farfalle](https://github.com/rashadphz/farfalle) | open-source Perplexity clone; requires a search API key and an OpenAI key ([demo](https://www.farfalle.dev/)). |
| @developersdigest | [llm-answer-engine](https://github.com/developersdigest/llm-answer-engine) | open-source Perplexity clone; requires a search API key. |
| @ItzCrazyKns | [Perplexica](https://github.com/ItzCrazyKns/Perplexica) | open-source Perplexity clone; no API key required. |
| @miurla | [Morphic](https://github.com/miurla/morphic) | open-source Perplexity clone; no API key required ([demo](https://morphic.sh/ "https://morphic.sh")). |
| @nilsherzig | [LLocalSearch](https://github.com/nilsherzig/LLocalSearch) | AI search; no API key required. |
## Chat with Docs
| contributor | project | main feature |
| ----------- | ---------------------------------------- | ------------------------------------------------------------ |
| @arc53 | [DocsGPT](https://github.com/arc53/DocsGPT) | GPT-powered chat for documentation; chat with your documents. |
more at: [funNLP](https://github.com/fighting41love/funNLP?tab=readme-ov-file#%E7%B1%BBchatgpt%E7%9A%84%E6%96%87%E6%A1%A3%E9%97%AE%E7%AD%94)
## 内容解析
| contributor | project | main feature |
| ------ | ----------------------------------------------------------------------- | ----------- |
| Alibaba | [OmniParser](https://github.com/alibabaresearch/advancedliteratemachinery) | a universal model, OmniParser, that handles three typical visually-situated text parsing tasks at once: text recognition, key information extraction, and table recognition. In OmniParser, all tasks share a unified encoder-decoder architecture, a unified objective (conditioned text generation), and unified input and output representations (prompts and structured sequences). Extensive experiments show that OmniParser achieves state-of-the-art (SOTA) or highly competitive performance on seven datasets across the three visually-situated text parsing tasks. |
## Vector DataBase
| contributor | db | main feature |
| ----------- | ----------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| milvus-io | [milvus](https://github.com/milvus-io/milvus) | a cloud-native vector database with storage and computation separated by design. |
| Meta | [faiss](https://github.com/facebookresearch/faiss) | It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python/numpy. Some of the most useful algorithms are implemented on the GPU (see the usage sketch after this table). |
| nmslib | [hnswlib](https://github.com/nmslib/hnswlib) | Header-only C++ HNSW implementation with python bindings, insertions and updates. |
| MyScale | [MyScaleDB](https://github.com/myscale/myscaledb) | An open-source, high-performance SQL vector database built on ClickHouse. |
| chroma | [Chroma](https://github.com/chroma-core/chroma) | the AI-native open-source embedding database. |
| Weaviate | [Weaviate](https://github.com/weaviate/weaviate) | stores both objects and vectors, allowing vector search to be combined with structured filtering, with the fault tolerance and scalability of a cloud-native database. |
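For reference, similarity search over embeddings takes only a few lines with a library like faiss; the sketch below uses random vectors in place of real embeddings and the exact (flat) L2 index, the simplest of faiss's index types.

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 128                                               # embedding dimensionality
xb = np.random.random((10_000, d)).astype("float32")  # database vectors
xq = np.random.random((5, d)).astype("float32")       # query vectors

index = faiss.IndexFlatL2(d)          # exact L2 search, no training required
index.add(xb)                         # add the database vectors
distances, ids = index.search(xq, 4)  # 4 nearest neighbours per query
print(ids.shape)                      # (5, 4)
```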
# External Tools
## Using Existing Tools
allowing the model to access external tools, such as search engines and APIs.
| contributor | project | base model | main feature |
| ------------- | ------------------------------------------------ | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| UCB/microsoft | [Gorilla](https://github.com/ShishirPatil/gorilla/) | LLaMA | invokes 1,600+ (and growing) API calls accurately while reducing hallucination. |
| THU | [ToolLLaMA](https://github.com/OpenBMB/ToolBench) | This project aims to construct **open-source, large-scale, high-quality** instruction tuning SFT data to facilitate the construction
of powerful LLMs with general **tool-use** capability. We provide the dataset, the corresponding training and evaluation scripts,
and a capable model ToolLLaMA fine-tuned on ToolBench (see the minimal tool-dispatch sketch after this table). |
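Tool use ultimately reduces to having the model emit a structured call and having the host program execute it. The sketch below assumes a JSON convention of the form `{"tool": ..., "arguments": {...}}`; this format and the `get_weather` stub are illustrative, not the schema used by Gorilla or ToolBench.

```python
import json

def get_weather(city: str) -> str:
    return f"(stub) sunny in {city}"   # hypothetical tool implementation

TOOLS = {"get_weather": get_weather}   # registry of callable tools

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and execute it."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["arguments"])

print(dispatch('{"tool": "get_weather", "arguments": {"city": "Paris"}}'))
```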
## Make New Tools
| contributor | project | main feature |
| ------------ | --------------------------------------------- | --------------------------------------------------------- |
| Google, etc. | [LATM](https://github.com/ctlllll/LLM-ToolMaker) | LLMs create their own reusable tools for problem-solving. |
# Agent
| contributor | project | main feature |
| --------------------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| @Significant-Gravitas | [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) | chains together LLM "thoughts" to autonomously achieve whatever goal you set (see the minimal agent-loop sketch below this table). |
| @yoheinakajima | [BabyAGI](https://github.com/yoheinakajima/babyagi) | The main idea behind this system is that it creates tasks based on the result of previous tasks and a predefined objective.
The script then uses OpenAI's natural language processing (NLP) capabilities to create new tasks based on the objective,
and Chroma/Weaviate to store and retrieve task results for context. |
| microsoft | [HuggingGPT](https://github.com/microsoft/JARVIS) | Language serves as an interface for LLMs to connect numerous AI models for solving complicated AI tasks! |
| microsoft/NCSU | [ReWOO](https://github.com/billxbf/ReWOO) | detaches the reasoning process from external observations, thus significantly reducing token consumption. |
| Stanford | [generative_agents](https://github.com/joonspk-research/generative_agents) | Generative Agents: Interactive Simulacra of Human Behavior. |
| THU, etc. | [AgentVerse](https://github.com/OpenBMB/AgentVerse) | 🤖 AgentVerse 🪐 provides a flexible framework that simplifies the process of building custom multi-agent environments for large language models (LLMs). |
| BUAA, etc. | [TrafficGPT](https://github.com/lijlansg/TrafficGPT) | By seamlessly intertwining large language model and traffic expertise, TrafficGPT not only advances traffic
management but also offers a novel approach to leveraging AI capabilities in this domain. |
| microsoft, etc. | [ToRA](https://github.com/microsoft/ToRA) | ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting with tools. |
| HKU | [OpenAgents](https://github.com/xlang-ai/OpenAgents) | an open platform for using and hosting language agents in the wild of everyday life. |
| THU | [XAgent](https://github.com/OpenBMB/XAgent) | an open-source experimental Large Language Model (LLM) driven autonomous agent that can automatically solve various tasks.
It is designed to be a general-purpose agent that can be applied to a wide range of tasks. |
| Nvidia, etc. | [Eureka](https://github.com/eureka-research/Eureka) | a **human-level** reward design algorithm powered by LLMs. Eureka exploits the remarkable zero-shot generation, code-writing, and in-context improvement
capabilities of state-of-the-art LLMs, such as GPT-4, to perform in-context evolutionary optimization over reward code. The resulting rewards can then be used to
acquire complex skills via reinforcement learning. Eureka generates reward functions that outperform expert human-engineered rewards without any task-specific
prompting or pre-defined reward templates. In a diverse suite of 29 open-source RL environments that include 10 distinct robot morphologies,
Eureka outperforms human experts on **83%** of the tasks, leading to an average normalized improvement of **52%**. |
| THU | [AgentTuning](https://github.com/THUDM/AgentTuning) | **AgentTuning** represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks.
Evaluation results indicate that AgentTuning enables the agent capabilities of LLMs with robust generalization on unseen agent tasks while remaining good on general language abilities. |
| microsoft | [AutoGen](https://github.com/microsoft/autogen) | AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. AutoGen agents are
customizable, conversable, and seamlessly allow human participation. They can operate in various modes that employ combinations of LLMs, human inputs, and tools. |
| PKU | [RestGPT](https://github.com/Yifan-Song793/RestGPT) | we connect LLMs with **RESTful APIs** and tackle the practical challenges of planning, API calling, and response parsing. To fully evaluate the performance of RestGPT, we propose **RestBench**,
a high-quality benchmark which consists of two real-world scenarios and human-annotated instructions with gold solution paths.
RestGPT adopts an iterative coarse-to-fine online planning framework and uses an executor to call RESTful APIs. |
| microsoft | [MusicAgent](https://github.com/microsoft/muzic) | a music domain agent powered by large language models (LLMs). Its goal is to help developers and non-professional music creators automatically analyze user requests and select appropriate tools to solve the problem. |
| HW, etc. | [LEGO-Prover](https://github.com/wiio12/LEGO-Prover) | the first automated theorem prover powered by the LLM that constructs the proof in a block-by-block manner. |
| alibaba | [ModelScope-Agent](https://github.com/modelscope/modelscope-agent) | An agent framework connecting models in ModelScope with the world. |
| CMU, etc. | [RoboGen](https://github.com/Genesis-Embodied-AI/RoboGen) | A generative and self-guided robotic agent that endlessly proposes and masters new skills. |
| PKU, etc. | [LLaMA-Rider](https://github.com/PKU-RL/LLaMA-Rider) | A LLM training framework that enables LLMs to autonomously explore open worlds based on environmental feedback and their own abilities, and to efficiently learn from collected experiences. In the Minecraft environment,
it has demonstrated better multitasking capabilities than other methods, including ChatGPT task planners. This adaptability to open worlds has been a major achievement for LLMs.
Additionally, LLaMA-Rider's ability to use past task experiences to solve new tasks demonstrates the potential of this method for lifelong exploration and learning in large models. |
| IDEA, etc. | [ToG](https://github.com/IDEA-FinAI/ToG) | Think-on-Graph (ToG), in which the LLM agent iteratively executes beam search on KG, discovers the most promising reasoning paths, and returns the most likely reasoning results. |
| Yale, etc. | [ToolkenGPT](https://github.com/Ber666/ToolkenGPT) | represents each **tool** as a to**ken** (**toolken**) and learns an embedding for it, enabling tool calls in the same way as generating a regular word token. Once a toolken is triggered, the LLM is prompted to complete arguments for the tool to execute. |
| tencent | [AppAgent](https://appagent-official.github.io/) | Our framework enables the agent to operate smartphone applications through a simplified action space, mimicking human-like interactions such as tapping and swiping. This novel approach bypasses the need for system back-end access, thereby broadening its applicability across diverse apps |
| Stanford, etc. | [Meta-Prompting](https://arxiv.org/abs/2401.12954) | This approach transforms a single LM into a multi-faceted conductor, adept at managing and integrating multiple independent LM queries. By employing high-level instructions, meta-prompting guides the LM to break down complex tasks into smaller, more manageable subtasks. These subtasks are then handled by distinct "expert" instances of the same LM, each operating under specific, tailored instructions. |
| tencent | [More-Agents-Is-All-You-Need](https://github.com/MoreAgentsIsAllYouNeed/More-Agents-Is-All-You-Need) | We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the task difficulty. |
| Pythagora-io | [gpt-pilot](https://github.com/Pythagora-io/gpt-pilot) | GPT Pilot aims to research how much LLMs can be utilized to generate fully working, production-ready apps while the developer oversees the implementation. |
| DeepWisdom, etc. | [MetaGPT](https://github.com/geekan/MetaGPT) | A Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming. |
| OpenBMB | [XAgent](https://github.com/OpenBMB/XAgent) | An Autonomous LLM Agent for Complex Task Solving. |
| CrewAI | [crewAI](https://github.com/joaomdmoura/crewAI) | Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. |
| stition.ai | [devika](https://github.com/stitionai/devika) | an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to [Devin](https://www.cognition-labs.com/introducing-devin) by Cognition AI. |
| OpenDevin | [OpenDevin](https://github.com/OpenDevin/OpenDevin) | an open-source project aiming to replicate Devin, an autonomous AI software engineer who is capable of executing complex engineering tasks and collaborating actively with users on software development projects. This project aspires to replicate, enhance, and innovate upon Devin through the power of the open-source community. |
| alibaba | [AgentScope](https://github.com/modelscope/agentscope/blob/main/README_ZH.md) | Combining rich syntactic tools, built-in resources, and user-friendly interaction, AgentScope's communication mechanism significantly lowers the barriers to development and understanding. For robust and flexible multi-agent applications, AgentScope provides built-in and customizable fault-tolerance mechanisms, along with system-level support for multi-modal data generation, storage, and transmission. It also offers an actor-based distributed framework that enables effortless switching between local and distributed deployments and automatic parallel optimization with no extra effort. With these features, AgentScope empowers developers to build applications that fully realize the potential of intelligent agents. |
| @langchain-ai | [langgraph](https://github.com/langchain-ai/langgraph) | build AI agent applications with graph-based orchestration. |
| @Maplemx | [Agently](https://github.com/Maplemx/Agently) | easy to use; helps developers quickly build AI agent applications. |
paper list: [LLM-Agent-Paper-List](https://github.com/WooooDyy/LLM-Agent-Paper-List)
Papers / Repos / Blogs / ... : [Awesome LLM-Powered Agent](https://github.com/hyp1231/awesome-llm-powered-agent "papers/repos/blogs ...")
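Most of the agent frameworks above share a thought → action → observation loop: the model proposes an action, the host executes a tool, and the observation is appended to the transcript until the model decides to finish. A minimal sketch, with `llm` and the tool set as hypothetical placeholders and a deliberately simple text protocol.

```python
from typing import Callable

def run_agent(llm: Callable[[str], str],
              tools: dict[str, Callable[[str], str]],
              task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        # Ask the model for the next step using a simple text protocol.
        step = llm(transcript +
                   "Next step ('ACTION tool_name: input' or 'FINISH: answer'):\n")
        transcript += step + "\n"
        if step.startswith("FINISH:"):
            return step.removeprefix("FINISH:").strip()
        if step.startswith("ACTION"):
            head, _, arg = step.partition(":")
            tool = tools.get(head.split()[-1], lambda _: "unknown tool")
            transcript += f"Observation: {tool(arg.strip())}\n"
    return "No answer within max_steps."
```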
# LLMs as XXX
| contributor | LLM as | repo | main feature |
| --------------- | ------------------- | ---------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Google DeepMind | optimizer | [OPRO](https://arxiv.org/pdf/2309.03409.pdf) | Optimization by PROmpting (OPRO), a simple and effective approach to LLMs
as optimizers, where the optimization task is described in natural language. In each optimization step, the LLM generates new solutions from the prompt that
contains previously generated solutions with their values, then the new solutions are evaluated and added to the prompt for the next optimization step (see the loop sketch after this table). |
| HKU, etc. | part of graph tasks | [Awesome-LLMs-in-Graph-tasks](https://github.com/yhLeeee/Awesome-LLMs-in-Graph-tasks) | A curated collection of research papers exploring the utilization of LLMs for graph-related tasks. |
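The OPRO row describes a simple outer loop: keep a scored history of candidate solutions in the prompt, ask the LLM for a better one, score it, and repeat. A minimal sketch of one such step; `llm` and `evaluate` are hypothetical placeholders, and the prompt wording is not taken from the paper.

```python
from typing import Callable

def opro_step(llm: Callable[[str], str],
              evaluate: Callable[[str], float],
              history: list[tuple[str, float]],
              task: str) -> list[tuple[str, float]]:
    # Present previous solutions sorted by score so the model can improve on them.
    scored = "\n".join(f"solution: {s}  score: {v:.2f}"
                       for s, v in sorted(history, key=lambda p: p[1]))
    prompt = (f"{task}\nPreviously tried solutions and their scores:\n{scored}\n"
              "Propose a new solution with a higher score:")
    candidate = llm(prompt)
    return history + [(candidate, evaluate(candidate))]
```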
# Similar Collections
| collections of open instruction-following llms |
| ---------------------------------------------------------------------------------------------------------- |
| [开源微调大型语言模型(LLM)合集](https://zhuanlan.zhihu.com/p/628716889) |
| [机器之心SOTA!模型](https://sota.jiqizhixin.com/models/list) |
| [Awesome Totally Open Chatgpt](https://github.com/nichtdax/awesome-totally-open-chatgpt) |
| [LLM-Zoo](https://github.com/DAMO-NLP-SG/LLM-Zoo) |
| [Awesome-LLM](https://github.com/Hannibal046/Awesome-LLM) |
| [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
| [Open LLMs](https://github.com/eugeneyan/open-llms) |
| [Awesome-Chinese-LLM](https://github.com/HqWu-HITCS/Awesome-Chinese-LLM) |
| [Awesome Pretrained Chinese NLP Models](https://github.com/lonePatient/awesome-pretrained-chinese-nlp-models) |
| [LLMSurvey](https://github.com/RUCAIBox/LLMSurvey) |