{"id":18673760,"url":"https://github.com/opencsgs/awesome-slms","last_synced_at":"2026-02-18T17:01:30.876Z","repository":{"id":238810910,"uuid":"797585045","full_name":"OpenCSGs/Awesome-SLMs","owner":"OpenCSGs","description":"survery of small language models","archived":false,"fork":false,"pushed_at":"2024-07-23T07:12:11.000Z","size":101,"stargazers_count":16,"open_issues_count":0,"forks_count":3,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-11-01T23:01:53.900Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenCSGs.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-05-08T06:06:28.000Z","updated_at":"2025-08-26T20:42:29.000Z","dependencies_parsed_at":"2024-07-23T09:12:17.053Z","dependency_job_id":null,"html_url":"https://github.com/OpenCSGs/Awesome-SLMs","commit_stats":null,"previous_names":["opencsgs/awesome-slms"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/OpenCSGs/Awesome-SLMs","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2FAwesome-SLMs","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2FAwesome-SLMs/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2FAwesome-SLMs/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2FAwesome-SLMs/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OpenCSGs","download_url":"https://codeload.github.com/OpenCSGs/Awesome-SLMs/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenCSGs%2FAwesome-SLMs/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29587066,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-18T16:55:40.614Z","status":"ssl_error","status_checked_at":"2026-02-18T16:55:37.558Z","response_time":162,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-07T09:16:30.509Z","updated_at":"2026-02-18T17:01:30.855Z","avatar_url":"https://github.com/OpenCSGs.png","language":null,"readme":"# 🎉**Awesome-SLM**🎉\n\n## 🌱 How to Contribute\nWe are welcome contributions from researchers. 
For detailed guidelines on how to contribute, please see our [CONTRIBUTING.md](CONTRIBUTING.md) file.

## 📜 Contents
- [🎉**Awesome-SLM**🎉](#awesome-slm)
  - [🌱 How to Contribute](#-how-to-contribute)
  - [📜 Contents](#-contents)
  - [👋 Introduction](#-introduction)
  - [🔥 Base Model](#-base-model)
  - [💪 Pretrain Datasets](#-pretrain-datasets)
  - [💡 SFT Datasets](#-sft-datasets)
  - [🔧 Synthetic Datasets](#-synthetic-datasets)
  - [📦 Preference Datasets](#-preference-datasets)
  - [🌈 Benchmark](#-benchmark)

## 👋 Introduction


## 🔥 Base Model
1. OPT-series [[paper](https://arxiv.org/abs/2205.01068)] [[code](https://github.com/facebookresearch/metaseq)] [[model](https://huggingface.co/facebook/opt-1.3b)]
  - release time: 2022/06
  - organization: Meta
  - model size: 125M, 350M, 1.3B, 2.7B, 6.7B, 13B, 30B, 66B, 175B
  - Model details:  
  a. Training data: a broad mix of corpora, including the datasets used by RoBERTa, The Pile, and the PushShift.io Reddit data, totaling 180B tokens. The data is deduplicated, predominantly English, and tokenized with GPT-2's BPE tokenizer.  
  b. Training strategy: trained with the AdamW optimizer and a linear learning-rate schedule that warms up from zero to the peak value and then decays over the course of training, with a relatively large batch size (a learning-rate schedule sketch appears after this list).  
  c. Attention: a decoder-only pretrained transformer with multi-head self-attention, alternating dense and locally banded sparse attention.  
  d. Layers and block type: the architecture and hyperparameters largely follow the GPT-3 design.

![alt text](image.png)

2. Pythia [[paper](https://arxiv.org/pdf/2304.01373)] [[code](https://github.com/EleutherAI/pythia)] [[model](https://huggingface.co/EleutherAI/pythia-1b)]
  - release time: 2023/06
  - organization: EleutherAI
  - model size: 70M, 160M, 410M, 1.0B, 1.4B, 2.8B, 6.9B, 12B
  - Model details:  
  a. Training data: the Pile, an English-only corpus amounting to 207B tokens after deduplication.  
  b. Training strategy: trained with the GPT-NeoX library using the Adam optimizer, with Zero Redundancy Optimization (ZeRO), data parallelism, and tensor parallelism for performance.  
  c. Attention: multi-head self-attention with dense attention and rotary embeddings; Flash Attention is used during training to improve device throughput.  
  d. Layers and block type: the architecture and hyperparameters largely follow the GPT-3 design.

![alt text](image-1.png)

3. phi-1 [[paper](https://arxiv.org/pdf/2306.11644.pdf)] [[code](https://huggingface.co/TommyZQ/phi-1)] [[model](https://huggingface.co/TommyZQ/phi-1)]
  - release time: 2023/06
  - organization: Microsoft
  - model size: 1.42B
  - Model details:  
  a. Training data: the distinctive part is the "textbook-quality" data: a filtered subset of The Stack and StackOverflow (about 6B tokens), a Python textbook synthesized by GPT-3.5 (under 1B tokens), and about 180M tokens of Python exercises with solutions.  
  b. Training strategy: phi-1-base is pretrained on the CodeTextbook dataset (the filtered code corpus plus the synthetic textbooks) with AdamW, a linear-warmup/linear-decay learning-rate schedule, attention and residual dropout of 0.1, and a batch size of 1024.  
  c. Attention: decoder-only transformer with multi-head attention; FlashAttention is used during pretraining and fine-tuning for efficiency.  
  d. Layers and block type:
  the model has 24 layers with MHA and MLP in a parallel configuration; each block uses the following (a configuration sketch approximating these hyperparameters appears after this list):  
      hidden size: 2048  
      attention heads: 32  
      max position embeddings: 2048  
      position embeddings: rotary  
      residual connection: gpt-j-residual  

4. phi-1_5 [[paper](https://arxiv.org/pdf/2309.05463.pdf)] [[model](https://huggingface.co/TommyZQ/phi-1_5)]
  - release time: 2023/09
  - organization: Microsoft
  - model size: 1.42B
  - Model details:  
  a. Training data: phi-1's training data (7B tokens) plus newly created synthetic "textbook" data (about 20B tokens) used to teach common-sense reasoning and general world knowledge (science, daily activities, theory of mind, etc.).  
  b. Training strategy: phi-1.5 is trained from random initialization with a constant learning rate of 2e-4 (no warmup) and weight decay of 0.1, using the Adam optimizer with momentum parameters 0.9 and 0.98; mixed-precision training with fp16 and DeepSpeed ZeRO Stage 2, batch size 2048.  
  c. Attention: decoder-only transformer with multi-head attention; FlashAttention is used during pretraining and fine-tuning for efficiency.  
  d. Layers and block type (same as phi-1):  
  the model has 24 layers with MHA and MLP in a parallel configuration; each block uses:  
      hidden size: 2048  
      attention heads: 32  
      head dimension: 64  
      max position embeddings: 2048  
      position embeddings: rotary  
      residual connection: gpt-j-residual  

5. phi-2 [[paper](https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/)] [[model](https://huggingface.co/TommyZQ/phi-2)]
  - release time: 2023/12
  - organization: Microsoft
  - model size: 2.78B
  - Model details:  
  a. Training data: based on the phi-1.5 data sources, with an additional new source of 250B tokens composed of various synthetic NLP texts and websites filtered for safety and educational value; trained on 1.4T tokens in total.  
  b. Training strategy: not described in detail  
  c. Attention: not described in detail  
  d. Layers and block type: not described in detail  

6. phi-3-series [[paper](https://arxiv.org/pdf/2404.14219)] [[model](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)]
  - release time: 2024/04
  - organization: Microsoft
  - model series: Phi-3-mini-4k-instruct, Phi-3-mini-128k-instruct
  - model size: 3.82B
  - Model details:  
  a. Training data: an upgraded version of the phi-2 dataset, consisting of heavily filtered publicly available web data and synthetic data; trained on 3.3T tokens.  
  b. Training strategy:
  Pretraining runs in two disjoint, sequential phases: 1. mostly web-sourced data, aimed at teaching the model general knowledge and language understanding; 2. a merge of more heavily filtered web data with some synthetic data, aimed at teaching logical reasoning and various specialized skills.
  Post-training consists of supervised fine-tuning (SFT) and direct preference optimization (DPO): the SFT data covers high-quality examples from many domains, while the DPO data is used to adjust model behavior.  
  c. Attention: decoder-only transformer with grouped-query attention; the default context length is 4K, extended to 128K with the LongRope technique; Flash Attention is used to speed up training.  
  d. Layers and block type (Llama-2-like block structure):  
  the model has 32 layers; each block uses:  
      hidden size: 3072  
      attention heads: 32  
      head dimension: 96  
      max position embeddings: 4096  
      position embeddings: rotary  
      residual connection: standard Llama-style pre-norm residual  

7. TinyLlama [[paper](https://arxiv.org/abs/2401.02385)] [[model](https://huggingface.co/TinyLlama)]
  - release time: 2024/01
  - organization: Singapore University of Technology and Design
  - model size: 1.1B
  - Model details:  
  a. Training data: two parts. SlimPajama: a high-quality corpus for training large language models, derived from RedPajama with additional cleaning and deduplication; the original RedPajama corpus contains over 1.2 trillion tokens, and SlimPajama retains 50% of them after filtering. StarCoder training data: the dataset used to train StarCoder, covering 86 programming languages and including, beyond code, GitHub issues and natural-language text-code pairs. To avoid duplication, the GitHub subset is removed from SlimPajama and code data is sampled only from the StarCoder training data. Combining the two yields roughly 950B tokens for pretraining, with about 3 trillion tokens processed in total.  
  b. Training strategy: AdamW optimizer, with the training framework built on lit-gpt. TinyLlama's pretraining has two stages: 
  Basic pretraining: 1.5 trillion tokens on SlimPajama, mainly developing the model's common-sense reasoning ability.
  Continued pretraining: SlimPajama combined with code and math content such as StarCoder and Proof Pile, plus Chinese data from Skypile, with continued pretraining targeted at general applications, math and coding tasks, and Chinese processing respectively.  
  c. Attention: grouped-query attention with rotary position embeddings (RoPE); FlashAttention-2 is used to speed up training.  
  d. Layers and block type: 22 layers  
  hidden size: 2048  
  intermediate (MLP) size: 5632  
  context length: 2048  
  attention heads: 32  
  vocabulary size: 32000

  Activation: SwiGLU, i.e. the Swish activation combined with a gated linear unit (GLU)

  Pre-normalization with RMSNorm: the input of every Transformer sub-layer is normalized (a PyTorch sketch of SwiGLU and RMSNorm appears after this list)

8. MiniCPM-series [[paper](https://shengdinghu.notion.site/MiniCPM-c805a17c5c8046398914e47f0542095a)] [[code](https://github.com/OpenBMB/MiniCPM)] [[model](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16)]
  - release time: 2024/02
  - organization: OpenBMB
  - model series: MiniCPM-1B-sft-bf16, MiniCPM-2B-sft-bf16, MiniCPM-2B-sft-fp32, MiniCPM-2B-128k, MiniCPM-MoE-8x2B
  - model size: 1.2B, 2.4B, 8x2.4B (excluding embeddings)

9. H2O-Danube-1.8B [[paper](https://arxiv.org/abs/2401.16818)] [[model](https://huggingface.co/h2oai/h2o-danube2-1.8b-base)]
  - release time: 2024/04
  - organization: H2O.ai
  - model series: h2o-danube2-1.8b-base, h2o-danube2-1.8b-sft, h2o-danube2-1.8b-chat
  - model size: 1.8B

10. csg-wukong-series [[model](https://huggingface.co/opencsg/csg-wukong-1B)]
  - release time: 2024/04
  - organization: OpenCSG
  - model series: csg-wukong-1B, csg-wukong-1B-VL, csg-wukong-1B-chat
  - model size: 1B

11. CT-LLM-Base [[paper](https://arxiv.org/pdf/2404.04167.pdf)] [[code](https://github.com/Chinese-Tiny-LLM/Chinese-Tiny-LLM)] [[model](https://huggingface.co/m-a-p/CT-LLM-Base)]
  - release time: 2024/04
  - organization: Peking University
  - model series: CT-LLM-Base
  - model size: 2B

12. Qwen-series [[paper](https://arxiv.org/abs/2309.16609)] [[code](https://github.com/QwenLM/Qwen)] [[model](https://huggingface.co/Qwen)]
  - release time: 2023/08
  - organization: Alibaba Cloud
  - model series: Qwen-1.8B, Qwen-7B, Qwen-14B, Qwen-72B, Qwen-1.8B-Chat, Qwen-7B-Chat, Qwen-14B-Chat, Qwen-72B-Chat
  - model size: 1.8B, 7B, 14B, 72B

13. Qwen2-series [[paper](https://arxiv.org/abs/2309.16609)] [[code](https://github.com/QwenLM/Qwen)] [[model](https://huggingface.co/Qwen)]
  - release time: 2024/06
  - organization: Alibaba Cloud
  - model series: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, Qwen2-72B
  - model size: 0.5B, 1.5B, 7B, 57B (A14B), 72B

14. Gemma-series [[paper](https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf)] [[code](https://github.com/google-deepmind/gemma)] [[model](https://huggingface.co/google/gemma-2b)]
  - release time: 2024/02
  - organization: Google
  - model series: gemma-2b, gemma-2b-it, gemma-7b, gemma-7b-it, gemma-2-9b, gemma-2-9b-it, gemma-2-27b, gemma-2-27b-it
  - model size: 2B, 7B, 9B, 27B

15. OpenELM-series [[paper](https://arxiv.org/abs/2404.14619)] [[code](https://github.com/apple/corenet)] [[model](https://huggingface.co/collections/apple/openelm-instruct-models-6619ad295d7ae9f868b759c)]
  - release time: 2024/04
  - organization: Apple
  - model series: OpenELM-270M, OpenELM-450M, OpenELM-1.1B, OpenELM-3B, OpenELM-270M-Instruct, OpenELM-450M-Instruct, OpenELM-1.1B-Instruct, OpenELM-3B-Instruct
  - model size: 0.27B, 0.45B, 1.1B, 3B

16. Sheared-LLaMA-series [[paper](https://arxiv.org/abs/2310.06694)] [[code](https://github.com/princeton-nlp/LLM-Shearing)] [[model](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B)]
  - release time: 2023/10
  - organization: Princeton NLP group
  - model series: Sheared-LLaMA-1.3B, Sheared-LLaMA-2.7B, Sheared-LLaMA-1.3B-Pruned, Sheared-LLaMA-2.7B-Pruned, Sheared-LLaMA-1.3B-ShareGPT, Sheared-LLaMA-2.7B-ShareGPT
  - model size: 1.3B, 2.7B

17. SlimPajama-DC [[paper](https://arxiv.org/html/2309.10818v3)] [[code](https://github.com/togethercomputer/RedPajama-Data)] [[model](https://huggingface.co/MBZUAI-LLM/SlimPajama-DC)]
  - release time: 2023/09
  - organization: Cerebras
  - model series: SlimPajama-DC-1.3B
  - model size: 1.3B

18. RedPajama [[code](https://github.com/togethercomputer/RedPajama-Data)] [[model](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1)]
  - release time: 2023/05
  - organization: Together Computer
  - model series: RedPajama-INCITE-Base-3B-v1, RedPajama-INCITE-Instruct-3B-v1, RedPajama-INCITE-Chat-3B-v1
  - model size: 3B

19. OLMo [[paper](https://arxiv.org/html/2402.00838)] [[code](https://github.com/allenai/OLMo)] [[model](https://huggingface.co/allenai/OLMo-1B)]
  - release time: 2024/02
  - organization: allenai
  - model series: OLMo-1B, OLMo-7B, OLMo-7B-Twin-2T
  - model size: 1B, 7B

20. Cerebras-GPT-series [[paper](https://arxiv.org/html/2304.03208)] [[model](https://huggingface.co/cerebras/Cerebras-GPT-111M)]
  - release time: 2023/04
  - organization: Cerebras
  - model series: Cerebras-GPT-111M, Cerebras-GPT-256M, Cerebras-GPT-590M, Cerebras-GPT-1.3B, Cerebras-GPT-2.7B, Cerebras-GPT-6.7B, Cerebras-GPT-13B
  - model size: 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, 13B

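The phi-1 and phi-1_5 entries above describe a GPT-NeoX-style decoder: 24 layers, hidden size 2048, 32 heads, rotary embeddings, and parallel attention + MLP with a gpt-j-style residual. Below is a minimal sketch, not Microsoft's released code: it approximates those hyperparameters with the `GPTNeoXConfig` class from Hugging Face `transformers`; the vocabulary size and MLP width are assumptions, since the entries above do not list them.

```python
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

# Approximate the phi-1 / phi-1_5 shape using the GPT-NeoX architecture class.
config = GPTNeoXConfig(
    vocab_size=50304,              # assumption: vocabulary size is not listed above
    hidden_size=2048,              # hidden size from the phi-1 entry
    num_hidden_layers=24,          # 24 decoder layers
    num_attention_heads=32,        # 32 heads -> head dimension 64
    intermediate_size=8192,        # assumption: 4 * hidden_size
    max_position_embeddings=2048,  # max position embeddings from the entry
    rotary_pct=1.0,                # rotary position embeddings
    use_parallel_residual=True,    # parallel MHA + MLP ("gpt-j-residual")
)

model = GPTNeoXForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters")
```

With these numbers the parameter count comes out at roughly 1.4B, close to the 1.42B reported for phi-1 and phi-1_5.
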
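Several entries above (OPT, phi-1) describe the same learning-rate recipe: AdamW with a linear warmup from zero to the peak learning rate, followed by a linear decay. Below is a minimal PyTorch sketch of that schedule; the peak learning rate, warmup length, and total step count are illustrative placeholders rather than values taken from any of the papers.

```python
import torch

model = torch.nn.Linear(2048, 2048)  # stand-in for the actual transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay=0.1)

warmup_steps, total_steps = 2_000, 100_000  # placeholder values

def linear_warmup_then_decay(step: int) -> float:
    """Multiplier applied to the peak LR: linear ramp up, then linear decay to zero."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_warmup_then_decay)

for _ in range(5):        # a real loop would run forward/backward before each step
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())
```
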
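The TinyLlama entry highlights two block-level choices shared by many of the newer small models above: RMSNorm pre-normalization and a SwiGLU feed-forward block (the SiLU/Swish activation combined with a gated linear unit). Below is a self-contained PyTorch sketch of just those two components, using the dimensions from the TinyLlama entry (hidden size 2048, intermediate size 5632); the module and parameter names are illustrative, not TinyLlama's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square norm: rescale by the RMS of the activations, no mean centering."""
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

class SwiGLUMLP(nn.Module):
    """Gated feed-forward block: down_proj(SiLU(gate_proj(x)) * up_proj(x))."""
    def __init__(self, hidden_size: int = 2048, intermediate_size: int = 5632):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

x = torch.randn(1, 16, 2048)          # (batch, sequence, hidden)
out = SwiGLUMLP()(RMSNorm(2048)(x))   # pre-normalize the sub-layer input, then the gated MLP
print(out.shape)                      # torch.Size([1, 16, 2048])
```
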
## 💪 Pretrain Datasets
- SlimPajama-627B [[paper](https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama)] [[code](https://github.com/Cerebras/modelzoo/tree/main/src/cerebras/modelzoo/data_preparation/nlp/slimpajama)] [[dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B)]
  - release time: 2023/06
  - dataset size: 895 GB
  - token size: 627B
  - language: primarily English, with some non-English files in Wikipedia

- dolma [[paper](https://arxiv.org/abs/2402.00159)] [[code](https://github.com/allenai/dolma)] [[dataset](https://huggingface.co/datasets/allenai/dolma)]
  - release time: 2024/04
  - dataset size: 4.5TB
  - token size: 1.7T
  - language: primarily English, with some non-English files in Wikipedia

- RedPajama-Data-1T [[code](https://github.com/togethercomputer/RedPajama-Data)] [[dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)]
  - release time: 2023/04
  - token size: ~1.2T

- C4 [[docs](https://www.tensorflow.org/datasets/catalog/c4)] [[code](https://github.com/allenai/c4-documentation)] [[dataset](https://huggingface.co/datasets/c4)]
  - release time: 2022/01
  - dataset size: en: 305GB, en.noclean: 2.3TB, en.noblocklist: 380GB, realnewslike: 15GB, multilingual (mC4): 9.7TB (108 subsets, one per language)


## 💡 SFT Datasets
- ultrachat [[code](https://github.com/thunlp/UltraChat)] [[dataset](https://huggingface.co/datasets/stingning/ultrachat)]
  - release time: 2023/04
  - dataset size: 2.5GB
  - language: en

- ultrachat_200k [[code](https://github.com/thunlp/UltraChat)] [[dataset](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)]
  - release time: 2023/10
  - dataset size: 1.6GB
  - language: en


## 🔧 Synthetic Datasets
- cosmopedia [[code](https://github.com/huggingface/cosmopedia)] [[dataset](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)]
  - release time: 2024/02
  - dataset size: 92.2GB
  - language: en


## 📦 Preference Datasets
- UltraFeedback [[code](https://github.com/OpenBMB/UltraFeedback)] [[dataset](https://huggingface.co/datasets/openbmb/UltraFeedback)]
  - release time: 2023/09
  - dataset size: 0.94GB
  - language: en


## 🌈 Benchmark