Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

awesome-llm

Awesome series for Large Language Models (LLMs)
https://github.com/KennethanCeyer/awesome-llm

  • T5 (11B) - Announced by Google / 2020
  • Flan-T5 (11B) - Announced by Google / 2022
  • T0 (11B) - Announced by BigScience (HuggingFace) / 2021
  • OPT-175B (175B) - Announced by Meta / 2022
  • UL2 (20B) - Announced by Google / 2022
  • BLOOM (176B) - Announced by BigScience (HuggingFace) / 2022
  • BERT-Large (336M) - Announced by Google / 2018
  • GPT-NeoX 2.0 (20B) - Announced by EleutherAI / 2023
  • GPT-J (6B) - Announced by EleutherAI / 2021
  • Macaw (11B) - Announced by AI2 / 2021
  • Stanford Alpaca (7B) - Announced by Stanford University / 2023
  • Visual ChatGPT - Announced by Microsoft / 2023
  • LMOps - Announced by Microsoft - General technology for enabling AI capabilities with LLMs and generative AI models.
  • GPT-4 (Parameter size unannounced, gpt-4-32k) - Announced by OpenAI / 2023
  • ChatGPT (175B) - Announced by OpenAI / 2022
  • ChatGPT Plus (175B) - Announced by OpenAI / 2023
  • GPT-3.5 (175B, text-davinci-003) - Announced by OpenAI / 2022
  • Gemini - Announced by Google DeepMind / 2023
  • Bard - Announced by Google / 2023
  • Codex (12B) - Announced by OpenAI / 2021
  • Sphere - Announced by Meta / 2022
  • Common Crawl
  • SQuAD 2.0
  • The Pile
  • RACE
  • Wikipedia
  • BIG-bench
  • Megatron-Turing NLG (530B) - Announced by NVIDIA and Microsoft / 2021
  • LaMDA (137B) - Announced by Google / 2021
  • GLaM (1.2T) - Announced by Google / 2021
  • PaLM (540B) - Announced by Google / 2022
  • AlphaCode (41.4B) - Announced by DeepMind / 2022
  • Chinchilla (70B) - Announced by DeepMind / 2022
  • Sparrow (70B) - Announced by DeepMind / 2022
  • NLLB (54.5B) - Announced by Meta / 2022
  • LLaMA (65B) - Announced by Meta / 2023
  • AlexaTM (20B) - Announced by Amazon / 2022
  • Gopher (280B) - Announced by DeepMind / 2021
  • Galactica (120B) - Announced by Meta / 2022
  • PaLM 2 Technical Report - Announced by Google / 2023
  • LIMA - Announced by Meta / 2023
  • Llama 2 (70B) - Announced by Meta / 2023
  • Luminous (13B) - Announced by Aleph Alpha / 2021
  • Turing NLG (17B) - Announced by Microsoft / 2020
  • Claude (52B) - Announced by Anthropic / 2023
  • Minerva (Parameter size unannounced) - Announced by Google / 2022
  • BloombergGPT (50B) - Announced by Bloomberg / 2023
  • Dolly (6B) - Announced by Databricks / 2023
  • Jurassic-1 - Announced by AI21 / 2021
  • Jurassic-2 - Announced by AI21 / 2023
  • Koala - Announced by Berkeley Artificial Intelligence Research (BAIR) / 2023
  • Gemma - Announced by Google / 2024 ("Gemma: Introducing new state-of-the-art open models")
  • Grok-1 - Announced by xAI / 2023 (weights openly released in 2024)
  • Grok-1.5 - Announced by xAI / 2024
  • DBRX - Announced by Databricks / 2024
  • BigScience - Maintained by HuggingFace ([Twitter](https://twitter.com/BigScienceLLM)) ([Notion](https://bigscience.notion.site/BLOOM-BigScience-176B-Model-ad073ca07cdf479398d5f95d88e218c4))
  • HuggingChat - Maintained by HuggingFace / 2023
  • OpenAssistant - Maintained by Open Assistant / 2023
  • StableLM - Maintained by Stability AI / 2023
  • EleutherAI Language Models - Maintained by EleutherAI / 2023
  • Falcon LLM - Maintained by Technology Innovation Institute / 2023
  • Gemma - Maintained by Google / 2024
  • Stanford Alpaca - ![Repo stars of tatsu-lab/stanford_alpaca](https://img.shields.io/github/stars/tatsu-lab/stanford_alpaca?style=social) - The repository of the Stanford Alpaca project, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations (its prompt template is sketched after this list).
  • Dolly - ![Repo stars of databrickslabs/dolly](https://img.shields.io/github/stars/databrickslabs/dolly?style=social) - A large language model trained on the Databricks Machine Learning Platform.
  • AutoGPT - ![Repo stars of Significant-Gravitas/Auto-GPT](https://img.shields.io/github/stars/Significant-Gravitas/Auto-GPT?style=social) - An experimental open-source attempt to make GPT-4 fully autonomous.
  • dalai - ![Repo stars of cocktailpeanut/dalai](https://img.shields.io/github/stars/cocktailpeanut/dalai?style=social) - A CLI tool to run LLaMA on the local machine.
  • LLaMA-Adapter - ![Repo stars of ZrrSkywalker/LLaMA-Adapter](https://img.shields.io/github/stars/ZrrSkywalker/LLaMA-Adapter?style=social) - Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters.
  • alpaca-lora - ![Repo stars of tloen/alpaca-lora](https://img.shields.io/github/stars/tloen/alpaca-lora?style=social) - Instruct-tune LLaMA on consumer hardware via low-rank adaptation (LoRA); see the LoRA sketch after this list.
  • llama_index - ![Repo stars of jerryjliu/llama_index](https://img.shields.io/github/stars/jerryjliu/llama_index?style=social) - A project that provides a central interface to connect your LLMs with external data (see the llama_index sketch after this list).
  • openai/evals - ![Repo stars of openai/evals](https://img.shields.io/github/stars/openai/evals?style=social) - A framework for evaluating LLMs and LLM systems, with an open-source registry of benchmarks (a generic evaluation-loop sketch appears after this list).
  • trlx - ![Repo stars of CarperAI/trlx](https://img.shields.io/github/stars/CarperAI/trlx?style=social) - A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF); see the trlx sketch after this list.
  • pythia - ![Repo stars of EleutherAI/pythia](https://img.shields.io/github/stars/EleutherAI/pythia?style=social) - A suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters.
  • Embedchain - ![Repo stars of embedchain/embedchain](https://img.shields.io/github/stars/embedchain/embedchain?style=social) - A framework to create ChatGPT-like bots over your dataset (see the Embedchain sketch after this list).
  • OpenAssistant SFT-6 (30B) - A LLaMA-based model fine-tuned for chat by the Open Assistant project and released on Hugging Face.
  • Vicuna Delta v0 - An open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
  • MPT 7B - A decoder-style transformer pre-trained from scratch on 1T tokens of English text and code. This model was trained by MosaicML.
  • Falcon 7B - A 7B parameters causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora.
  • Phi-2: The surprising power of small language models
  • StackLLaMA: A hands-on guide to train LLaMA with RLHF
  • PaLM 2
  • PaLM 2 and future work: the Gemini model
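
For the Stanford Alpaca entry above: the project's 52K demonstrations follow a fixed instruction format. The template below reproduces the wording published in the tatsu-lab/stanford_alpaca repository and is useful for formatting your own instruction data:

```python
# Alpaca-style instruction prompt template, as published in the
# tatsu-lab/stanford_alpaca repository.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

print(ALPACA_TEMPLATE.format(
    instruction="Translate the sentence to French.",
    input="The weather is nice today.",
))
```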
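LoRA sketch (for alpaca-lora): low-rank adaptation freezes the base model and trains small adapter matrices, which is what makes instruct-tuning feasible on consumer hardware. A minimal setup with the Hugging Face peft library follows; the checkpoint id and hyperparameters are illustrative, not the repo's exact configuration:

```python
# Minimal LoRA setup with peft (illustrative hyperparameters; the base
# checkpoint id below is an example -- substitute the weights you have).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "huggyllama/llama-7b"  # example LLaMA checkpoint id

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
```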
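llama_index sketch: the "central interface to external data" amounts to ingesting documents, embedding them into an index, and querying through an LLM. Module paths have moved between releases, so adjust imports to your installed version; the default models also expect an OpenAI API key in the environment:

```python
# Index local files and query them through an LLM with llama_index
# (paths follow the 0.10+ layout; earlier versions import from `llama_index`).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()  # ingest ./data
index = VectorStoreIndex.from_documents(documents)       # embed and store

query_engine = index.as_query_engine()                   # RAG entry point
print(query_engine.query("What topics do these documents cover?"))
```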
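Evaluation-loop sketch (for openai/evals): evals defines evaluations as registry entries plus Python classes; the generic loop below only illustrates the underlying idea of scoring a model's completions against references, and is not the evals API itself:

```python
# Generic exact-match evaluation loop (illustrative only; not openai/evals).
from typing import Callable, List, Tuple

def exact_match(prediction: str, reference: str) -> bool:
    return prediction.strip().lower() == reference.strip().lower()

def evaluate(generate: Callable[[str], str],
             samples: List[Tuple[str, str]]) -> float:
    """Fraction of prompts whose completion matches the reference."""
    hits = sum(exact_match(generate(prompt), ref) for prompt, ref in samples)
    return hits / len(samples)

samples = [("2 + 2 =", "4"), ("Capital of France?", "Paris")]
dummy_model = lambda p: "4" if "2 + 2" in p else "paris"
print(evaluate(dummy_model, samples))  # -> 1.0
```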
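trlx sketch: trlx wires a policy model into a PPO loop driven by a reward signal. The call below follows the pattern in the project's published examples; exact signatures differ across releases, so treat it as a sketch rather than a definitive invocation:

```python
# RLHF-style training sketch with trlx (pattern from the project's examples;
# a real setup would replace the toy reward with a learned reward model
# trained on human preference data).
import trlx

def reward_fn(samples, **kwargs):
    # Toy reward: prefer completions that mention "helpful".
    return [float(s.count("helpful")) for s in samples]

trainer = trlx.train(
    "gpt2",                                   # base policy model
    reward_fn=reward_fn,
    prompts=["How should an assistant respond to users?"] * 64,
)
```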
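Embedchain sketch: the "bot over your dataset" workflow is add sources, then query them. A minimal sketch assuming OPENAI_API_KEY is set in the environment; the add/query API has shifted slightly between versions, and the source URLs/paths are examples:

```python
# Retrieval-augmented bot over your own sources with Embedchain
# (assumes OPENAI_API_KEY is set; source URL and path below are examples).
from embedchain import App

bot = App()
bot.add("https://example.com/handbook.html")  # index a web page
bot.add("./notes.pdf")                        # and a local PDF
print(bot.query("Summarize the main points from the indexed sources."))
```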