Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-llamas
Awesome repositories for LLaMA1 and LLaMA2
https://github.com/lawwu/awesome-llamas
Libraries
- TinyLlama - A project to pretrain a 1.1B-parameter Llama model on 3 trillion tokens; with proper optimization, this is achievable in "just" 90 days on 16 A100-40G GPUs.
- open-interpreter - Open source version of OpenAI's Code Interpreter. Works with GPT-4 and llama2.
- llama-gpt - A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device
- LLaMA-Efficient-Tuning - Easy-to-use fine-tuning framework using PEFT (PT+SFT+RLHF with QLoRA) (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen)
- mlc-llm - Running LLaMA2 on iOS devices natively using GPU acceleration, see [example](https://twitter.com/bohanhou1998/status/1681682445937295360)
- llama2.c - Inference Llama 2 in one file of pure C by Andrej Karpathy
- h2o-llmstudio - H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
- Llama2-Chinese - Llama Chinese community; the best Chinese Llama LLM, fully open source and available for commercial use.
- Code Llama - Code Llama is an AI model built on top of Llama 2, fine-tuned for generating and discussing code. Also this [Huggingface blog post on Code Llama](https://huggingface.co/blog/codellama).
- ollama - Get up and running with Llama 2 and other large language models locally
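As a sanity check on TinyLlama's stated budget, the implied training throughput can be worked out directly (a back-of-the-envelope sketch; the per-GPU figure is derived here, not quoted from the project):

```python
# Throughput implied by TinyLlama's stated budget:
# 3 trillion tokens in 90 days on 16 A100-40G GPUs.
TOKENS = 3e12
SECONDS = 90 * 24 * 3600           # 90 days in seconds
GPUS = 16

per_gpu = TOKENS / SECONDS / GPUS  # tokens processed per second per GPU
print(f"~{per_gpu:,.0f} tokens/s per GPU")  # ~24,113 tokens/s per GPU
```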
Tutorials
- Running Llama 2 on CPU Inference Locally for Document Q&A
- Running 13b-chat inference locally on Apple Silicon - A post I wrote on running [Llama-2-13B-chat-GGML](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML) locally on a Mac at around 15-20 tokens per second.
- How to fine-tune LLaMA2
- How to deploy LLaMA2 or any open-source LLM using HuggingFace's TGI
- How to Build a LLaMA2 Chatbot in Streamlit
- LLaMa 70B Chatbot in Hugging Face and LangChain
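For context on the roughly 15-20 tokens/second quoted in the Apple Silicon post above, a quick calculation shows what that decode speed means for response latency (illustrative numbers, not benchmarks):

```python
# What 15-20 tokens/second means in practice for a local chat session.
def response_seconds(n_tokens: int, tokens_per_second: float) -> float:
    """Time to generate a reply of n_tokens at a given decode speed."""
    return n_tokens / tokens_per_second

print(response_seconds(500, 20))  # 25.0 s for a 500-token reply at 20 tok/s
print(response_seconds(500, 15))  # ~33.3 s at 15 tok/s
```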
LLaMA2 Models
- Llama-2-70b
- Llama-2-70b-chat - Other model sizes can be found here: https://huggingface.co/meta-llama.
- Demo of 70b-chat
- Demo of 13b-chat
- Llama-2-70B-Chat-GGML - Other model sizes can be found here: https://huggingface.co/TheBloke
Benchmarks
- AlpacaEval - LLaMA2-70B-chat outperforms ChatGPT and Claude 2, trailing only GPT-4, as of 2023-07-27.
- Open LLM Leaderboard - see models that have llama or llama-2 in the name.
- LLaMA2 is competitive with GPT-3.5 in medical applications
Papers
- LLaMA2 Paper - Llama 2: Open Foundation and Fine-Tuned Chat Models - Released 2023-07-18
- LLaMA1 Paper - LLaMA: Open and Efficient Foundation Language Models - Released 2023-02-27
Derivative Models
- StableBeluga2 - Stable Beluga 2 is a Llama 2 70B model fine-tuned on an Orca-style dataset.
- Mikael110/llama-2-70b-guanaco-qlora - Reportedly the first open model to beat ChatGPT on MMLU.
- airoboros-l2-70b-gpt4-1.4.1
- Nous-Hermes-Llama2-13b
- LLongMA-2-13b-16k - A Llama-2 model trained at a 16k context length using linear positional interpolation scaling; an 8k-context version is also available ([post](https://www.linkedin.com/posts/enrico-shippole-495521b8_conceptofmindllongma-2-13b-hugging-face-activity-7089288709220524032-75yV/?trk=public_profile_like_view)). See also [LLaMA-2-7B-32k](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K).
- WizardLM-13B-V1.2 - The WizardLM-13B-V1.2 achieves 7.06 on [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), [89.17% on AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), which is better than Claude and ChatGPT, and 101.4% on WizardLM Eval. [WizardLM repo](https://github.com/nlpxucan/WizardLM)
- FreeWilly1 - FreeWilly1 is a LLaMA 65B model fine-tuned on an Orca-style dataset.
- LongLLaMA: Focused Transformer Training for Context Scaling
- Dolphin Llama - An open-source implementation of Microsoft's Orca approach, based on LLaMA 1; not for commercial use.
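The linear positional interpolation used by LLongMA-2 above can be illustrated in a few lines: positions beyond the trained context are compressed back into the trained range before computing rotary-embedding angles (a minimal sketch; the dimension and base values are illustrative, not the model's actual configuration):

```python
def rope_angles(position: float, dim: int = 8, base: float = 10000.0,
                scale: float = 1.0) -> list[float]:
    """Rotary-embedding angles for one position. scale < 1 implements
    linear position interpolation: positions are compressed so a longer
    context reuses the positional range the model was trained on."""
    pos = position * scale
    return [pos / base ** (2 * i / dim) for i in range(dim // 2)]

# Llama 2 is trained at a 4k context; to run at 16k, scale positions by 4k/16k.
scale = 4096 / 16384  # 0.25
# Position 16000 under interpolation behaves like trained position 4000.
assert rope_angles(16000, scale=scale) == rope_angles(4000)
```

The trade-off is that neighboring positions become closer together in angle space, which is why models like LLongMA-2 are fine-tuned at the longer length rather than scaled at inference time alone.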
News
- AI Silicon Valley = RealChar + AI town + Llama2
- OpenAI Function calling with llama2
- LLaMA 2: Every Resource You Need
- Llama 2: an incredible open LLM
- Llama 2 is here - get it on Hugging Face
- Qualcomm Works with Meta to Enable On-device AI Applications Using Llama 2
- A Brief History of LLaMA Models
- Meta’s Open Source Llama Upsets the AI Horse Race - Meta is giving its answer to OpenAI’s GPT-4 away for free. The move could intensify the generative AI boom by making it easier for entrepreneurs to build powerful new AI systems.