Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
bloom chatbot deep-learning distributed-systems falcon gpt guanaco language-models large-language-models llama llama2 machine-learning neural-networks nlp pipeline-parallelism pretrained-models pytorch tensor-parallelism transformer volunteer-computing
Last synced: about 1 month ago
- Host: GitHub
- URL: https://github.com/bigscience-workshop/petals
- Owner: bigscience-workshop
- License: MIT
- Created: 2022-06-12T00:10:27.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2024-04-27T13:35:07.000Z (about 2 months ago)
- Last Synced: 2024-04-27T14:36:27.034Z (about 2 months ago)
- Topics: bloom, chatbot, deep-learning, distributed-systems, falcon, gpt, guanaco, language-models, large-language-models, llama, llama2, machine-learning, neural-networks, nlp, pipeline-parallelism, pretrained-models, pytorch, tensor-parallelism, transformer, volunteer-computing
- Language: Python
- Homepage: https://petals.dev
- Size: 4.09 MB
- Stars: 8,668
- Watchers: 87
- Forks: 460
- Open Issues: 87
Metadata Files:
- Readme: README.md
- License: LICENSE
Lists
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-local-ai - petals - Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (Inference UI)
- awesome-list - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-ChatGPT-repositories - petals - 🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (NLP)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-ai-tools - Petals - BitTorrent-style platform for running AI models in a distributed way. (Other / Music)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- my-awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-starts - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (pytorch)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - bigscience-workshop/petals - 🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (bigscience-workshop, 8789 stars) (Python)
- awesome-LLMs-finetuning - Petals - Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (7768 stars) (4. Fine-Tuning / Frameworks)
- awesome-stars - bigscience-workshop/petals - `⭐ 8842` 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (Python)
- awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- my-awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- my-awesome-stars - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- awesome-stars - petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (bigscience-workshop, 8851 stars) (Python)
- awesome-stars - bigscience-workshop/petals - `⭐ 8529` 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- my-awesome-stars - bigscience-workshop/petals - 🌸 Run large language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- my-awesome - bigscience-workshop/petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Python)
- StarryDivineSky - bigscience-workshop/petals - You load a small part of the model, then join people serving the other parts to run inference or fine-tuning. (Text generation, text dialogue / ChatGPT-style large language dialogue models and data)
- AiTreasureBox - bigscience-workshop/petals - 🌸 Run large language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading (Repos)
README
Run large language models at home, BitTorrent-style.
Fine-tuning and inference up to 10x faster than offloading
Generate text with distributed **Llama 2** (70B), **Falcon** (40B+), **BLOOM** (176B) (or their derivatives), and fine-tune them for your own tasks, right from your desktop computer or Google Colab:
```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Choose any model available at https://health.petals.dev
model_name = "petals-team/StableBeluga2"  # This one is fine-tuned Llama 2 (70B)

# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0])) # A cat sat on a mat...
```
🚀 **Try now in Colab**

🔏 **Privacy.** Your data will be processed with the help of other people in the public swarm. Learn more about privacy [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety). For sensitive data, you can set up a [private swarm](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm) among people you trust, as sketched below.
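For orientation, here is a minimal sketch of pointing a client at a private swarm instead of the public one. It assumes the `initial_peers` argument described in the private-swarm guide; the multiaddress below is a placeholder for your own bootstrap peer, not a real address:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Placeholder multiaddress of your own bootstrap peer (see the guide above)
INITIAL_PEERS = ["/ip4/10.0.0.1/tcp/31337/p2p/QmYourBootstrapPeerID"]

model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(
    model_name,
    initial_peers=INITIAL_PEERS,  # route all requests through your swarm only
)
```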
🦙 **Want to run Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and 🤗 [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), then run `huggingface-cli login` in the terminal before loading the model. Or just try it in our [chatbot app](https://chat.petals.dev).
💬 **Any questions?** Ping us in [our Discord](https://discord.gg/KdThf2bWVU)!
## Connect your GPU and increase Petals capacity
Petals is a community-run system: we rely on people sharing their GPUs. You can check out the [available models](https://health.petals.dev) and help serve one of them! As an example, here is how to host part of [Stable Beluga 2](https://huggingface.co/stabilityai/StableBeluga2) on your GPU:
🐧 **Linux + Anaconda.** Run these commands for NVIDIA GPUs (or follow [this](https://github.com/bigscience-workshop/petals/wiki/Running-on-AMD-GPU) for AMD):
```bash
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install git+https://github.com/bigscience-workshop/petals
python -m petals.cli.run_server petals-team/StableBeluga2
```

🪟 **Windows + WSL.** Follow [this guide](https://github.com/bigscience-workshop/petals/wiki/Run-Petals-server-on-Windows) on our Wiki.
🐋 **Docker.** Run our [Docker](https://www.docker.com) image for NVIDIA GPUs (or follow [this](https://github.com/bigscience-workshop/petals/wiki/Running-on-AMD-GPU) for AMD):
```bash
sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
learningathome/petals:main \
python -m petals.cli.run_server --port 31330 petals-team/StableBeluga2
```

🍏 **macOS + Apple M1/M2 GPU.** Install [Homebrew](https://brew.sh/), then run these commands:
```bash
brew install python
python3 -m pip install git+https://github.com/bigscience-workshop/petals
python3 -m petals.cli.run_server petals-team/StableBeluga2
```
📚 **Learn more** (how to use multiple GPUs, start the server on boot, etc.)

💬 **Any questions?** Ping us in [our Discord](https://discord.gg/X7DgtxgMhc)!
🦙 **Want to host Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and 🤗 [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), generate an 🔑 [access token](https://huggingface.co/settings/tokens), then add `--token YOUR_TOKEN_HERE` to the `python -m petals.cli.run_server` command.
🔒 **Security.** Hosting a server does not allow others to run custom code on your computer. Learn more [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety).
🏆 **Thank you!** Once you load and host 10+ blocks, we can show your name or link on the [swarm monitor](https://health.petals.dev) as a way to say thanks. You can specify it with `--public_name YOUR_NAME`.
## How does it work?
- You load a small part of the model, then join a [network](https://health.petals.dev) of people serving the other parts. Single-batch inference runs at up to **6 tokens/sec** for **Llama 2** (70B) and up to **4 tokens/sec** for **Falcon** (180B), enough for [chatbots](https://chat.petals.dev) and interactive apps.
- You can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states. You get the comforts of an API with the flexibility of **PyTorch** and **๐ค Transformers**.
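As a concrete illustration of that flexibility, here is a short sketch extending the quickstart above. The generation arguments are the standard Hugging Face ones; the `inference_session` pattern follows the Petals API for keeping the servers' attention caches alive across calls (useful for chatbots), so verify its exact signature against the current docs:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Standard Hugging Face sampling arguments work unchanged.
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(
    inputs,
    max_new_tokens=20,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.8,  # soften the token distribution
    top_p=0.9,        # nucleus sampling
)
print(tokenizer.decode(outputs[0]))

# Interactive use: reuse one inference session so remote servers keep
# their attention caches between calls instead of recomputing the prefix.
with model.inference_session(max_length=512) as session:
    for prompt in ["A cat sat", " on a very"]:
        inputs = tokenizer(prompt, return_tensors="pt")["input_ids"]
        outputs = model.generate(inputs, max_new_tokens=5, session=session)
        print(tokenizer.decode(outputs[0]))
```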
📜 **Read paper**

📚 **See FAQ**

## 📚 Tutorials, examples, and more
Basic tutorials:
- Getting started: [tutorial](https://colab.research.google.com/drive/1uCphNY7gfAUkdDrTx21dZZwCOUDCMPw8?usp=sharing)
- Prompt-tune Llama-65B for text semantic classification: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb)
- Prompt-tune BLOOM to create a personified chatbot: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-personachat.ipynb) (a condensed prompt-tuning sketch follows the lists below)

Useful tools:
- [Chatbot web app](https://chat.petals.dev) (connects to Petals via an HTTP/WebSocket endpoint; a request sketch follows this list): [source code](https://github.com/petals-infra/chat.petals.dev)
- [Monitor](https://health.petals.dev) for the public swarm: [source code](https://github.com/petals-infra/health.petals.dev)
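If you would rather call the hosted endpoint than run a Petals client yourself, something like the following should work. This is a hedged sketch: the `/api/v1/generate` route and field names are taken from the chat.petals.dev source code linked above and may change, so treat that repository as the authority:

```python
import requests

# Hypothetical request to the chatbot backend; verify the route and
# parameter names against the chat.petals.dev repository's README.
response = requests.post(
    "https://chat.petals.dev/api/v1/generate",
    data={
        "model": "petals-team/StableBeluga2",
        "inputs": "A cat sat",
        "max_new_tokens": 5,
    },
    timeout=60,
)
print(response.json())  # expected to contain the generated continuation
```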
Advanced guides:
- Launch a private swarm: [guide](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm)
- Run a custom model: [guide](https://github.com/bigscience-workshop/petals/wiki/Run-a-custom-model-with-Petals)
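For orientation, here is a condensed sketch of the prompt-tuning recipe used in the tutorials above. The `tuning_mode="ptune"` and `pre_seq_len` arguments follow those notebooks; the single-sentence "dataset" is a stand-in, not a real task:

```python
import torch
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(
    model_name,
    tuning_mode="ptune",  # train soft prompts only; remote weights stay frozen
    pre_seq_len=16,       # number of trainable prompt tokens
)

# Only the local prompt embeddings require gradients.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-2
)

batch = tokenizer("A cat sat on a mat.", return_tensors="pt")["input_ids"]
for step in range(10):  # toy loop; a real task would iterate over a dataset
    loss = model(input_ids=batch, labels=batch).loss
    loss.backward()     # gradients flow back through the distributed layers
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss = {loss.item():.3f}")
```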
### Benchmarks

Please see **Section 3.3** of our [paper](https://arxiv.org/pdf/2209.01188.pdf).
### 🛠️ Contributing
Please see our [FAQ](https://github.com/bigscience-workshop/petals/wiki/FAQ:-Frequently-asked-questions#contributing) on contributing.
### 📜 Citation
Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel.
[Petals: Collaborative Inference and Fine-tuning of Large Models.](https://arxiv.org/abs/2209.01188)
_arXiv preprint arXiv:2209.01188,_ 2022.

```bibtex
@article{borzunov2022petals,
title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Ryabinin, Max and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
journal = {arXiv preprint arXiv:2209.01188},
year = {2022},
url = {https://arxiv.org/abs/2209.01188}
}
```

---
This project is a part of the BigScience research workshop.