Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/huggingface/nanotron
Minimalistic large language model 3D-parallelism training
- Host: GitHub
- URL: https://github.com/huggingface/nanotron
- Owner: huggingface
- License: apache-2.0
- Created: 2023-09-11T14:40:28.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-01-11T11:56:08.000Z (10 days ago)
- Last Synced: 2025-01-12T06:49:13.915Z (9 days ago)
- Language: Python
- Homepage:
- Size: 14.3 MB
- Stars: 1,379
- Watchers: 43
- Forks: 137
- Open Issues: 79
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- awesome-local-ai - Nanotron - Minimalistic large language model 3D-parallelism training. (Training)
- awesome-production-machine-learning - Nanotron - Nanotron provides distributed primitives to train a variety of models efficiently using 3D parallelism. (Training Orchestration)
- awesome-open-source-lms - Training Code
- StarryDivineSky - huggingface/nanotron
- awesome-LLM-resourses - nanotron - Minimalistic large language model 3D-parallelism training. (Fine-Tuning)
README
⚡️ Nanotron
Installation •
Quick Start •
Features •
Contributing
Pretraining models made easy
Nanotron is a library for pretraining transformer models. It provides a simple and flexible API to pretrain models on custom datasets. Nanotron is designed to be easy to use, fast, and scalable. It is built with the following principles in mind:
- **Simplicity**: Nanotron is designed to be easy to use. It provides a simple and flexible API to pretrain models on custom datasets.
- **Performance**: Optimized for speed and scalability, Nanotron uses the latest techniques to train models faster and more efficiently.

## Installation
```bash
# Requirements: Python>=3.10,<3.12
git clone https://github.com/huggingface/nanotron
cd nanotron
pip install --upgrade pip
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121
pip install -e .

# Install dependencies if you want to use the example scripts
pip install datasets transformers
pip install triton "flash-attn>=2.5.0" --no-build-isolation
```
> [!NOTE]
> If you get an `undefined symbol: ncclCommRegister` error, you should install torch 2.1.2 instead: `pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121`

> [!TIP]
> We log to wandb automatically if it's installed. To enable this, install it with `pip install wandb`. If you don't want to use wandb, you can run `wandb disabled`.

## Quick Start
### Training a tiny Llama model
The following command will train a tiny Llama model on a single node with 8 GPUs. The model will be saved in the `checkpoints` directory as specified in the config file.
```bash
CUDA_DEVICE_MAX_CONNECTIONS=1 torchrun --nproc_per_node=8 run_train.py --config-file examples/config_tiny_llama.yaml
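# Hedged sketch, not in the original README: multi-node runs should work with torchrun's
# standard rendezvous flags, e.g. for two nodes with 8 GPUs each (head-node address is a placeholder):
# CUDA_DEVICE_MAX_CONNECTIONS=1 torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 run_train.py --config-file examples/config_tiny_llama.yaml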
```

### Run generation from your checkpoint
```bash
torchrun --nproc_per_node=1 run_generate.py --ckpt-path checkpoints/10/ --tp 1 --pp 1
# We could set a larger TP for faster generation, and a larger PP in case of very large models.
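# Hedged example (assumes 2 GPUs are available and the checkpoint is compatible with the requested sharding):
# torchrun --nproc_per_node=2 run_generate.py --ckpt-path checkpoints/10/ --tp 2 --pp 1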
```

### Custom examples
You can find more examples in the [`/examples`](/examples) directory:

| Example | Description |
| --- | --- |
| `custom-dataloader` | Plug a custom dataloader into nanotron (see the sketch below) |
| `datatrove` | Use the datatrove library to load data |
| `doremi` | Use DoReMi to speed up training |
| `mamba` | Train an example Mamba model |
| `moe` | Train an example Mixture-of-Experts (MoE) model |
| `mup` | Use spectral µTransfer to scale up your model |
| `examples/config_tiny_llama_with_s3_upload.yaml` | Automatically upload checkpoints to S3 |

We're working on adding more examples soon! Feel free to open a PR to add your own example. 🚀
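As a loose illustration of what the `custom-dataloader` example covers, here is a minimal, hypothetical sketch in plain PyTorch: a dataset that yields token batches through a standard `DataLoader`. The class and function names below are illustrative only and are not part of nanotron's API; the real integration point lives in the `custom-dataloader` example in [`/examples`](/examples).

```python
# Hypothetical sketch only: a plain-PyTorch dataloader producing token batches.
# None of these names come from nanotron; see the custom-dataloader example for the real hook.
import torch
from torch.utils.data import DataLoader, Dataset


class ToyTokenDataset(Dataset):
    """Yields random token ids; replace with tokenized samples from your own corpus."""

    def __init__(self, vocab_size: int = 32_000, seq_len: int = 1024, n_samples: int = 1_000):
        self.vocab_size = vocab_size
        self.seq_len = seq_len
        self.n_samples = n_samples

    def __len__(self) -> int:
        return self.n_samples

    def __getitem__(self, idx: int) -> dict:
        input_ids = torch.randint(0, self.vocab_size, (self.seq_len,))
        # For causal LM pretraining the labels are typically the input ids (shifted inside the model).
        return {"input_ids": input_ids, "labels": input_ids.clone()}


def build_dataloader(micro_batch_size: int = 4) -> DataLoader:
    return DataLoader(ToyTokenDataset(), batch_size=micro_batch_size, shuffle=True, drop_last=True)


if __name__ == "__main__":
    batch = next(iter(build_dataloader()))
    print(batch["input_ids"].shape)  # torch.Size([4, 1024])
```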
## Features
We currently support the following features:
- [x] 3D parallelism (DP+TP+PP)
- [x] Expert parallelism for MoEs
- [x] AFAB and 1F1B schedules for PP
- [x] Explicit APIs for TP and PP which enables easy debugging
- [x] ZeRO-1 optimizer
- [x] FP32 gradient accumulation
- [x] Parameter tying/sharding
- [x] Custom module checkpointing for large models
- [x] Spectral µTransfer parametrization for scaling up neural networks
- [x] Mamba example

And we have on our roadmap:
- [ ] FP8 training
- [ ] ZeRO-3 optimizer (a.k.a. FSDP)
- [ ] `torch.compile` support
- [ ] Ring attention
- [ ] Interleaved 1F1B schedule

## Credits
We would like to thank everyone working on LLMs, especially those sharing their work openly, from which we took great inspiration: Nvidia for `Megatron-LM/apex`, Microsoft for `DeepSpeed`, and HazyResearch for `flash-attn`.