Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/huggingface/large_language_model_training_playbook
An open collection of implementation tips, tricks and resources for training large language models
cuda large-language-models llm nccl nlp performance python pytorch scalability troubleshooting
Last synced: 2 days ago
- Host: GitHub
- URL: https://github.com/huggingface/large_language_model_training_playbook
- Owner: huggingface
- License: apache-2.0
- Created: 2023-03-06T08:30:35.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-03-08T18:06:22.000Z (over 1 year ago)
- Last Synced: 2024-08-03T09:09:42.642Z (3 months ago)
- Topics: cuda, large-language-models, llm, nccl, nlp, performance, python, pytorch, scalability, troubleshooting
- Language: Python
- Homepage:
- Size: 162 KB
- Stars: 448
- Watchers: 69
- Forks: 19
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
README
# 📖 The Large Language Model Training Playbook
This playbook is a companion to the [LLM Training Handbook](https://github.com/huggingface/llm_training_handbook) which contains a lot more details and scripts.
An open collection of implementation tips, tricks and resources for training large language models.
The following covers questions on various topics that are interesting or challenging when training large language models.
## [Deciding on a model architecture](./architecture/)
## Deciding on a model parallelism strategy
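A rough back-of-the-envelope memory estimate helps decide whether plain data parallelism is enough or whether ZeRO-style sharding, tensor, or pipeline parallelism is needed. The sketch below is illustrative (not from the playbook) and assumes mixed-precision Adam, where weights, gradients, and optimizer state take roughly 16 bytes per parameter, ignoring activations:

```python
# Rough sketch: estimate whether a model's training state fits on one GPU
# before reaching for tensor/pipeline parallelism or optimizer sharding.
# Assumes mixed-precision Adam: fp16/bf16 weights (2 B) + grads (2 B)
# + fp32 master weights (4 B) + Adam m and v (4 B + 4 B) ~= 16 bytes/param.
# Activations are ignored here, although they often dominate in practice.

def training_state_gib(n_params: float, bytes_per_param: int = 16) -> float:
    return n_params * bytes_per_param / 2**30

for n in (1e9, 7e9, 70e9):
    gib = training_state_gib(n)
    fits = "fits" if gib < 80 else "does not fit"
    print(f"{n / 1e9:>5.0f}B params -> ~{gib:,.0f} GiB of weight+optimizer state "
          f"({fits} on a single 80 GiB GPU before activations)")
```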
## Deciding on the model size
#### Scaling laws
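For intuition only (an illustrative sketch, not the playbook's numbers): a Chinchilla-style rule of thumb is roughly 20 training tokens per parameter for compute-optimal training, with total training compute approximated by C ≈ 6·N·D FLOPs:

```python
# Rule-of-thumb sketch: ~20 tokens per parameter for compute-optimal training,
# and total compute approximated by C ~= 6 * N * D FLOPs.

def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    return n_params * tokens_per_param

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

n = 7e9                                  # example: 7B-parameter model
d = compute_optimal_tokens(n)            # ~140B tokens
print(f"tokens: {d:.2e}, training FLOPs: {training_flops(n, d):.2e}")
```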
#### Trade-off of large language model sizes
## Issues and questions related to tensor precision
### What to choose between fp32, fp16, and bf16
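As an illustrative sketch (not the playbook's own code): on recent hardware, bf16 is often preferred because it keeps fp32's exponent range and therefore usually trains without loss scaling, whereas fp16 has a narrower range and typically needs a gradient scaler. A minimal PyTorch example, assuming a CUDA device and using a throwaway linear layer as a stand-in for the model:

```python
import torch

# Minimal sketch: run forward/backward under bf16 autocast. bf16 shares
# fp32's exponent range, so no loss scaling is needed (fp16 would need one).
model = torch.nn.Linear(1024, 1024).cuda()            # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).float().pow(2).mean()              # dummy loss, reduced in fp32
loss.backward()
optimizer.step()
optimizer.zero_grad(set_to_none=True)
```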
### Mixed precision for optimizers, weights, and specific modules
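For fp16 training in particular, a common pattern is to keep parameters and optimizer state in fp32 ("master weights") while computing forward/backward under autocast, with dynamic loss scaling. A minimal sketch with a placeholder model and loss (not the playbook's code):

```python
import torch

# Sketch: fp16 autocast with dynamic loss scaling. Parameters and optimizer
# state stay in fp32; only the autocast region computes in fp16.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()
x = torch.randn(8, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = model(x).float().pow(2).mean()
scaler.scale(loss).backward()
scaler.unscale_(optimizer)                              # gradients back in fp32
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad(set_to_none=True)
```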
### How to fine-tune and integrate a model trained in one precision into another precision
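One simple approach (an illustrative sketch, with a hypothetical checkpoint path and a placeholder architecture) is to upcast a bf16 checkpoint to fp32 before fine-tuning, so the optimizer works on full-precision weights, then cast back down afterwards if needed:

```python
import torch

# Sketch: upcast a bf16 checkpoint to fp32 before fine-tuning.
# The checkpoint path and Linear layer are placeholders for a real model.
model = torch.nn.Linear(1024, 1024)
state = torch.load("checkpoint_bf16.pt", map_location="cpu")    # hypothetical file
state = {k: v.to(torch.float32) for k, v in state.items()}      # bf16 -> fp32
model.load_state_dict(state)
# ... fine-tune in fp32 (or under bf16 autocast), then optionally cast back:
model_bf16 = model.to(torch.bfloat16)
```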
## [Selecting training hyper-parameters and model initializations](./hparams)
### Learning rate and learning rate schedules
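A common choice is linear warmup followed by cosine decay. A minimal PyTorch sketch, where the step counts and peak learning rate are arbitrary placeholders (here decaying all the way to zero for simplicity):

```python
import math
import torch

# Sketch: linear warmup then cosine decay, implemented via LambdaLR.
warmup_steps, total_steps = 2_000, 100_000             # example values

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))

model = torch.nn.Linear(16, 16)                         # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)      # peak LR
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# in the training loop: optimizer.step(); scheduler.step()
```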
### Questions on batch size
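When the target global batch size does not fit in memory, gradient accumulation sums gradients over several micro-batches before each optimizer step. A small sketch with placeholder shapes (not the playbook's code):

```python
import torch

# Sketch: reach a larger effective batch size by accumulating gradients
# over several micro-batches before stepping the optimizer.
model = torch.nn.Linear(1024, 1024)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
accum_steps = 8                                         # effective batch = 8 * micro-batch

for _ in range(accum_steps):
    x = torch.randn(4, 1024)                            # one micro-batch
    loss = model(x).pow(2).mean() / accum_steps         # scale so grads average
    loss.backward()
optimizer.step()
optimizer.zero_grad(set_to_none=True)
```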
## [Maximizing throughput](./throughput)
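Two numbers worth tracking are tokens per second and model FLOPs utilization (MFU), i.e. achieved FLOPs per second divided by the hardware's peak. A back-of-the-envelope sketch using the ≈6·N FLOPs-per-token approximation; all inputs below are example values, not measurements:

```python
# Sketch: estimate tokens/sec and MFU from a measured step time.
n_params = 7e9                 # model size (example)
tokens_per_step = 2e6          # global batch in tokens (example)
step_time_s = 10.0             # measured seconds per optimizer step (example)
peak_flops_per_gpu = 312e12    # e.g. A100 bf16 dense peak, per spec sheet
n_gpus = 64

tokens_per_s = tokens_per_step / step_time_s
achieved_flops = 6 * n_params * tokens_per_s
mfu = achieved_flops / (peak_flops_per_gpu * n_gpus)
print(f"{tokens_per_s:,.0f} tokens/s, MFU ~ {mfu:.1%}")
```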
## [Avoiding, recovering from and understanding instabilities](./instabilities)
### Detecting instabilities early
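Two cheap early-warning signals are the global gradient norm and sudden jumps of the loss above its recent average. A toy monitoring sketch (the thresholds and model are arbitrary examples, not recommendations from the playbook):

```python
import torch

# Sketch: flag loss spikes and large gradient norms, both early symptoms
# of divergence, inside a toy training loop.
model = torch.nn.Linear(1024, 1024)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
ema_loss = None

for step in range(100):
    x = torch.randn(8, 1024)
    loss = model(x).pow(2).mean()
    loss.backward()
    grad_norm = float(torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0))
    ema_loss = loss.item() if ema_loss is None else 0.99 * ema_loss + 0.01 * loss.item()
    if loss.item() > 2.0 * ema_loss or grad_norm > 10.0:      # arbitrary thresholds
        print(f"step {step}: possible instability "
              f"(loss={loss.item():.3f}, ema={ema_loss:.3f}, grad_norm={grad_norm:.2f})")
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
```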
### Training tips to reduce instabilities
## Issues with data and data processing
## [Debugging software and hardware failures](./debug/)
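When a multi-node job hangs or crashes, it helps to first rule out NCCL/network problems with a tiny collective-communication check, independent of the training code. An illustrative sketch to launch with `torchrun` (the script name is arbitrary):

```python
import os
import torch
import torch.distributed as dist

# Sketch: minimal all-reduce check to separate NCCL/network issues from bugs
# in the training code. Launch with, e.g.:
#   torchrun --nproc_per_node=8 nccl_check.py
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

rank, world = dist.get_rank(), dist.get_world_size()
t = torch.ones(1, device="cuda") * rank
dist.all_reduce(t)                       # sums the ranks: 0 + 1 + ... + (world - 1)
print(f"rank {rank}: all_reduce -> {t.item()} (expected {world * (world - 1) / 2})")
dist.destroy_process_group()
```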
## Tips on what metrics to follow during the training
## [Resources](./resources/)