# Megatron-LLM

This library enables pre-training and fine-tuning of large language models (LLMs) at scale.
Our repository is a modification of the [original Megatron-LM codebase](https://github.com/NVIDIA/Megatron-LM) by NVIDIA.

Key features added include:
- architectures supported: [Llama](https://arxiv.org/abs/2302.13971), [Llama 2](https://arxiv.org/abs/2307.09288), [Code Llama](https://arxiv.org/abs/2308.12950), [Falcon](https://huggingface.co/tiiuae) and [Mistral](https://arxiv.org/abs/2310.06825)
- training of large models (70B Llama 2, 65B Llama 1, 34B Code Llama, 40B Falcon and Mistral) on commodity hardware across multiple nodes
- 3-way parallelism: tensor-parallel, pipeline-parallel and data-parallel training (inherited from Megatron)
- full pretraining, finetuning and instruction-tuning support
- support for special tokens & tokenizers
- grouped-query attention (GQA) and multi-query attention (MQA); see the sketch after this list
- rotary position embeddings (RoPE), RMS layer norm, LIMA dropout
- [RoPE scaling](https://together.ai/blog/llama-2-7b-32k) for longer attention contexts (also sketched below)
- FlashAttention 2
- BF16 / FP16 training
- WandB integration
- metrics support: easily add custom metrics to evaluate on the validation set while training
- conversion to and from the Hugging Face Hub
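For readers unfamiliar with GQA/MQA, the idea is that several query heads share one key/value head, shrinking the KV cache; MQA is the extreme case of a single KV head. A minimal PyTorch sketch of the shape bookkeeping (illustrative only, with hypothetical names and no causal masking; not this repository's implementation):

```
import torch

def grouped_query_attention(q, k, v):
    # q: (batch, n_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    n_heads, head_dim = q.shape[1], q.shape[-1]
    n_kv_heads = k.shape[1]
    groups = n_heads // n_kv_heads  # query heads sharing each KV head
    # Repeat each KV head so it lines up with its group of query heads.
    k = k.repeat_interleave(groups, dim=1)
    v = v.repeat_interleave(groups, dim=1)
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5
    return torch.softmax(scores, dim=-1) @ v

q = torch.randn(2, 8, 16, 64)  # 8 query heads
k = torch.randn(2, 2, 16, 64)  # 2 KV heads -> groups of 4
v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # (2, 8, 16, 64); MQA would use 1 KV head
```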
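Similarly, RoPE and RoPE scaling fit in a few lines: rotary embeddings rotate pairs of channels by position-dependent angles, and position-interpolation-style scaling divides positions by a constant so longer contexts fall back into the trained range. A hedged sketch under those assumptions (not the kernel used here):

```
import torch

def rope(x, base=10000.0, scale=1.0):
    # x: (..., seq, dim) with even dim. scale > 1 compresses positions
    # (position interpolation), one common form of "RoPE scaling".
    seq, dim = x.shape[-2], x.shape[-1]
    inv_freq = 1.0 / base ** (torch.arange(0, dim, 2).float() / dim)
    pos = torch.arange(seq).float() / scale
    angles = pos[:, None] * inv_freq      # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin  # rotate each channel pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(2, 8, 16, 64)
q_rot = rope(q)             # standard RoPE
q_long = rope(q, scale=8.0) # e.g. a 4k-trained model stretched toward 32k
```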

# Documentation

Take a look at [the online documentation](https://epfllm.github.io/Megatron-LLM).

Alternatively, build the docs from source:
```
cd docs/
pip install -r requirements.txt
make html
```

# Example models trained with *Megatron-LLM*
- [TOWER: An Open Multilingual Large Language Model for Translation-Related Tasks](https://arxiv.org/abs/2402.17733)
- [Executable Code Actions Elicit Better LLM Agents](https://arxiv.org/abs/2402.01030)
- [Sailor: A suite of Open Language Models tailored for South-East Asia](https://arxiv.org/abs/2404.03608)
- [Meditron 70b: Scaling Medical Pretraining for Large Language Models](https://huggingface.co/epfl-llm/meditron-70b)
- [Llama2-70b-OAsst-sft-v10](https://huggingface.co/OpenAssistant/llama2-70b-oasst-sft-v10)
- [Falcon-40b-megacode2-OAsst](https://huggingface.co/OpenAssistant/falcon-40b-megacode2-oasst)
- [CodeLlama-13b-OAsst-sft-v10](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10)
- [Meditron 7b](https://huggingface.co/epfl-llm/meditron-7b)
- ...

(Let us know about yours!)

# Citation

If you use this software, please cite it:


```
@software{epfmgtrn,
  author = {Alejandro Hernández Cano and
            Matteo Pagliardini and
            Andreas Köpf and
            Kyle Matoba and
            Amirkeivan Mohtashami and
            Xingyao Wang and
            Olivia Simin Fan and
            Axel Marmet and
            Deniz Bayazit and
            Igor Krawczuk and
            Zeming Chen and
            Francesco Salvi and
            Antoine Bosselut and
            Martin Jaggi},
  title = {epfLLM Megatron-LLM},
  year = 2023,
  url = {https://github.com/epfLLM/Megatron-LLM}
}
```