Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

Projects in Awesome Lists tagged with pipeline-parallelism

A curated list of projects in awesome lists tagged with pipeline-parallelism.
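
Pipeline parallelism places consecutive groups of a model's layers ("stages") on different devices and streams micro-batches through them, so stages compute concurrently instead of waiting for the whole batch. A minimal, framework-free PyTorch sketch of the idea (device names and layer sizes are illustrative):

```python
import torch
import torch.nn as nn

# Two pipeline stages placed on two (illustrative) devices.
stage0 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:0")
stage1 = nn.Linear(512, 10).to("cuda:1")

x = torch.randn(32, 512)
outputs = []
for mb in x.chunk(4):  # split the mini-batch into 4 micro-batches
    h = stage0(mb.to("cuda:0"))
    # Asynchronous CUDA launches let stage 0 begin the next micro-batch
    # while stage 1 is still busy with this one.
    outputs.append(stage1(h.to("cuda:1")))
out = torch.cat(outputs)
# Real schedules (GPipe, 1F1B, Chimera) also interleave backward passes to
# keep every stage busy; the libraries below automate this.
```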

https://github.com/microsoft/deepspeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

billion-parameters compression data-parallelism deep-learning gpu inference machine-learning mixture-of-experts model-parallelism pipeline-parallelism pytorch trillion-parameters zero

Last synced: 17 Dec 2024
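
A minimal sketch of how DeepSpeed's pipeline API is typically used; the config path, dummy data, and layer sizes are illustrative assumptions, not from this listing:

```python
import torch
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

# Express the network as a flat list of layers so DeepSpeed can partition it.
layers = [nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)]
model = PipelineModule(layers=layers, num_stages=2,  # split across 2 stages
                       loss_fn=nn.CrossEntropyLoss())

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",  # hypothetical DeepSpeed config file
)

# Dummy (inputs, labels) micro-batches for illustration.
train_iter = iter([(torch.randn(8, 1024), torch.randint(0, 10, (8,)))] * 10)

# One call runs forward/backward over all micro-batches, then steps the optimizer.
loss = engine.train_batch(data_iter=train_iter)
```

Run under the `deepspeed` launcher so each pipeline stage gets its own process.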

https://github.com/paddlepaddle/paddlefleetx

PaddlePaddle's large-model development suite, providing an end-to-end toolchain for developing large language models, cross-modal large models, biocomputing large models, and more.

benchmark cloud data-parallelism distributed-algorithm elastic fleet-api large-scale lightning model-parallelism paddlecloud paddlepaddle pipeline-parallelism pretraining self-supervised-learning unsupervised-learning

Last synced: 18 Dec 2024
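
PaddleFleetX builds on Paddle's collective `fleet` API. A rough sketch of enabling pipeline parallelism through a distributed strategy; the config keys shown are assumptions drawn from Paddle's distributed documentation:

```python
import paddle.distributed.fleet as fleet

strategy = fleet.DistributedStrategy()
strategy.pipeline = True  # enable pipeline parallelism
strategy.pipeline_configs = {
    "micro_batch_size": 2,  # size of each micro-batch streamed through the stages
    "accumulate_steps": 4,  # micro-batches accumulated per optimizer step
}
fleet.init(is_collective=True, strategy=strategy)
# ... then build the model and optimizer and train as usual under fleet.
```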

https://github.com/Coobiw/MPP-LLaVA

Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel, based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-like MLLM on 24 GB RTX 3090/4090 GPUs.

deepspeed fine-tuning mllm model-parallel multimodal-large-language-models pipeline-parallelism pretraining qwen video-language-model video-large-language-models

Last synced: 16 Oct 2024

https://github.com/internlm/internevo

InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies.

910b deepspeed-ulysses flash-attention gemma internlm internlm2 llama3 llava llm-framework llm-training multi-modal pipeline-parallelism pytorch ring-attention sequence-parallelism tensor-parallelism transformers-models zero3

Last synced: 14 Dec 2024

https://github.com/alibaba/EasyParallelLibrary

Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.

data-parallelism deep-learning distributed-training gpu memory-efficient model-parallelism pipeline-parallelism

Last synced: 05 Nov 2024

https://github.com/AlibabaPAI/DAPPLE

An Efficient Pipelined Data Parallel Approach for Training Large Models

distribution-strategy-planner hybrid-parallelism pipeline-parallelism

Last synced: 07 Nov 2024

https://github.com/saareliad/FTPipe

FTPipe and related pipeline model parallelism research.

deep-neural-networks distributed-training fine-tuning nlp pipeline-parallelism t5

Last synced: 07 Nov 2024

https://github.com/Shigangli/Chimera

Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines.

distributed-deep-learning pipeline-parallelism transformers

Last synced: 07 Nov 2024
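
Chimera's bidirectional schedule attacks the pipeline "bubble", the idle time at the head and tail of each schedule. For reference, an ideal GPipe-style schedule with p stages and m micro-batches idles for a fraction (p - 1) / (m + p - 1) of each step, as the sketch below computes:

```python
def gpipe_bubble_fraction(p: int, m: int) -> float:
    """Idle fraction of an ideal GPipe schedule: p stages, m micro-batches."""
    return (p - 1) / (m + p - 1)

print(gpipe_bubble_fraction(4, 16))  # ~0.16: more micro-batches shrink the bubble
```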

https://github.com/ler0ever/hpgo

Development of Project HPGO | Hybrid Parallelism Global Orchestration

data-parallelism distributed-training gpipe machine-learning model-parallelism pipedream pipeline-parallelism pytorch rust tensorflow

Last synced: 29 Oct 2024