Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
ai big-model data-parallelism deep-learning distributed-computing foundation-models heterogeneous-training hpc inference large-scale model-parallelism pipeline-parallelism
- Host: GitHub
- URL: https://github.com/hpcaitech/ColossalAI
- Owner: hpcaitech
- License: apache-2.0
- Created: 2021-10-28T16:19:44.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2024-04-12T08:49:34.000Z (7 months ago)
- Last Synced: 2024-04-12T12:18:45.463Z (7 months ago)
- Topics: ai, big-model, data-parallelism, deep-learning, distributed-computing, foundation-models, heterogeneous-training, hpc, inference, large-scale, model-parallelism, pipeline-parallelism
- Language: Python
- Homepage: https://www.colossalai.org
- Size: 29.4 MB
- Stars: 37,740
- Watchers: 377
- Forks: 4,233
- Open Issues: 416
Metadata Files:
- Readme: README.md
- Changelog: CHANGE_LOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Codeowners: .github/CODEOWNERS
Awesome Lists containing this project
- awesome - hpcaitech/ColossalAI - Making large AI models cheaper, faster and more accessible (Python)
- awesome-instruction-datasets - ColossalChat
- Awesome-LLM - https://github.com/hpcaitech/ColossalAI
- Awesome_Multimodel_LLM - Colossal-AI - Making large AI models cheaper, faster, and more accessible. (LLM Training Frameworks)
- awesome-lm-system - ColossalAI
- awesome-AI-system - Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training (ICPP'23)
- awesome-llmops - ColossalAI - An integrated large-scale model training system with efficient parallelization techniques. (Training / Frameworks for Training)
- awesome-repositories - hpcaitech/ColossalAI - Making large AI models cheaper, faster and more accessible (Python)
- awesome-llm-eval - ColossalAI - An integrated large-scale model training system with efficient parallelization techniques. (Frameworks-for-Training / Popular-LLM)
- AiTreasureBox - hpcaitech/ColossalAI - Making large AI models cheaper, faster and more accessible (Repos)
- awesome-for-beginners - Colossal-AI - An open-source deep learning system for large-scale model training and inference with high efficiency and low cost. (Python)
- awesome-list - ColossalAI - Provides a collection of parallel components and user-friendly tools to kickstart distributed training and inference in a few lines. (Deep Learning Framework / Deployment & Distribution)
- awesome-generative-ai - hpcaitech/ColossalAI
- StarryDivineSky - hpcaitech/ColossalAI
- Awesome-LLM - Colossal-AI - Making large AI models cheaper, faster, and more accessible. (LLM Training Frameworks)
- Awesome-instruction-tuning - ColossalChat
- my-awesome - hpcaitech/ColossalAI - ai,big-model,data-parallelism,deep-learning,distributed-computing,foundation-models,heterogeneous-training,hpc,inference,large-scale,model-parallelism,pipeline-parallelism pushed_at:2024-10 star:38.7k fork:4.3k Making large AI models cheaper, faster and more accessible (Python)
- awesome-production-machine-learning - Colossal-AI - A unified deep learning system for big model era, which helps users to efficiently and quickly deploy large AI model training and inference. (Computation Load Distribution)
- fucking-awesome-for-beginners - Colossal-AI - An open-source deep learning system for large-scale model training and inference with high efficiency and low cost. (Python)
- awesome-llm-and-aigc - "Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training" ([arXiv 2021](https://arxiv.org/abs/2110.14883)). (Summary)