Projects in Awesome Lists tagged with ring-attention
A curated list of projects in awesome lists tagged with ring-attention.
https://github.com/feifeibear/long-context-attention
USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference
attention-is-all-you-need deepspeed-ulysses llm-inference llm-training pytorch ring-attention
Last synced: 14 May 2025
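To make the "ring attention" tag above concrete, here is a minimal single-process sketch of the idea that USP combines with DeepSpeed-Ulysses: each rank keeps its query shard and streams key/value shards around a ring, merging partial results with an online softmax so the full sequence-length-squared attention matrix is never materialized. Names such as ring_attention and num_ranks are illustrative only and are not the long-context-attention API.

```python
# A minimal, single-process sketch of ring attention (illustrative, not the
# long-context-attention / USP API). The ring of devices is simulated by a
# Python loop over shards; a real implementation would send/recv K/V shards
# between neighboring ranks while overlapping communication with compute.
import torch

def ring_attention(q, k, v, num_ranks=4):
    # q, k, v: [seq, dim]; shard the sequence across simulated ranks
    d = q.shape[-1]
    q_shards = q.chunk(num_ranks)
    k_shards = list(k.chunk(num_ranks))
    v_shards = list(v.chunk(num_ranks))
    outputs = []
    for r, q_r in enumerate(q_shards):
        acc = torch.zeros_like(q_r)                        # running numerator
        denom = torch.zeros(q_r.shape[0], 1)               # running denominator
        m = torch.full((q_r.shape[0], 1), float("-inf"))   # running row max
        for step in range(num_ranks):
            # In a real ring, rank r would receive this shard from its neighbor.
            j = (r + step) % num_ranks
            scores = q_r @ k_shards[j].T / d ** 0.5
            m_new = torch.maximum(m, scores.max(dim=-1, keepdim=True).values)
            corr = torch.exp(m - m_new)                    # rescale old partials
            p = torch.exp(scores - m_new)
            acc = acc * corr + p @ v_shards[j]
            denom = denom * corr + p.sum(dim=-1, keepdim=True)
            m = m_new
        outputs.append(acc / denom)
    return torch.cat(outputs)

# Sanity check against dense softmax attention.
q, k, v = (torch.randn(16, 8) for _ in range(3))
dense = torch.softmax(q @ k.T / 8 ** 0.5, dim=-1) @ v
assert torch.allclose(ring_attention(q, k, v), dense, atol=1e-5)
```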
https://github.com/internlm/internevo
InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies.
910b deepspeed-ulysses flash-attention gemma internlm internlm2 llama3 llava llm-framework llm-training multi-modal pipeline-parallelism pytorch ring-attention sequence-parallelism tensor-parallelism transformers-models zero3
Last synced: 14 Apr 2025
https://github.com/damo-nlp-sg/inf-clip
💣💣 The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss". A highly memory-efficient CLIP training scheme.
clip contrastive-learning flash-attention infinite-batch-size memory-efficient ring-attention
Last synced: 09 Apr 2025
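As a rough illustration of why Inf-CL's tag list pairs "infinite-batch-size" with "memory-efficient", the toy sketch below computes the CLIP contrastive loss over logit tiles instead of one giant batch-by-batch similarity matrix. It is heavily simplified: it only chunks one axis in plain PyTorch, whereas the actual Inf-CL work tiles both dimensions with an online log-sum-exp, ring-style communication, and fused kernels. The function name and chunking scheme are assumptions for illustration, not the Inf-CLIP API.

```python
# Toy tile-wise CLIP loss (illustrative, not the Inf-CLIP API): only a
# [chunk, batch] logits tile exists at any time instead of [batch, batch].
import torch
import torch.nn.functional as F

def chunked_clip_loss(img_emb, txt_emb, chunk_size=128, temperature=0.07):
    # img_emb, txt_emb: [batch, dim], assumed L2-normalized
    batch = img_emb.shape[0]
    targets_full = torch.arange(batch)
    losses = []
    for start in range(0, batch, chunk_size):
        img_chunk = img_emb[start:start + chunk_size]       # [c, dim]
        logits = img_chunk @ txt_emb.T / temperature        # [c, batch] tile
        targets = targets_full[start:start + chunk_size]    # matching pairs
        losses.append(F.cross_entropy(logits, targets, reduction="sum"))
    return torch.stack(losses).sum() / batch

# Sanity check against the dense loss over the full similarity matrix.
img = F.normalize(torch.randn(512, 64), dim=-1)
txt = F.normalize(torch.randn(512, 64), dim=-1)
dense = F.cross_entropy(img @ txt.T / 0.07, torch.arange(512))
assert torch.allclose(chunked_clip_loss(img, txt), dense, atol=1e-4)
```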