Awesome-Efficient-AIGC
A curated list of papers, documentation, and code on efficient AIGC. This repo aims to collect resources for efficient AIGC research, covering both language and vision, and is continuously improving. Pull requests adding works (papers, repositories) missing from the repo are welcome.
https://github.com/Efficient-ML/Awesome-Efficient-AIGC
Language
2023
- [Nature] Parameter-efficient fine-tuning of large-scale pre-trained language models [[code](https://github.com/thunlp/OpenDelta)]
- [ArXiv] LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models [[code](https://github.com/yxli2123/LoftQ)]
- [NeurIPS] Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
- [ICML] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models [[code](https://github.com/mit-han-lab/smoothquant)]
- [ICML] FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization [[code](https://openreview.net/attachment?id=-tYCaP0phY_&name=supplementary_material)]
- [ICML
- [ICML] GPT-Zip: Deep Compression of Finetuned Large Language Models
- [ICML] QIGen: Generating Efficient Kernels for Quantized Inference on Large Language Models [[code](https://github.com/IST-DASLab/QIGen)]
- [ICLR] GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers [[code](https://github.com/IST-DASLab/gptq)]
- [NeurIPS
- [ICML] The case for 4-bit precision: k-bit Inference Scaling Laws
- [ACL] PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models
- [ACL] Boost Transformer-based Language Models with GPU-Friendly Sparsity and Quantization
- [EMNLP] Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?
- [EMNLP] Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models
- [EMNLP] LLM-FP4: 4-Bit Floating-Point Quantized Transformers [[code](https://github.com/nbasyl/LLM-FP4)]
- [EMNLP
- [ISCA] OliVe: Accelerating Large Language Models via Hardware-friendly Outlier-Victim Pair Quantization
- [ArXiv] ZeroQuant-V2: Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation
- [ArXiv] LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models
- [ArXiv
- [ArXiv] LLM-QAT: Data-Free Quantization Aware Training for Large Language Models
- [ArXiv] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration [[code](https://github.com/mit-han-lab/llm-awq)]
- [ArXiv] Training Transformers with 4-bit Integers [[code](https://github.com/xijiu9/Train_Transformers_with_INT4)]
- [ArXiv] SqueezeLLM: Dense-and-Sparse Quantization [[code](https://github.com/SqueezeAILab/SqueezeLLM)]
- [ArXiv
- [ArXiv] SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression [[code](https://github.com/Vahe1994/SpQR)]
- [ArXiv] QuIP: 2-Bit Quantization of Large Language Models With Guarantees [[code](https://github.com/jerry-chee/QuIP)]
- [ArXiv
- [ArXiv
- [ArXiv] RPTQ: Reorder-based Post-training Quantization for Large Language Models [[code](https://github.com/hahnyuan/RPTQ4LLM)]
- [ArXiv - Bit Quantization on Large Language Models
- [ArXiv] INT2.1: Towards Fine-Tunable Quantized Large Language Models with Error Correction through Low-Rank Adaptation
- [ArXiv] INT-FP-QSim: Mixed Precision and Formats For Large Language Models and Vision Transformers [[code](https://github.com/lightmatter-ai/INT-FP-QSim)]
- [ArXiv
- [ArXiv] ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats
- [ArXiv] NUPES: Non-Uniform Post-Training Quantization via Power Exponent Search
- [ArXiv] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models
- [ArXiv] Gradient-Based Post-Training Quantization: Challenging the Status Quo
- [ArXiv] FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs
- [ArXiv] MEMORY-VQ: Compression for Tractable Internet-Scale Memory
- [ArXiv] FPTQ: Fine-grained Post-Training Quantization for Large Language Models
- [ArXiv] eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models
- [ArXiv] QuantEase: Optimization-based Quantization for Language Models - An Efficient and Intuitive Algorithm
- [ArXiv] Norm Tweaking: High-performance Low-bit Quantization of Large Language Models
- [ArXiv] Understanding the Impact of Post-Training Quantization on Large Language Models
- [ArXiv - compressor)] 
- [ArXiv] Efficient Post-training Quantization with FP8 Formats [[code](https://github.com/intel/neural-compressor)]
- [ArXiv] Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models
- [ArXiv] ModuLoRA: Finetuning 3-Bit LLMs on Consumer GPUs by Integrating with Modular Quantizers
- [ArXiv] PB-LLM: Partially Binarized Large Language Models [[code](https://github.com/hahnyuan/BinaryLLM)]
- [ArXiv] Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM
- [ArXiv] QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources
- [ArXiv] QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models
- [ArXiv - compressor)] 
- [ArXiv] BitNet: Scaling 1-bit Transformers for Large Language Models [[code](https://github.com/kyegomez/BitNet)]
- [ArXiv] FP8-LM: Training FP8 Large Language Models [[code](https://github.com/Azure/MS-AMP)]
- [ArXiv] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving [[code](https://github.com/efeslab/Atom)]
- [ArXiv] AWEQ: Post-Training Quantization with Activation-Weight Equalization for Large Language Models
- [ArXiv
- [ICML
- [ICML] SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot [[code](https://github.com/IST-DASLab/sparsegpt)]
- [ICML] LoSparse: Structured Compression of Large Language Models based on Low-Rank and Sparse Approximation [[code](https://github.com/yxli2123/LoSparse)]
- [ICML
- [ICLR - DASLab/gptq)] 
- [NeurIPS] LLM-Pruner: On the Structural Pruning of Large Language Models [[code](https://github.com/horseee/LLM-Pruner)]
- [AutoML
- [VLDB] Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity [[code](https://github.com/AlibabaResearch/flash-llm)]
- [ArXiv
- [ArXiv] LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning
- [ArXiv
- [ArXiv] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning [[code](https://github.com/princeton-nlp/LLM-Shearing)]
- [ArXiv
- [ICLR
- [NeurIPS] Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models [[code](https://github.com/wimh966/outlier_suppression)]
- [ArXiv] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity [[code](https://github.com/VITA-Group/Junk_DNA_Hypothesis)]
- [ArXiv
- [ArXiv] Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs [[code](https://github.com/zyxxmu/DSnoT)]
- [ArXiv] One-Shot Sensitivity-Aware Mixed Sparsity Pruning for Large Language Models
- [ArXiv] E-Sparse: Boosting the Large Language Model Inference through Entropy-based N:M Sparsity
- [ArXiv
- [ACL] Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step [[code](https://github.com/allenai/cot_distillation)]
- [ACL
- [ACL
- [ACL] SCOTT: Self-Consistent Chain-of-Thought Distillation [[code](https://github.com/wangpf3/consistent-CoT-distillation)]
- [ACL] AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression
- [ACL - teacher)]
- [ACL
- [ACL] Cost-effective Distillation of Large Language Models [[code](https://github.com/Sayan21/MAKD)]
- [ACL] Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes [[code](https://github.com/google-research/distilling-step-by-step)]
- [EMNLP - to-Reason)]
- [EMNLP - EMNLP-2023)]
- [EMNLP] MCC-KD: Multi-CoT Consistent Knowledge Distillation
- [EMNLP
- [ArXiv] LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions [[code](https://github.com/mbzuai-nlp/LaMini-LM)]
- [ArXiv] Task-agnostic Distillation of Encoder-Decoder Language Models
- [ArXiv] Lion: Adversarial Distillation of Closed-Source Large Language Model [[code](https://github.com/YJiangcm/Lion)]
- [ArXiv] Program-aided Distillation Specializes Large Models in Reasoning
- [ArXiv
- [ArXiv
- [ArXiv
- [ArXiv] GKD: Generalized Knowledge Distillation for Auto-regressive Sequence Models
- [ArXiv] Chain-of-Thought Prompt Distillation for Multimodal Named Entity Recognition and Multimodal Relation Extraction
- [ArXiv] Sci-CoT: Leveraging Large Language Models for Enhanced Knowledge Distillation in Small Models for Scientific QA
- [ArXiv
- [ArXiv
- [ArXiv] How Does Calibration Data Affect the Post-training Pruning and Quantization of Large Language Models?
- [NeurIPS] Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning [[code](https://github.com/BaohaoLiao/mefts)]
- [ArXiv] Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization
- [ArXiv] LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models [[code](https://github.com/dvlab-research/LongLoRA)]
- [ArXiv] S-LoRA: Serving Thousands of Concurrent LoRA Adapters [[code](https://github.com/S-LoRA/S-LoRA)]
- [ACL - instruction-effectiveness)]
- [EMNLP] Adapting Language Models to Compress Contexts [[code](https://github.com/princeton-nlp/AutoCompressors)]
- [EMNLP
- [EMNLP
- [EMNLP] Batch Prompting: Efficient Inference with Large Language Model APIs [[code](https://github.com/xlang-ai/batch-prompting)]
- [ArXiv
- [ArXiv - Context Learning
- [ArXiv] Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt
- [ArXiv] In-context Autoencoder for Context Compression in a Large Language Model [[code](https://github.com/getao/icae)]
- [ArXiv
- [ArXiv
- [ArXiv
- [ArXiv
- [ArXiv] RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation [[code](https://github.com/carriex/recomp)]
- [ArXiv] HyperAttention: Long-context Attention in Near-Linear Time
- [ArXiv
- [ArXiv] LQ-LoRA: Low-rank Plus Quantized Matrix Decomposition for Efficient Language Model Finetuning [[code](https://github.com/HanGuo97/lq-lora)]
- [ArXiv] QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models [[code](https://github.com/yuhuixu1993/qa-lora)]
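Most of the post-training weight quantization papers listed above (GPTQ, AWQ, SqueezeLLM, and others) compare against round-to-nearest (RTN) quantization as the baseline. As a rough illustration of what that baseline does, here is a minimal NumPy sketch of per-channel symmetric RTN weight quantization; all names are illustrative and not taken from any of the listed codebases:

```python
import numpy as np

def quantize_rtn(w: np.ndarray, n_bits: int = 4):
    """Per-output-channel symmetric round-to-nearest weight quantization."""
    qmax = 2 ** (n_bits - 1) - 1              # e.g. 7 for signed 4-bit
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float weights from integers and scales."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16)).astype(np.float32)
q, s = quantize_rtn(w, n_bits=4)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).max()  # bounded by half of the largest row scale
```

Each row stores one float scale next to its integer weights; the per-weight reconstruction error is bounded by half that row's scale, which is the error that the calibration-based methods above aim to shrink further.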
2024
- [arXiv] How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study [[code](https://github.com/Macaronlin/LLaMA3-Quantization)] [[HuggingFace](https://huggingface.co/LLMQ)]
- [ArXiv] Accurate LoRA-Finetuning Quantization of LLMs via Information Retention [[code](https://github.com/htqin/IR-QLoRA)]
- [ArXiv] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs [[code](https://github.com/Aaronhuang-778/BiLLM)]
- [ArXiv] DB-LLM: Accurate Dual-Binarization for Efficient LLMs
- [ArXiv
- [ArXiv - Efficient Tuning of Quantized Large Language Models
- [ArXiv] FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algorithm-System Co-Design
- [ArXiv
- [ArXiv] EdgeQAT: Entropy and Distribution Guided Quantization-Aware Training for the Acceleration of Lightweight LLMs on the Edge [[code](https://github.com/shawnricecake/EdgeQAT)]
- [ArXiv] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache [[code](https://github.com/jy-yuan/KIVI)]
- [ArXiv] QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks [[code](https://github.com/Cornell-RelaxML/quip-sharp)]
- [ArXiv] L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ
- [ArXiv] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs
- [ArXiv] LQER: Low-Rank Quantization Error Reconstruction for LLMs
- [ArXiv - Aware Dequantization
- [ArXiv] ApiQ: Finetuning of 2-Bit Quantized Large Language Model
- [ArXiv] BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation [[code](https://github.com/DD-DuDa/BitDistiller)]
- [ArXiv] OneBit: Towards Extremely Low-bit Large Language Models
- [ArXiv
- [ArXiv] GPTVQ: The Blessing of Dimensionality for LLM Quantization [[code](https://github.com/qualcomm-ai-research/gptvq)]
- [DAC] APTQ: Attention-aware Post-Training Mixed-Precision Quantization for Large Language Models
- [DAC
- [ArXiv - Aware Mixed Precision Quantization
- [ArXiv
- [ArXiv - bound for Large Language Models with Per-tensor Quantization
- [ArXiv
- [ArXiv] LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization
- [ArXiv
- [ArXiv - free Quantization Algorithm for LLMs
- [ArXiv - KVCacheQuantization)] 
- [ArXiv] GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM
- [ArXiv
- [ArXiv] SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression [[code](https://github.com/AIoT-MLSys-Lab/SVD-LLM)]
- [ICLR
- [ICLR Practical ML for Low Resource Settings Workshop
- [ArXiv
- [ArXiv] QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs [[code](https://github.com/spcl/QuaRot)]
- [ArXiv - compensation)] 
- [ArXiv
- [ArXiv] BitDelta: Your Fine-Tune May Only Be Worth One Bit [[code](https://github.com/FasterDecoding/BitDelta)]
- [AAAI EIW Workshop 2024 - Rank Adaptation for Efficient Large Language Model Tuning
- [ArXiv
- [ArXiv
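Many entries in this and the previous section build on LoRA-style adapters (QA-LoRA, LQ-LoRA, IR-QLoRA, LongLoRA, L4Q, and others). The core idea, a frozen pre-trained weight plus a trainable low-rank update, fits in a few lines. This NumPy sketch uses illustrative names and is not drawn from any listed repository:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha: float = 16.0):
    """y = x @ (W + (alpha/r) * A @ B): frozen base weight W plus a
    low-rank update factored into A (down) and B (up)."""
    r = A.shape[1]
    return x @ W + (alpha / r) * (x @ A) @ B

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 32, 4
W = rng.standard_normal((d_in, d_out))     # frozen pre-trained weight
A = rng.standard_normal((d_in, r)) * 0.01  # trainable down-projection
B = np.zeros((r, d_out))                   # trainable up-projection, zero-init
x = rng.standard_normal((2, d_in))

# With B zero-initialized, the adapter starts as an exact no-op.
assert np.allclose(lora_forward(x, W, A, B), x @ W)
```

With rank r = 4, the adapter trains d_in·r + r·d_out = 192 parameters instead of the 512 in W; quantization-aware variants in the list above additionally keep W itself in low precision while only the adapter stays in floating point.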
2022
- [NeurIPS] LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale
- [NeurIPS] Towards Efficient Post-training Quantization of Pre-trained Language Models
- [NeurIPS] ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
- [NeurIPS] BiT: Robustly Binarized Multi-distilled Transformer [[code](https://github.com/facebookresearch/bit)]
- [ICLR
- [ArXiv
- [ArXiv] In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models
- [ACL] Petals: Collaborative Inference and Fine-tuning of Large Models [[code](https://petals.ml/)]
- [ICML
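Several of the 2022 and 2023 entries are knowledge-distillation methods (BiT's multi-distillation, in-context learning distillation, the CoT distillation line). They generally start from the classic soft-label objective of Hinton et al.: match the student's temperature-softened output distribution to the teacher's. A minimal NumPy sketch, illustrative rather than taken from any listed paper's code:

```python
import numpy as np

def softmax(z, T: float = 1.0):
    """Numerically stable softmax with temperature T."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL(teacher || student) over temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets (no gradient in practice)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()
```

The loss is zero when student and teacher logits coincide and positive otherwise; in practice it is mixed with the ordinary cross-entropy on hard labels.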
2021
2020
Survey
- [Arxiv
- [Arxiv
- [Arxiv
- [Arxiv] Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding [[code](https://github.com/hemingkx/Spec-Bench)] [[Blog](https://sites.google.com/view/spec-bench)]
- [Arxiv
- [Arxiv
- [Arxiv
- [Arxiv - LLM-Survey)] 
- [Arxiv] A Survey on Knowledge Distillation of Large Language Models [[code](https://github.com/Tebmer/Awesome-Knowledge-Distillation-of-LLMs)]
- [Arxiv] A Survey of Resource-efficient LLM and Multimodal Foundation Models [[code](https://github.com/UbiquitousLearning/Efficient_Foundation_Model_Survey)]
- [Arxiv] Beyond Efficiency: A Systematic Survey of Resource-Efficient Large Language Models [[code](https://github.com/tiingweii-shii/Awesome-Resource-Efficient-LLM-Papers)]
- [Arxiv
- [Arxiv] Efficient Large Language Models: A Survey [[code](https://github.com/AIoT-MLSys-Lab/Efficient-LLMs-Survey)]
- [Arxiv - LLM-Survey)] 
- [Arxiv
- [Arxiv
- [TACL] Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
- [JSA
- [Arxiv