Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Awesome-Pruning
A curated list of neural network pruning resources.
https://github.com/he-y/Awesome-Pruning
Last synced: 6 days ago
2023
- HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers - |
- MECTA: Memory-Economic Continual Test-Time Model Adaptation - |
- DepthFL : Depthwise Federated Learning for Heterogeneous Clients - |
- Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph - Group/ramanujan-on-pai)(Releasing) |
- Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask? - |
- OTOv2: Automatic, Generic, User-Friendly
- Bit-Pruning: A Sparse Multiplication-Less Dot-Product
- Over-parameterized Model Optimization with Polyak-Lojasiewicz Condition - |
- NTK-SAP: Improving neural network pruning by aligning training dynamics - |
- A Unified Framework for Soft Threshold Pruning - Chen/LATS) |
- CrAM: A Compression-Aware Minimizer - |
- Trainability Preserving Neural Pruning - |
- DFPC: Data flow driven pruning of coupled channels without data - m) |
- TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning
- Pruning Deep Neural Networks from a Sparsity Perspective - Deep-Neural-Networks-from-a-Sparsity-Perspective) |
- Holistic Adversarially Robust Pruning - |
- How I Learned to Stop Worrying and Love Retraining - IOL/BIMP) |
- Symmetric Pruning in Quantum Neural Networks - |
- Rethinking Graph Lottery Tickets: Graph Sparsity Matters - |
- Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks - |
- Searching Lottery Tickets in Graph Neural Networks: A Dual Perspective - |
- Diffusion Models for Causal Discovery via Topological Ordering - |
- A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis - |
- Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! - |
- Minimum Variance Unbiased N:M Sparsity for the Neural Gradients - |
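Several of the 2023 entries above, e.g. "Minimum Variance Unbiased N:M Sparsity for the Neural Gradients", deal with N:M structured sparsity, where only the N largest-magnitude weights are kept in every group of M consecutive weights. The following is a minimal PyTorch sketch of building such a mask, written here for illustration rather than taken from any listed repository:

```python
import torch

def nm_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Build an N:M sparsity mask: keep the n largest-magnitude weights in
    every group of m consecutive weights along the last dimension (e.g. 2:4)."""
    assert weight.shape[-1] % m == 0, "last dimension must be divisible by m"
    groups = weight.abs().reshape(-1, m)      # one row per group of m weights
    keep = groups.topk(n, dim=1).indices      # the n largest entries per group
    mask = torch.zeros_like(groups)
    mask.scatter_(1, keep, 1.0)
    return mask.reshape(weight.shape)

# Example: a 2:4 mask keeps exactly half of the weights.
w = torch.randn(8, 16)
print(nm_mask(w).mean().item())  # ~0.5
```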
2022
- Parameter-Efficient Masking Networks
- "Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach - Compression/Lossless_Compression) |
- Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing - EIC/S3-Router) |
- Models Out of Line: A Fourier Lens on Distribution Shift Robustness
- Robust Binary Models by Pruning Randomly-initialized Networks
- Rare Gems: Finding Lottery Tickets at Initialization
- Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning - DASLab/OBC) |
- Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation - Group/BackRazor_Neurips22) |
- Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective - |
- Sparse Winning Tickets are Data-Efficient Image Recognizers - Group/DataEfficientLTH) |
- Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks - |
- Weighted Mutual Learning with Diversity-Driven Model Compression - |
- SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance - |
- Data-Efficient Structured Pruning via Submodular Optimization
- Structural Pruning via Latency-Saliency Knapsack
- Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm - |
- Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions - |
- Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints - posada/constrained_sparsity) |
- Advancing Model Pruning via Bi-level Optimization - Group/BiP) |
- Emergence of Hierarchical Layers in a Single Sheet of Self-Organizing Spiking Neurons - |
- CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference
- Transform Once: Efficient Operator Learning in Frequency Domain
- Most Activation Functions Can Win the Lottery Without Excessive Depth - existence) |
- Pruning has a disparate impact on model accuracy - |
- Model Preserving Compression for Neural Networks - chee/ModelPreserveCompressionNN) |
- Prune Your Model Before Distill It - then-distill) |
- FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks - |
- FairGRAPE: Fairness-Aware GRAdient Pruning mEthod for Face Attribute Classification
- SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning - EIC/SuperTickets) |
- Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning
- CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution
- Soft Masking for Cost-Constrained Channel Pruning
- Filter Pruning via Feature Discrimination in Deep Neural Networks - |
- Disentangled Differentiable Network Pruning - |
- Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps - Ganjj/InterpretationsSteeredPruning) |
- Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning
- Multi-granularity Pruning for Model Acceleration on Mobile Devices - |
- Exploring Lottery Ticket Hypothesis in Spiking Neural Networks - Computing-Lab-Yale/Exploring-Lottery-Ticket-Hypothesis-in-SNNs) |
- Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning - |
- Recent Advances on Neural Network Pruning at Initialization - tse/smile-pruning) |
- FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server - |
- On the Channel Pruning using Graph Convolution Network for Convolutional Neural Network Acceleration - |
- Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization - |
- Neural Network Pruning by Cooperative Coevolution - |
- SPDY: Accurate Pruning with Speedup Guarantees - DASLab/spdy) |
- Sparse Double Descent: Where Network Pruning Aggravates Overfitting - double-descent) |
- The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness - Group/Linearity-Grafting) |
- Winning the Lottery Ahead of Time: Efficient Early Network Pruning - Cropression-via-Gradient-Flow-Preservation) |
- Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning - swapp/GNN-RL-Model-Compression) |
- Fast Lossless Neural Compression with Integer-Only Discrete Flows - ml/IODF) |
- DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
- PAC-Net: A Model Pruning Approach to Inductive Transfer Learning - |
- Neural Network Pruning Denoises the Features and Makes Local Connectivity Emerge in Visual Tasks
- Interspace Pruning: Using Adaptive Filter Representations To Improve Training of Sparse CNNs - |
- Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network - |
- When To Prune? A Policy Towards Early Structural Pruning - |
- Fire Together Wire Together: A Dynamic Pruning Approach With Self-Supervised Mask Prediction - |
- Revisiting Random Channel Pruning for Neural Network Compression
- Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning - |
- DECORE: Deep Compression With Reinforcement Learning - |
- CHEX: CHannel EXploration for CNN Model Compression - |
- Compressing Models With Few Samples: Mimicking Then Replacing
- Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning - |
- DiSparse: Disentangled Sparsification for Multitask Model Compression - Labs/DiSparse-Multitask-Model-Compression) |
- Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining - Group/SFW-Once-for-All-Pruning) |
- On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning - |
- An Operator Theoretic View On Pruning Deep Neural Networks - redman/Koopman_pruning) |
- Effective Model Sparsification by Scheduled Grow-and-Prune Methods
- Signing the Supermask: Keep, Hide, Invert - |
- How many degrees of freedom do we need to train deep networks: a loss landscape perspective - lab/degrees-of-freedom) |
- Dual Lottery Ticket Hypothesis
- Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently - Group/Peek-a-Boo) |
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training - Group/Sparsity-Win-Robust-Generalization) |
- SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning
- Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models
- Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions
- Plant 'n' Seek: Can You Find the Winning Ticket?
- Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks
- On the Existence of Universal Lottery Tickets
- Training Structured Neural Networks Through Manifold Identification and Variance Reduction
- Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning - Tse/SRP) |
- Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients - ad/prospr) |
- The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training - Group/Random_Pruning) |
- Prune and Tune Ensembles: Low-Cost Ensemble Learning with Sparse Independent Subnetworks - |
- Prior Gradient Mask Guided Pruning-Aware Fine-Tuning - |
- Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition - |
- Pruning’s Effect on Generalization Through the Lens of Training and Regularization - |
2021
- Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory - |
- The Elastic Lottery Ticket Hypothesis - Group/ElasticLTH) |
- Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? - check-LTH) |
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks - |
- Layer-adaptive Sparsity for the Magnitude-based Pruning - lee/layer-adaptive-sparsity) |
- AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks - DASLab/ACDC) |
- A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness
- Rethinking the Pruning Criteria for Convolutional Neural Network - |
- Only Train Once: A One-Shot Neural Network Training And Pruning Framework
- CHIP: CHannel Independence-based Pruning for Compact Neural Networks
- RED : Looking for Redundancies for Data-Free Structured Compression of Deep Neural Networks - |
- Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
- Sparse Flows: Pruning Continuous-depth Models
- Scaling Up Exact Neural Network Compression by ReLU Stability
- Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme
- Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks
- ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting
- Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search - |
- GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization - |
- Auto Graph Encoder-Decoder for Neural Network Pruning - |
- Sub-Bit Neural Networks: Learning To Compress and Accelerate Binary Neural Networks
- On the Predictability of Pruning Across Scales - |
- A Probabilistic Approach to Neural Network Pruning - |
- Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework - |
- Group Fisher Pruning for Practical Network Compression
- Towards Compact CNNs via Collaborative Compression - Compact-CNNs-via-Collaborative-Compression) |
- Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks - research/permute-quantize-finetune) |
- NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration - |
- Network Pruning via Performance Maximization - |
- Convolutional Neural Network Pruning with Structural Redundancy Reduction - |
- Manifold Regularized Dynamic Network Pruning - |
- Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation - |
- Content-Aware GAN Compression - aware-gan-compression) |
- Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
- Pruning Neural Networks at Initialization: Why Are We Missing the Mark? - |
- Robust Pruning at Initialization - |
- A Gradient Flow Framework For Analyzing Network Pruning
- Neural Pruning via Growing Regularization - Tse/Regularization-Pruning) |
- ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations
- Network Pruning That Matters: A Case Study on Retraining Variants
- Exploration and Estimation for Model Compression - |
- Pruning Randomly Initialized Neural Networks with Iterative Randomization - ntt/iterand) |
2020
- Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient - |
- Winning the Lottery with Continuous Sparsification - sparsification) |
- HYDRA: Pruning Adversarially Robust Neural Networks - group/hydra) |
- Logarithmic Pruning is All You Need - |
- Directional Pruning of Deep Neural Networks - |
- Movement Pruning: Adaptive Sparsity by Fine-Tuning
- Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot - checking-pruning) |
- Neuron Merging: Compensating for Pruned Neurons - merging) |
- Neuron-level Structured Pruning using Polarization Regularizer
- SCOP: Scientific Control for Reliable Neural Network Pruning
- Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning - |
- The Generalization-Stability Tradeoff In Neural Network Pruning
- Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough - |
- Pruning Filter in Filter - Filter-in-Filter) |
- Position-based Scaled Gradient for Model Quantization and Pruning - Kim/PSG-pytorch) |
- Bayesian Bits: Unifying Quantization and Pruning - |
- Pruning neural networks without any data by iteratively conserving synaptic flow - lab/Synaptic-Flow) |
- Meta-Learning with Network Pruning - |
- Accelerating CNN Training by Pruning Activation Gradients - |
- EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning
- DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation - |
- DHP: Differentiable Meta Pruning via HyperNetworks
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search - |
- Differentiable Joint Pruning and Quantization for Hardware Efficiency - |
- Channel Pruning via Automatic Structure Search
- Adversarial Neural Pruning with Latent Vulnerability Suppression - |
- Proving the Lottery Ticket Hypothesis: Pruning is All You Need - |
- Network Pruning by Greedy Subnetwork Selection - |
- Operation-Aware Soft Channel Pruning using Differentiable Masks - |
- DropNet: Reducing Neural Network Complexity via Iterative Pruning - |
- Soft Threshold Weight Reparameterization for Learnable Sparsity
- Structured Compression by Weight Encryption for Unstructured Pruning and Quantization - |
- Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-Based Approach - |
- Towards Efficient Model Compression via Learned Global Ranking - enyac/LeGR) |
- HRank: Filter Pruning using High-Rank Feature Map
- Neural Network Pruning with Residual-Connections and Limited-Data - |
- DMCP: Differentiable Markov Channel Pruning for Neural Networks
- Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression
- Few Sample Knowledge Distillation for Efficient Network Compression - |
- Discrete Model Compression With Resource Constraint for Deep Neural Networks - |
- Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration - |
- APQ: Joint Search for Network Architecture, Pruning and Quantization Policy - |
- Multi-Dimensional Pruning: A Unified Framework for Model Compression - |
- A Signal Propagation Perspective for Pruning Neural Networks at Initialization - |
- ProxSGD: Training Structured Neural Networks under Regularization and Constraints
- One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation - |
- Lookahead: A Far-sighted Alternative of Magnitude-based Pruning
- Data-Independent Neural Pruning via Coresets - |
- Provable Filter Pruning for Efficient Neural Networks - |
- Dynamic Model Pruning with Feedback - |
- Comparing Rewinding and Fine-tuning in Neural Network Pruning - ticket/rewinding-iclr20-public) |
- AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates - |
- Reborn filters: Pruning convolutional neural networks with limited data - |
- DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks - |
- Pruning from Scratch - |
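Many entries in this and the neighbouring sections, e.g. "Comparing Rewinding and Fine-tuning in Neural Network Pruning" above and the lottery-ticket papers under 2019, revolve around iterative magnitude pruning with weight rewinding. The loop below is only a rough sketch of that recipe, not any listed paper's code; `train_fn` is an assumed placeholder that trains the model while keeping masked weights at zero:

```python
import copy
import torch

def iterative_magnitude_prune(model, train_fn, rounds=3, prune_frac=0.2):
    """Train, globally prune the smallest-magnitude surviving weights,
    rewind the survivors to their initial values, and repeat."""
    init_state = copy.deepcopy(model.state_dict())             # weights to rewind to
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}                                   # prune weight tensors only

    for _ in range(rounds):
        train_fn(model, masks)                                 # assumed to respect the masks

        # Global magnitude threshold over the weights that are still alive.
        alive = torch.cat([p[masks[n].bool()].abs().flatten()
                           for n, p in model.named_parameters() if n in masks])
        threshold = alive.sort().values[int(prune_frac * alive.numel())]

        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    masks[n] *= (p.abs() > threshold).float()  # drop the smallest weights
                    p.copy_(init_state[n] * masks[n])          # rewind the survivors
    return masks
```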
2019
- Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask - research/deconstructing-lottery-tickets) |
- One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers - |
- Global Sparse Momentum SGD for Pruning Very Deep Neural Networks - SGD) |
- AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters - |
- Network Pruning via Transformable Architecture Search - X-Y/NAS-Projects) |
- Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks - decorator-pruning) |
- Model Compression with Adversarial Robustness: A Unified Optimization Framework - VITA/ATMC) |
- Adversarial Robustness vs Model Compression, or Both? - Aware-Pruning-ADMM) |
- MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning
- Accelerate CNN via Recursive Bayesian Pruning - |
- Learning Filter Basis for Convolutional Neural Network Compression - |
- Co-Evolutionary Compression for Unpaired Image Translation - |
- COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning
- Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration - y/filter-pruning-geometric-median) |
- Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
- Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure - SGD) |
- On Implicit Filter Level Sparsity in Convolutional Neural Networks - Pytorch) |
- Structured Pruning of Neural Networks with Budget-Aware Regularization - |
- Importance Estimation for Neural Network Pruning
- OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks - |
- Variational Convolutional Neural Network Pruning - |
- Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search - Order-Pruning) |
- Collaborative Channel Pruning for Deep Networks - |
- Approximated Oracle Filter Pruning for Destructive CNN Width Optimization - |
- EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis - Pytorch) |
- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks - research/lottery-ticket-hypothesis) |
- SNIP: Single-shot Network Pruning based on Connection Sensitivity - public) |
- Dynamic Channel Pruning: Feature Boosting and Suppression - fry/mayo) |
- Rethinking the Value of Network Pruning - mingjie/rethinking-network-pruning) |
- Dynamic Sparse Graph for Efficient Deep Learning - sparse-graph) |
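The 2019 section includes "SNIP: Single-shot Network Pruning based on Connection Sensitivity", which scores every weight by |weight * gradient| on a single mini-batch at initialization and keeps only the top-scoring connections. Below is a minimal sketch of that idea, assuming placeholder `model`, `loss_fn`, and `batch` objects; it is not the authors' implementation:

```python
import torch

def connection_sensitivity_masks(model, loss_fn, batch, keep_frac=0.1):
    """Single-shot saliency in the spirit of SNIP: score = |weight * gradient|
    on one batch at initialization, keep the globally top-scoring fraction."""
    inputs, targets = batch
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()             # one backward pass

    params = [p for p in model.parameters() if p.dim() > 1 and p.grad is not None]
    scores = [(p * p.grad).abs() for p in params]          # per-weight saliency
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(keep_frac * flat.numel()))
    threshold = flat.topk(k).values.min()                  # k-th largest score
    return [(s >= threshold).float() for s in scores]      # one mask per weight tensor
```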
2018
- Frequency-Domain Dynamic Pruning for Convolutional Neural Networks - |
- Discrimination-aware Channel Pruning for Deep Neural Networks - AILab/DCP) |
- Learning Sparse Neural Networks via Sensitivity-Driven Regularization - |
- Constraint-Aware Deep Neural Network Compression
- A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers - pruning) |
- AMC: AutoML for Model Compression and Acceleration on Mobile Devices - pruning) |
- Data-Driven Sparse Structure Selection for Deep Neural Networks - structure-selection) |
- Coreset-Based Neural Network Compression - smiles/CNN_Compression) |
- Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks - y/soft-filter-pruning) |
- Accelerating Convolutional Networks via Global & Dynamic Filter Pruning - |
- Weightless: Lossy weight encoding for deep neural network compression - |
- Compressing Neural Networks using the Variational Information Bottleneck
- Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions - Group/Deep-K-Means-pytorch) |
- CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization - |
- “Learning-Compression” Algorithms for Neural Net Pruning - |
- PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning
- NISP: Pruning Networks using Neuron Importance Score Propagation - |
- To prune, or not to prune: exploring the efficacy of pruning for model compression - |
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers - willturner/batchnorm-pruning) |
2017
- Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee - Trim-v1) |
- Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon - OBS) |
- Runtime Neural Pruning - |
- Structured Bayesian Pruning via Log-Normal Multiplicative Noise - |
- Bayesian Compression for Deep Learning - |
- ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression - thinet) |
- Channel Pruning for Accelerating Very Deep Neural Networks - he/channel-pruning) |
- Learning Efficient Convolutional Networks Through Network Slimming - mingjie/network-slimming) |
- Variational Dropout Sparsifies Deep Neural Networks - |
- Combined Group and Exclusive Sparsity for Deep Neural Networks - |
- Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning - |
- Pruning Filters for Efficient ConvNets - mingjie/rethinking-network-pruning/tree/master/imagenet/l1-norm-pruning) |
- Pruning Convolutional Neural Networks for Resource Efficient Inference - pruning) |
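Several 2017 entries, notably "Pruning Filters for Efficient ConvNets" and "ThiNet", concern structured, filter-level pruning. The toy function below illustrates only the ranking step: it scores the output filters of a single Conv2d by L1 norm and zeroes the lowest-ranked ones, whereas a real method would also remove the filters and shrink the next layer's input channels:

```python
import torch
import torch.nn as nn

def zero_smallest_l1_filters(conv: nn.Conv2d, prune_ratio: float = 0.25):
    """Zero the output filters of a Conv2d with the smallest L1 norms.
    Sketch only: real structured pruning removes the filters and repairs
    the shapes of downstream layers instead of just zeroing weights."""
    with torch.no_grad():
        l1 = conv.weight.abs().sum(dim=(1, 2, 3))          # one L1 norm per output filter
        n_prune = max(1, int(prune_ratio * conv.out_channels))
        drop = torch.topk(l1, n_prune, largest=False).indices
        conv.weight[drop] = 0.0
        if conv.bias is not None:
            conv.bias[drop] = 0.0
    keep = torch.ones(conv.out_channels, dtype=torch.bool)
    keep[drop] = False
    return keep                                            # mask of surviving filters

layer = nn.Conv2d(3, 16, kernel_size=3)
print(int(zero_smallest_l1_filters(layer).sum()), "of", layer.out_channels, "filters kept")
```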
2016
- Dynamic Network Surgery for Efficient DNNs - Network-Surgery) |
- Learning the Number of Neurons in Deep Networks - |
- Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding - Compression-AlexNet) |
2015
- Learning both Weights and Connections for Efficient Neural Networks - willturner/DeepCompression-PyTorch) |
Related Repo