# Awesome Pruning [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)

A curated list of neural network pruning and related resources. Inspired by [awesome-deep-vision](https://github.com/kjw0612/awesome-deep-vision), [awesome-adversarial-machine-learning](https://github.com/yenchenlin/awesome-adversarial-machine-learning), [awesome-deep-learning-papers](https://github.com/terryum/awesome-deep-learning-papers) and [Awesome-NAS](https://github.com/D-X-Y/Awesome-NAS).

Please feel free to [submit a pull request](https://github.com/he-y/awesome-Pruning/pulls) or [open an issue](https://github.com/he-y/awesome-Pruning/issues) to add papers.

## Table of Contents

- [Type of Pruning](#type-of-pruning)

- [A Survey of Structured Pruning](#a-survey-of-structured-pruning-arxiv-version-and-ieee-t-pami-version)

- [2023 Venues](#2023)

- [2022 Venues](#2022)

- [2021 Venues](#2021)

- [2020 Venues](#2020)

- [2019 Venues](#2019)

- [2018 Venues](#2018)

- [2017 Venues](#2017)

- [2016 Venues](#2016)

- [2015 Venues](#2015)

### Type of Pruning

| Type | `F` | `W` | `S` | `Other` |
|:----------- |:--------------:|:--------------:|:----------------:|:-----------:|
| Explanation | Filter pruning | Weight pruning | Special networks | Other types |

Combined tags (e.g., `WF`) mark papers that span more than one type.
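
The two dominant types differ mainly in granularity: weight pruning (`W`) zeroes individual connections, while filter pruning (`F`) removes whole output channels and thus shrinks the architecture itself. Below is a minimal sketch of both, using PyTorch's built-in `torch.nn.utils.prune` utilities purely for illustration (not the method of any particular paper in this list):

```
# Minimal sketch (assumes PyTorch >= 1.4): contrast `W` (unstructured
# weight pruning) with `F` (structured filter pruning) on a Conv2d layer.
import torch.nn as nn
import torch.nn.utils.prune as prune

conv_w = nn.Conv2d(16, 32, kernel_size=3)
# `W`: zero the 50% of individual weights with the smallest L1 magnitude.
prune.l1_unstructured(conv_w, name="weight", amount=0.5)

conv_f = nn.Conv2d(16, 32, kernel_size=3)
# `F`: zero entire filters (output channels, dim=0), ranked by L2 norm.
prune.ln_structured(conv_f, name="weight", amount=0.5, n=2, dim=0)

# Both attach a binary mask and reparameterize `weight`; fold the mask in
# permanently once pruning is final:
prune.remove(conv_w, "weight")
prune.remove(conv_f, "weight")
```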

### A Survey of Structured Pruning ([arXiv version](https://arxiv.org/abs/2303.00566) and [IEEE T-PAMI version](https://ieeexplore.ieee.org/document/10330640))

Please cite our paper if it's helpful:
```
@article{he2024structured,
  author={He, Yang and Xiao, Lingao},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Structured Pruning for Deep Convolutional Neural Networks: A Survey},
  year={2024},
  volume={46},
  number={5},
  pages={2900-2919},
  doi={10.1109/TPAMI.2023.3334614}
}
```

The related papers are categorized as follows:
![Structured Pruning Taxonomy](./Structured_Taxonomy.png)
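
Many of the surveyed criteria reduce to ranking filters by an importance score and dropping the lowest-ranked ones. As a concrete illustration of the simplest norm-based family (cf. *Pruning Filters for Efficient ConvNets*, ICLR 2017, in the tables below), here is a hedged sketch in PyTorch; the helper `filters_to_prune` is hypothetical, not code from the survey:

```
# Illustrative sketch of a norm-based filter-importance criterion: score
# each filter by its L1 norm and mark the lowest-scoring fraction.
import torch
import torch.nn as nn

def filters_to_prune(conv: nn.Conv2d, ratio: float = 0.5) -> torch.Tensor:
    """Return indices of the output filters with the smallest L1 norm."""
    # conv.weight has shape (out_channels, in_channels, kH, kW);
    # summing |w| over dims 1-3 yields one score per filter.
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    k = int(ratio * norms.numel())
    return torch.argsort(norms)[:k]  # smallest-norm filters first

conv = nn.Conv2d(16, 32, kernel_size=3)
print(filters_to_prune(conv, ratio=0.25))  # e.g. 8 filter indices to drop
```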

### 2023
| Title | Venue | Type | Code |
|:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:|
| [Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph](https://openreview.net/forum?id=uVcDssQff_) | ICLR | `W` | [PyTorch(Author)](https://github.com/VITA-Group/ramanujan-on-pai)(Releasing) |
| [Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?](https://openreview.net/forum?id=xSsW2Am-ukZ) | ICLR | `W` | - |
| [Bit-Pruning: A Sparse Multiplication-Less Dot-Product](https://openreview.net/forum?id=YUDiZcZTI8) | ICLR | `W` | [Code Deleted](https://github.com/DensoITLab/bitprune) |
| [NTK-SAP: Improving neural network pruning by aligning training dynamics](https://openreview.net/forum?id=-5EWhW_4qWP) | ICLR | `W` | - |
| [A Unified Framework for Soft Threshold Pruning](https://openreview.net/forum?id=cCFqcrq0d8) | ICLR | `W` | [PyTorch(Author)](https://github.com/Yanqi-Chen/LATS) |
| [CrAM: A Compression-Aware Minimizer](https://openreview.net/forum?id=_eTZBs-yedr) | ICLR | `W` | - |
| [Trainability Preserving Neural Pruning](https://openreview.net/forum?id=AZFvpnnewr) | ICLR | `F` | - |
| [DFPC: Data flow driven pruning of coupled channels without data](https://openreview.net/forum?id=mhnHqRqcjYU) | ICLR | `F` | [PyTorch(Author)](https://drive.google.com/drive/folders/18eRYzWnB_6Qq0cYiSzvyOgicqn50g3-m) |
| [TVSPrune - Pruning Non-discriminative filters via Total Variation separability of intermediate representations without fine tuning](https://openreview.net/forum?id=sZI1Oj9KBKy) | ICLR | `F` | [PyTorch(Author)](https://github.com/tvsprune/TVS_Prune) |
| [HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers](https://openreview.net/forum?id=D7srTrGhAs) | ICLR | `F` | - |
| [MECTA: Memory-Economic Continual Test-Time Model Adaptation](https://openreview.net/forum?id=N92hjSf5NNh) | ICLR | `F` | - |
| [DepthFL : Depthwise Federated Learning for Heterogeneous Clients](https://openreview.net/forum?id=pf8RIZTMU58) | ICLR | `F` | - |
| [OTOv2: Automatic, Generic, User-Friendly](https://openreview.net/forum?id=7ynoX1ojPMt) | ICLR | `F` | [PyTorch(Author)](https://github.com/tianyic/only_train_once) |
| [Over-parameterized Model Optimization with Polyak-Lojasiewicz Condition](https://openreview.net/forum?id=aBIpZvMdS56) | ICLR | `F` | - |
| [Pruning Deep Neural Networks from a Sparsity Perspective](https://openreview.net/forum?id=i-DleYh34BM) | ICLR | `WF` | [PyTorch(Author)](https://github.com/dem123456789/Pruning-Deep-Neural-Networks-from-a-Sparsity-Perspective) |
| [Holistic Adversarially Robust Pruning](https://openreview.net/forum?id=sAJDi9lD06L) | ICLR | `WF` | - |
| [How I Learned to Stop Worrying and Love Retraining](https://openreview.net/forum?id=_nF5imFKQI) | ICLR | `WF` | [PyTorch(Author)](https://github.com/ZIB-IOL/BIMP) |
| [Symmetric Pruning in Quantum Neural Networks](https://openreview.net/forum?id=K96AogLDT2K) | ICLR | `S` | - |
| [Rethinking Graph Lottery Tickets: Graph Sparsity Matters](https://openreview.net/forum?id=fjh7UGQgOB) | ICLR | `S` | - |
| [Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks](https://openreview.net/forum?id=4UldFtZ_CVF) | ICLR | `S` | - |
| [Searching Lottery Tickets in Graph Neural Networks: A Dual Perspective](https://openreview.net/forum?id=Dvs-a3aymPe) | ICLR | `S` | - |
| [Diffusion Models for Causal Discovery via Topological Ordering](https://openreview.net/forum?id=Idusfje4-Wq) | ICLR | `S` | - |
| [A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis](https://openreview.net/forum?id=vVJZtlZB9D) | ICLR | `Other` | - |
| [Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!](https://openreview.net/forum?id=J6F3lLg4Kdp) | ICLR | `Other` | - |
| [Minimum Variance Unbiased N:M Sparsity for the Neural Gradients](https://openreview.net/forum?id=vuD2xEtxZcj) | ICLR | `Other` | - |

### 2022
| Title | Venue | Type | Code |
|:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:|
| [Parameter-Efficient Masking Networks](https://openreview.net/forum?id=7rcuQ_V2GFg) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/yueb17/PEMN) |
| ["Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach](https://openreview.net/forum?id=NaW6T93F34m) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/Model-Compression/Lossless_Compression) |
| [Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing](https://openreview.net/forum?id=2EUJ4e6H4OX) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/GATECH-EIC/S3-Router) |
| [Models Out of Line: A Fourier Lens on Distribution Shift Robustness](https://openreview.net/forum?id=YZ-N-sejjwO) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/sarafridov/RobustNets) |
| [Robust Binary Models by Pruning Randomly-initialized Networks](https://openreview.net/forum?id=5g-h_DILemH) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/IVRL/RobustBinarySubNet) |
| [Rare Gems: Finding Lottery Tickets at Initialization](https://openreview.net/forum?id=Jpxd93u2vK-) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/ksreenivasan/pruning_is_enough) |
| [Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning](https://openreview.net/forum?id=ksVGCOlOEba) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/IST-DASLab/OBC) |
| [Pruning’s Effect on Generalization Through the Lens of Training and Regularization](https://openreview.net/forum?id=OrcLKV9sKWp) | NeurIPS | `W` | - |
| [Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation](https://openreview.net/forum?id=mTXQIpXPDbh) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/BackRazor_Neurips22) |
| [Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective](https://openreview.net/forum?id=fbUybomIuE) | NeurIPS | `W` | - |
| [Sparse Winning Tickets are Data-Efficient Image Recognizers](https://openreview.net/forum?id=wfKbtSjHA6F) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/DataEfficientLTH) |
| [Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks](https://openreview.net/forum?id=QLPzCpu756J) | NeurIPS | `W` | - |
| [Weighted Mutual Learning with Diversity-Driven Model Compression](https://openreview.net/forum?id=UQJoGBNRX4) | NeurIPS | `F` | - |
| [SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance](https://openreview.net/forum?id=oQIJsMlyaW_) | NeurIPS | `F` | - |
| [Data-Efficient Structured Pruning via Submodular Optimization](https://openreview.net/forum?id=K2QGzyLwpYG) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/marwash25/subpruning) |
| [Structural Pruning via Latency-Saliency Knapsack](https://openreview.net/forum?id=cUOR-_VsavA) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/NVlabs/HALP) |
| [Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm](https://openreview.net/forum?id=5hgYi4r5MDp) | NeurIPS | `WF` | - |
| [Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions](https://openreview.net/forum?id=btpIaJiRx6z) | NeurIPS | `WF` | - |
| [Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints](https://openreview.net/forum?id=XUvSYc6TqDF) | NeurIPS | `WF` | [PyTorch(Author)](https://github.com/gallego-posada/constrained_sparsity) |
| [Advancing Model Pruning via Bi-level Optimization](https://openreview.net/forum?id=t6O08FxvtBY) | NeurIPS | `WF` | [PyTorch(Author)](https://github.com/OPTML-Group/BiP) |
| [Emergence of Hierarchical Layers in a Single Sheet of Self-Organizing Spiking Neurons](https://openreview.net/forum?id=cPVuuk1lZb3) | NeurIPS | `S` | - |
| [CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference](https://openreview.net/forum?id=VeQBBm1MmTZ) | NeurIPS | `S` | [PyTorch(Author)](https://github.com/ranran0523/CryptoGCN)(Releasing) |
| [Transform Once: Efficient Operator Learning in Frequency Domain](https://openreview.net/forum?id=B2PpZyAAEgV) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/DiffEqML/kairos)(Releasing) |
| [Most Activation Functions Can Win the Lottery Without Excessive Depth](https://openreview.net/forum?id=NySDKS9SxN) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/RelationalML/LT-existence) |
| [Pruning has a disparate impact on model accuracy](https://openreview.net/forum?id=11nMVZK0WYM) | NeurIPS | `Other` | - |
| [Model Preserving Compression for Neural Networks](https://openreview.net/forum?id=gt-l9Hu2ndd) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/jerry-chee/ModelPreserveCompressionNN) |
| [Prune Your Model Before Distill It](https://link.springer.com/10.1007/978-3-031-20083-0_8) | ECCV | `W` | [PyTorch(Author)](https://github.com/ososos888/prune-then-distill) |
| [FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks](https://link.springer.com/10.1007/978-3-031-19775-8_5) | ECCV | `W` | - |
| [FairGRAPE: Fairness-Aware GRAdient Pruning mEthod for Face Attribute Classification](https://link.springer.com/10.1007/978-3-031-19778-9_24) | ECCV | `F` | [PyTorch(Author)](https://github.com/Bernardo1998/FairGRAPE) |
| [SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_40) | ECCV | `F` | [PyTorch(Author)](https://github.com/GATECH-EIC/SuperTickets) |
| [Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_34) | ECCV | `F` | [PyTorch(Author)](https://github.com/sseung0703/EKG) |
| [CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution](https://link.springer.com/10.1007/978-3-031-20044-1_37) | ECCV | `F` | [PyTorch(Author)](https://github.com/taehokim20/CPrune) |
| [Soft Masking for Cost-Constrained Channel Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_38) | ECCV | `F` | [PyTorch(Author)](https://github.com/NVlabs/SMCP) |
| [Filter Pruning via Feature Discrimination in Deep Neural Networks](https://link.springer.com/10.1007/978-3-031-19803-8_15) | ECCV | `F` | - |
| [Disentangled Differentiable Network Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_20) | ECCV | `F` | - |
| [Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps](https://link.springer.com/10.1007/978-3-031-19803-8_17) | ECCV | `F` | [PyTorch(Author)](https://github.com/Alii-Ganjj/InterpretationsSteeredPruning) |
| [Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning](https://link.springer.com/10.1007/978-3-031-20050-2_29) | ECCV | `F` | [PyTorch(Author)](https://github.com/fanhanwei/BOCR) |
| [Multi-granularity Pruning for Model Acceleration on Mobile Devices](https://link.springer.com/10.1007/978-3-031-20083-0_29) | ECCV | `WF` | - |
| [Exploring Lottery Ticket Hypothesis in Spiking Neural Networks](https://link.springer.com/10.1007/978-3-031-19775-8_7) | ECCV | `S` | [PyTorch(Author)](https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Lottery-Ticket-Hypothesis-in-SNNs) |
| [Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning](https://link.springer.com/10.1007/978-3-031-20083-0_42) | ECCV | `S` | - |
| [Recent Advances on Neural Network Pruning at Initialization](https://www.ijcai.org/proceedings/2022/786) | IJCAI | `W` | [PyTorch(Author)](https://github.com/mingsun-tse/smile-pruning) |
| [FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server](https://www.ijcai.org/proceedings/2022/385) | IJCAI | `F` | - |
| [On the Channel Pruning using Graph Convolution Network for Convolutional Neural Network Acceleration](https://www.ijcai.org/proceedings/2022/431) | IJCAI | `F` | - |
| [Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization](https://www.ijcai.org/proceedings/2022/449) | IJCAI | `F` | - |
| [Neural Network Pruning by Cooperative Coevolution](https://www.ijcai.org/proceedings/2022/667) | IJCAI | `F` | - |
| [SPDY: Accurate Pruning with Speedup Guarantees](https://proceedings.mlr.press/v162/frantar22a.html) | ICML | `W` | [PyTorch(Author)](https://github.com/IST-DASLab/spdy) |
| [Sparse Double Descent: Where Network Pruning Aggravates Overfitting](https://proceedings.mlr.press/v162/he22d.html) | ICML | `W` | [PyTorch(Author)](https://github.com/hezheug/sparse-double-descent) |
| [The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks](https://proceedings.mlr.press/v162/yu22f.html) | ICML | `W` | [PyTorch(Author)](https://github.com/yuxwind/CBS) |
| [Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness](https://proceedings.mlr.press/v162/chen22af.html) | ICML | `F` | [PyTorch(Author)](https://github.com/VITA-Group/Linearity-Grafting) |
| [Winning the Lottery Ahead of Time: Efficient Early Network Pruning](https://proceedings.mlr.press/v162/rachwan22a.html) | ICML | `F` | [PyTorch(Author)](https://github.com/johnrachwan123/Early-Cropression-via-Gradient-Flow-Preservation) |
| [Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning](https://proceedings.mlr.press/v162/yu22e.html) | ICML | `F` | [PyTorch(Author)](https://github.com/yusx-swapp/GNN-RL-Model-Compression) |
| [Fast Lossless Neural Compression with Integer-Only Discrete Flows](https://proceedings.mlr.press/v162/wang22a.html) | ICML | `F` | [PyTorch(Author)](https://github.com/thu-ml/IODF) |
| [DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks](https://proceedings.mlr.press/v162/fu22c.html) | ICML | `Other` | [PyTorch(Author)](https://github.com/facebookresearch/DepthShrinker) |
| [PAC-Net: A Model Pruning Approach to Inductive Transfer Learning](https://proceedings.mlr.press/v162/myung22a.html) | ICML | `Other` | - |
| [Neural Network Pruning Denoises the Features and Makes Local Connectivity Emerge in Visual Tasks](https://proceedings.mlr.press/v162/pellegrini22a.html) | ICML | `Other` | [PyTorch(Author)](https://github.com/phiandark/SiftingFeatures) |
| [Interspace Pruning: Using Adaptive Filter Representations To Improve Training of Sparse CNNs](https://openaccess.thecvf.com/content/CVPR2022/html/Wimmer_Interspace_Pruning_Using_Adaptive_Filter_Representations_To_Improve_Training_of_CVPR_2022_paper.html) | CVPR | `W` | - |
| [Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network](https://openaccess.thecvf.com/content/CVPR2022/html/Lee_Masking_Adversarial_Damage_Finding_Adversarial_Saliency_for_Robust_and_Sparse_CVPR_2022_paper.html) | CVPR | `W` | - |
| [When To Prune? A Policy Towards Early Structural Pruning](https://openaccess.thecvf.com/content/CVPR2022/html/Shen_When_To_Prune_A_Policy_Towards_Early_Structural_Pruning_CVPR_2022_paper.html) | CVPR | `F` | - |
| [Fire Together Wire Together: A Dynamic Pruning Approach With Self-Supervised Mask Prediction](https://openaccess.thecvf.com/content/CVPR2022/html/Elkerdawy_Fire_Together_Wire_Together_A_Dynamic_Pruning_Approach_With_Self-Supervised_CVPR_2022_paper.html) | CVPR | `F` | - |
| [Revisiting Random Channel Pruning for Neural Network Compression](https://openaccess.thecvf.com/content/CVPR2022/html/Li_Revisiting_Random_Channel_Pruning_for_Neural_Network_Compression_CVPR_2022_paper.html) | CVPR | `F` | [PyTorch(Author)](https://github.com/ofsoundof/random_channel_pruning)(Releasing) |
| [Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning](https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Bayesian_Sparse_Networks_With_Full_Experience_Replay_for_Continual_CVPR_2022_paper.html) | CVPR | `F` | - |
| [DECORE: Deep Compression With Reinforcement Learning](https://openaccess.thecvf.com/content/CVPR2022/html/Alwani_DECORE_Deep_Compression_With_Reinforcement_Learning_CVPR_2022_paper.html) | CVPR | `F` | - |
| [CHEX: CHannel EXploration for CNN Model Compression](https://openaccess.thecvf.com/content/CVPR2022/html/Hou_CHEX_CHannel_EXploration_for_CNN_Model_Compression_CVPR_2022_paper.html) | CVPR | `F` | - |
| [Compressing Models With Few Samples: Mimicking Then Replacing](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Compressing_Models_With_Few_Samples_Mimicking_Then_Replacing_CVPR_2022_paper.html) | CVPR | `F` | [PyTorch(Author)](https://github.com/cjnjuwhy/MiR)(Releasing) |
| [Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning](https://openaccess.thecvf.com/content/CVPR2022/html/Meng_Contrastive_Dual_Gating_Learning_Sparse_Features_With_Contrastive_Learning_CVPR_2022_paper.html) | CVPR | `WF` | - |
| [DiSparse: Disentangled Sparsification for Multitask Model Compression](https://openaccess.thecvf.com/content/CVPR2022/html/Sun_DiSparse_Disentangled_Sparsification_for_Multitask_Model_Compression_CVPR_2022_paper.html) | CVPR | `Other` | [PyTorch(Author)](https://github.com/SHI-Labs/DiSparse-Multitask-Model-Compression) |
| [Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining](https://openreview.net/forum?id=O1DEtITim__) | ICLR **(Spotlight)** | `W` | [PyTorch(Author)](https://github.com/VITA-Group/SFW-Once-for-All-Pruning) |
| [On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning](https://openreview.net/forum?id=Fl3Mg_MZR-) | ICLR **(Spotlight)** | `W` | - |
| [An Operator Theoretic View On Pruning Deep Neural Networks](https://openreview.net/forum?id=pWBNOgdeURp) | ICLR | `W` | [PyTorch(Author)](https://github.com/william-redman/Koopman_pruning) |
| [Effective Model Sparsification by Scheduled Grow-and-Prune Methods](https://openreview.net/forum?id=xa6otUDdP2W) | ICLR | `W` | [PyTorch(Author)](https://github.com/boone891214/GaP) |
| [Signing the Supermask: Keep, Hide, Invert](https://openreview.net/forum?id=e0jtGTfPihs) | ICLR | `W` | - |
| [How many degrees of freedom do we need to train deep networks: a loss landscape perspective](https://openreview.net/forum?id=ChMLTGRjFcU) | ICLR | `W` | [PyTorch(Author)](https://github.com/ganguli-lab/degrees-of-freedom) |
| [Dual Lottery Ticket Hypothesis](https://openreview.net/forum?id=fOsN52jn25l) | ICLR | `W` | [PyTorch(Author)](https://github.com/yueb17/DLTH) |
| [Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently](https://openreview.net/forum?id=moHCzz6D5H3) | ICLR | `W` | [PyTorch(Author)](https://github.com/VITA-Group/Peek-a-Boo) |
| [Sparsity Winning Twice: Better Robust Generalization from More Efficient Training](https://openreview.net/forum?id=SYuJXrXq8tw) | ICLR | `W` | [PyTorch(Author)](https://github.com/VITA-Group/Sparsity-Win-Robust-Generalization) |
| [SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning](https://openreview.net/forum?id=t5EmXZ3ZLR) | ICLR **(Spotlight)** | `F` | [PyTorch(Author)](https://github.com/boschresearch/sosp)(Releasing) |
| [Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models](https://openreview.net/forum?id=Nfl-iXa-y7R) | ICLR **(Spotlight)** | `F` | [PyTorch(Author)](https://github.com/HazyResearch/pixelfly) |
| [Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions](https://openreview.net/forum?id=LdEhiMG9WLO) | ICLR | `F` | [PyTorch(Author)](https://github.com/choH/lottery_regulated_grouped_kernel_pruning) |
| [Plant 'n' Seek: Can You Find the Winning Ticket?](https://openreview.net/forum?id=9n9c8sf0xm) | ICLR | `F` | [PyTorch(Author)](http://www.github.com/RelationalML/PlantNSeek) |
| [Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks](https://openreview.net/forum?id=Vjki79-619-) | ICLR | `F` | [PyTorch(Author)](https://github.com/ArthurWalraven/cnnslth) |
| [On the Existence of Universal Lottery Tickets](https://openreview.net/forum?id=SYB4WrJql1n) | ICLR | `F` | [PyTorch(Author)](https://github.com/RelationalML/UniversalLT) |
| [Training Structured Neural Networks Through Manifold Identification and Variance Reduction](https://openreview.net/forum?id=mdUYT5QV0O) | ICLR | `F` | [PyTorch(Author)](https://www.github.com/zihsyuan1214/rmda) |
| [Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning](https://openreview.net/forum?id=AjGC97Aofee) | ICLR | `F` | [PyTorch(Author)](https://github.com/MingSun-Tse/SRP) |
| [Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients](https://openreview.net/forum?id=AIgn9uwfcD1) | ICLR | `WF` | [PyTorch(Author)](https://github.com/mil-ad/prospr) |
| [The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training](https://openreview.net/forum?id=VBZJ_3tz-t) | ICLR | `Other` | [PyTorch(Author)](https://github.com/VITA-Group/Random_Pruning) |
| [Prune and Tune Ensembles: Low-Cost Ensemble Learning with Sparse Independent Subnetworks](https://ojs.aaai.org/index.php/AAAI/article/view/20842) | AAAI | `W` | - |
| [Prior Gradient Mask Guided Pruning-Aware Fine-Tuning](https://ojs.aaai.org/index.php/AAAI/article/view/19888) | AAAI | `F` | - |
| [Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition](https://ojs.aaai.org/index.php/AAAI/article/view/19958) | AAAI | `Other` | - |

### 2021
| Title | Venue | Type | Code |
|:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:|
| [Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory](https://papers.nips.cc/paper/2021/hash/fdc42b6b0ee16a2f866281508ef56730-Abstract.html) | NeurIPS | `W` | - |
| [The Elastic Lottery Ticket Hypothesis](https://papers.nips.cc/paper/2021/hash/dfccdb8b1cc7e4dab6d33db0fef12b88-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/ElasticLTH) |
| [Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?](https://papers.nips.cc/paper/2021/hash/6a130f1dc6f0c829f874e92e5458dced-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/boone891214/sanity-check-LTH) |
| [Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks](https://papers.nips.cc/paper/2021/hash/15f99f2165aa8c86c9dface16fefd281-Abstract.html) | NeurIPS | `W` | - |
| [You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership](https://papers.nips.cc/paper/2021/hash/23e582ad8087f2c03a5a31c125123f9a-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/NO-stealing-LTH) |
| [Pruning Randomly Initialized Neural Networks with Iterative Randomization](https://papers.nips.cc/paper/2021/hash/23e582ad8087f2c03a5a31c125123f9a-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/dchiji-ntt/iterand) |
| [Sparse Training via Boosting Pruning Plasticity with Neuroregeneration](https://papers.nips.cc/paper/2021/hash/5227b6aaf294f5f027273aebf16015f2-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/VITA-Group/GraNet) |
| [AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks](https://papers.nips.cc/paper/2021/hash/48000647b315f6f00f913caa757a70b3-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/IST-DASLab/ACDC) |
| [A Winning Hand: Compressing Deep Networks Can Improve Out-of-Distribution Robustness](https://papers.nips.cc/paper/2021/hash/0607f4c705595b911a4f3e7a127b44e0-Abstract.html) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/RobustBench/robustbench) |
| [Rethinking the Pruning Criteria for Convolutional Neural Network](https://papers.nips.cc/paper/2021/hash/87ae6fb631f7c8a627e8e28785d9992d-Abstract.html) | NeurIPS | `F` | - |
| [Only Train Once: A One-Shot Neural Network Training And Pruning Framework](https://papers.nips.cc/paper/2021/hash/a376033f78e144f494bfc743c0be3330-Abstract.html) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/tianyic/onlytrainonce) |
| [CHIP: CHannel Independence-based Pruning for Compact Neural Networks](https://papers.nips.cc/paper/2021/hash/ce6babd060aa46c61a5777902cca78af-Abstract.html) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/Eclipsess/CHIP_NeurIPS2021) |
| [RED : Looking for Redundancies for Data-Free Structured Compression of Deep Neural Networks](https://papers.nips.cc/paper/2021/hash/ae5e3ce40e0404a45ecacaaf05e5f735-Abstract.html) | NeurIPS | `F` | - |
| [Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition](https://papers.nips.cc/paper/2021/hash/2adcfc3929e7c03fac3100d3ad51da26-Abstract.html) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/lucaslie/torchprune) |
| [Sparse Flows: Pruning Continuous-depth Models](https://papers.nips.cc/paper/2021/hash/bf1b2f4b901c21a1d8645018ea9aeb05-Abstract.html) | NeurIPS | `WF` | [PyTorch(Author)](https://github.com/lucaslie/torchprune) |
| [Scaling Up Exact Neural Network Compression by ReLU Stability](https://papers.nips.cc/paper/2021/hash/e35d7a5768c4b85b4780384d55dc3620-Abstract.html) | NeurIPS | `S` | [PyTorch(Author)](https://github.com/yuxwind/ExactCompression) |
| [Discriminator in GAN Compression: A Generator-discriminator Cooperative Compression Scheme](https://papers.nips.cc/paper/2021/hash/effc299a1addb07e7089f9b269c31f2f-Abstract.html) | NeurIPS | `S` | [PyTorch(Author)](https://github.com/SJLeo/GCC) |
| [Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks](https://papers.nips.cc/paper/2021/hash/f5c3dd7514bf620a1b85450d2ae374b1-Abstract.html) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/mbarsbey/sgd_comp_gen) |
| [ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting](https://openaccess.thecvf.com/content/ICCV2021/html/Ding_ResRep_Lossless_CNN_Pruning_via_Decoupling_Remembering_and_Forgetting_ICCV_2021_paper.html) | ICCV | `F` | [PyTorch(Author)](https://github.com/DingXiaoH/ResRep) |
| [Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search](https://openaccess.thecvf.com/content/ICCV2021/html/Zhan_Achieving_On-Mobile_Real-Time_Super-Resolution_With_Neural_Architecture_and_Pruning_Search_ICCV_2021_paper.html) | ICCV | `F` | - |
| [GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization](https://openaccess.thecvf.com/content/ICCV2021/html/Guo_GDP_Stabilized_Neural_Network_Pruning_via_Gates_With_Differentiable_Polarization_ICCV_2021_paper.html) | ICCV | `F` | - |
| [Auto Graph Encoder-Decoder for Neural Network Pruning](https://openaccess.thecvf.com/content/ICCV2021/html/Yu_Auto_Graph_Encoder-Decoder_for_Neural_Network_Pruning_ICCV_2021_paper.html) | ICCV | `F` | - |
| [Exploration and Estimation for Model Compression](https://papers.nips.cc/paper/2021/hash/5227b6aaf294f5f027273aebf16015f2-Abstract.html) | ICCV | `F` | - |
| [Sub-Bit Neural Networks: Learning To Compress and Accelerate Binary Neural Networks](https://openaccess.thecvf.com/content/ICCV2021/html/Wang_Sub-Bit_Neural_Networks_Learning_To_Compress_and_Accelerate_Binary_Neural_ICCV_2021_paper.html) | ICCV | `Other` | [PyTorch(Author)](https://github.com/yikaiw/SNN) |
| [On the Predictability of Pruning Across Scales](https://arxiv.org/abs/2006.10621) | ICML | `W` | - |
| [A Probabilistic Approach to Neural Network Pruning](https://arxiv.org/abs/2105.10065) | ICML | `F` | - |
| [Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework](https://arxiv.org/abs/2010.04879) | ICML | `F` | - |
| [Group Fisher Pruning for Practical Network Compression](https://arxiv.org/abs/2108.00708) | ICML | `F` | [PyTorch(Author)](https://github.com/jshilong/FisherPruning) |
| [Towards Compact CNNs via Collaborative Compression](https://arxiv.org/abs/2105.11228) | CVPR | `F` | [PyTorch(Author)](https://github.com/liuguoyou/Towards-Compact-CNNs-via-Collaborative-Compression) |
| [Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks](https://arxiv.org/abs/2010.15703) | CVPR | `F` | [PyTorch(Author)](https://github.com/uber-research/permute-quantize-finetune) |
| [NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration](https://arxiv.org/abs/2012.00596) | CVPR | `F` | - |
| [Network Pruning via Performance Maximization](https://openaccess.thecvf.com/content/CVPR2021/html/Gao_Network_Pruning_via_Performance_Maximization_CVPR_2021_paper.html) | CVPR | `F` | - |
| [Convolutional Neural Network Pruning with Structural Redundancy Reduction](https://arxiv.org/abs/2104.03438) | CVPR | `F` | - |
| [Manifold Regularized Dynamic Network Pruning](https://arxiv.org/abs/2103.05861) | CVPR | `F` | - |
| [Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation](https://arxiv.org/abs/2105.12971) | CVPR | `FO` | - |
| [Content-Aware GAN Compression](https://arxiv.org/abs/2104.02244) | CVPR | `S` | [PyTorch(Author)](https://github.com/lychenyoko/content-aware-gan-compression) |
| [Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network](https://openreview.net/forum?id=U_mat0b9iv) | ICLR | `W` | [PyTorch(Author)](https://github.com/chrundle/biprop) |
| [Layer-adaptive Sparsity for the Magnitude-based Pruning](https://openreview.net/forum?id=H6ATjJ0TKdf) | ICLR | `W` | [PyTorch(Author)](https://github.com/jaeho-lee/layer-adaptive-sparsity) |
| [Pruning Neural Networks at Initialization: Why Are We Missing the Mark?](https://openreview.net/forum?id=Ig-VyQc-MLK) | ICLR | `W` | - |
| [Robust Pruning at Initialization](https://openreview.net/forum?id=vXj_ucZQ4hA) | ICLR | `W` | - |
| [A Gradient Flow Framework For Analyzing Network Pruning](https://openreview.net/forum?id=rumv7QmLUue) | ICLR | `F` | [PyTorch(Author)](https://github.com/EkdeepSLubana/flowandprune) |
| [Neural Pruning via Growing Regularization](https://openreview.net/forum?id=o966_Is_nPA) | ICLR | `F` | [PyTorch(Author)](https://github.com/MingSun-Tse/Regularization-Pruning) |
| [ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations](https://openreview.net/forum?id=xCxXwTzx4L1) | ICLR | `F` | [PyTorch(Author)](https://github.com/transmuteAI/ChipNet) |
| [Network Pruning That Matters: A Case Study on Retraining Variants](https://openreview.net/forum?id=Cb54AMqHQFP) | ICLR | `F` | [PyTorch(Author)](https://github.com/lehduong/NPTM) |

### 2020

| Title | Venue | Type | Code |
|:-------------------------------------------------------------------------------------------------------------------------------- |:-----:|:-------:|:----:|
| [Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient](https://proceedings.neurips.cc/paper/2020/hash/1b742ae215adf18b75449c6e272fd92d-Abstract.html) | NeurIPS | `W` | - |
| [Winning the Lottery with Continuous Sparsification](https://arxiv.org/abs/1912.04427v4) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/lolemacs/continuous-sparsification) |
| [HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/inspire-group/hydra) |
| [Logarithmic Pruning is All You Need](https://arxiv.org/abs/2006.12156) | NeurIPS | `W` | - |
| [Directional Pruning of Deep Neural Networks](https://arxiv.org/abs/2006.09358) | NeurIPS | `W` | - |
| [Movement Pruning: Adaptive Sparsity by Fine-Tuning](https://arxiv.org/abs/2005.07683) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/huggingface/block_movement_pruning) |
| [Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot](https://arxiv.org/abs/2009.11094) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/JingtongSu/sanity-checking-pruning) |
| [Neuron Merging: Compensating for Pruned Neurons](https://arxiv.org/abs/2010.13160) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/friendshipkim/neuron-merging) |
| [Neuron-level Structured Pruning using Polarization Regularizer](https://papers.nips.cc/paper/2020/file/703957b6dd9e3a7980e040bee50ded65-Paper.pdf) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/polarizationpruning/PolarizationPruning) |
| [SCOP: Scientific Control for Reliable Neural Network Pruning](https://arxiv.org/abs/2010.10732) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/yehuitang/Pruning/tree/master/SCOP_NeurIPS2020) |
| [Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning](https://proceedings.neurips.cc/paper/2020/hash/a914ecef9c12ffdb9bede64bb703d877-Abstract.html) | NeurIPS | `F` | - |
| [The Generalization-Stability Tradeoff In Neural Network Pruning](https://arxiv.org/abs/1906.03728) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/bbartoldson/GeneralizationStabilityTradeoff) |
| [Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough](https://proceedings.neurips.cc/paper/2020/hash/be23c41621390a448779ee72409e5f49-Abstract.html) | NeurIPS | `WF` | - |
| [Pruning Filter in Filter](https://arxiv.org/abs/2009.14410) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/fxmeng/Pruning-Filter-in-Filter) |
| [Position-based Scaled Gradient for Model Quantization and Pruning](https://arxiv.org/abs/2005.11035) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/Jangho-Kim/PSG-pytorch) |
| [Bayesian Bits: Unifying Quantization and Pruning](https://arxiv.org/abs/2005.07093) | NeurIPS | `Other` | - |
| [Pruning neural networks without any data by iteratively conserving synaptic flow](https://arxiv.org/abs/2006.05467) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/ganguli-lab/Synaptic-Flow) |
| [Meta-Learning with Network Pruning](https://arxiv.org/abs/2007.03219) | ECCV | `W` | - |
| [Accelerating CNN Training by Pruning Activation Gradients](https://arxiv.org/abs/1908.00173) | ECCV | `W` | - |
| [EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning](https://arxiv.org/abs/2007.02491) | ECCV **(Oral)** | `F` | [PyTorch(Author)](https://github.com/anonymous47823493/EagleEye) |
| [DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation](https://arxiv.org/abs/2004.02164) | ECCV | `F` | - |
| [DHP: Differentiable Meta Pruning via HyperNetworks](https://arxiv.org/abs/2003.13683) | ECCV | `F` | [PyTorch(Author)](https://github.com/ofsoundof/dhp) |
| [DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search](https://arxiv.org/abs/2003.12563) | ECCV | `Other` | - |
| [Differentiable Joint Pruning and Quantization for Hardware Efficiency](https://arxiv.org/abs/2007.10463) | ECCV | `Other` | - |
| [Channel Pruning via Automatic Structure Search](https://arxiv.org/abs/2001.08565) | IJCAI | `F` | [PyTorch(Author)](https://github.com/lmbxmu/ABCPruner) |
| [Adversarial Neural Pruning with Latent Vulnerability Suppression](https://arxiv.org/abs/1908.04355) | ICML | `W` | - |
| [Proving the Lottery Ticket Hypothesis: Pruning is All You Need](https://arxiv.org/abs/2002.00585) | ICML | `W` | - |
| [Network Pruning by Greedy Subnetwork Selection](https://arxiv.org/abs/2003.01794) | ICML | `F` | - |
| [Operation-Aware Soft Channel Pruning using Differentiable Masks](https://arxiv.org/abs/2007.03938) | ICML | `F` | - |
| [DropNet: Reducing Neural Network Complexity via Iterative Pruning](https://proceedings.mlr.press/v119/tan20a.html) | ICML | `F` | - |
| [Soft Threshold Weight Reparameterization for Learnable Sparsity](https://arxiv.org/abs/2002.03231) | ICML | `WF` | [PyTorch(Author)](https://github.com/RAIVNLab/STR) |
| [Structured Compression by Weight Encryption for Unstructured Pruning and Quantization](https://arxiv.org/abs/1905.10138) | CVPR | `W` | - |
| [Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-Based Approach](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_Automatic_Neural_Network_Compression_by_Sparsity-Quantization_Joint_Learning_A_Constrained_CVPR_2020_paper.pdf) | CVPR | `W` | - |
| [Towards Efficient Model Compression via Learned Global Ranking](https://arxiv.org/abs/1904.12368) | CVPR **(Oral)** | `F` | [PyTorch(Author)](https://github.com/cmu-enyac/LeGR) |
| [HRank: Filter Pruning using High-Rank Feature Map](https://arxiv.org/abs/2002.10179) | CVPR **(Oral)** | `F` | [PyTorch(Author)](https://github.com/lmbxmu/HRank) |
| [Neural Network Pruning with Residual-Connections and Limited-Data](https://arxiv.org/abs/1911.08114) | CVPR **(Oral)** | `F` | - |
| [DMCP: Differentiable Markov Channel Pruning for Neural Networks](https://arxiv.org/abs/2005.03354) | CVPR **(Oral)** | `F` | [TensorFlow(Author)](https://github.com/zx55/dmcp) |
| [Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression](https://arxiv.org/abs/2003.08935) | CVPR | `F` | [PyTorch(Author)](https://github.com/ofsoundof/group_sparsity) |
| [Few Sample Knowledge Distillation for Efficient Network Compression](https://arxiv.org/abs/1812.01839) | CVPR | `F` | - |
| [Discrete Model Compression With Resource Constraint for Deep Neural Networks](http://openaccess.thecvf.com/content_CVPR_2020/html/Gao_Discrete_Model_Compression_With_Resource_Constraint_for_Deep_Neural_Networks_CVPR_2020_paper.html) | CVPR | `F` | - |
| [Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration](http://openaccess.thecvf.com/content_CVPR_2020/html/He_Learning_Filter_Pruning_Criteria_for_Deep_Convolutional_Neural_Networks_Acceleration_CVPR_2020_paper.html) | CVPR | `F` | - |
| [APQ: Joint Search for Network Architecture, Pruning and Quantization Policy](https://arxiv.org/abs/2006.08509) | CVPR | `F` | - |
| [Multi-Dimensional Pruning: A Unified Framework for Model Compression](http://openaccess.thecvf.com/content_CVPR_2020/html/Guo_Multi-Dimensional_Pruning_A_Unified_Framework_for_Model_Compression_CVPR_2020_paper.html) | CVPR **(Oral)** | `WF` | - |
| [A Signal Propagation Perspective for Pruning Neural Networks at Initialization](https://arxiv.org/abs/1906.06307) | ICLR **(Spotlight)** | `W` | - |
| [ProxSGD: Training Structured Neural Networks under Regularization and Constraints](https://openreview.net/forum?id=HygpthEtvr) | ICLR | `W` | [TF+PT(Author)](https://github.com/optyang/proxsgd) |
| [One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation](https://arxiv.org/abs/1912.00120) | ICLR | `W` | - |
| [Lookahead: A Far-sighted Alternative of Magnitude-based Pruning](https://arxiv.org/abs/2002.04809) | ICLR | `W` | [PyTorch(Author)](https://github.com/alinlab/lookahead_pruning) |
| [Data-Independent Neural Pruning via Coresets](https://arxiv.org/abs/1907.04018) | ICLR | `W` | - |
| [Provable Filter Pruning for Efficient Neural Networks](https://arxiv.org/abs/1911.07412) | ICLR | `F` | - |
| [Dynamic Model Pruning with Feedback](https://openreview.net/forum?id=SJem8lSFwB) | ICLR | `WF` | - |
| [Comparing Rewinding and Fine-tuning in Neural Network Pruning](https://arxiv.org/abs/2003.02389) | ICLR **(Oral)** | `WF` | [TensorFlow(Author)](https://github.com/lottery-ticket/rewinding-iclr20-public) |
| [AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates](https://arxiv.org/abs/1907.03141) | AAAI | `F` | - |
| [Reborn filters: Pruning convolutional neural networks with limited data](https://ojs.aaai.org/index.php/AAAI/article/view/6058) | AAAI | `F` | - |
| [DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks](http://arxiv.org/abs/1911.08020) | AAAI | `Other` | - |
| [Pruning from Scratch](http://arxiv.org/abs/1909.12579) | AAAI | `Other` | - |

### 2019

| Title | Venue | Type | Code |
|:-------|:--------:|:-------:|:-------:|
| [Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask](https://arxiv.org/abs/1905.01067) | NeurIPS | `W` | [TensorFlow(Author)](https://github.com/uber-research/deconstructing-lottery-tickets) |
| [One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers](https://arxiv.org/abs/1906.02773) | NeurIPS | `W` | - |
| [Global Sparse Momentum SGD for Pruning Very Deep Neural Networks](https://arxiv.org/abs/1909.12778) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/DingXiaoH/GSM-SGD) |
| [AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters](https://papers.nips.cc/paper/9521-autoprune-automatic-network-pruning-by-regularizing-auxiliary-parameters) | NeurIPS | `W` | - |
| [Network Pruning via Transformable Architecture Search](https://arxiv.org/abs/1905.09717) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/D-X-Y/NAS-Projects) |
| [Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks](https://arxiv.org/abs/1909.08174) | NeurIPS | `F` | [PyTorch(Author)](https://github.com/youzhonghui/gate-decorator-pruning) |
| [Model Compression with Adversarial Robustness: A Unified Optimization Framework](https://arxiv.org/abs/1902.03538) | NeurIPS | `Other` | [PyTorch(Author)](https://github.com/TAMU-VITA/ATMC) |
| [Adversarial Robustness vs Model Compression, or Both?](https://arxiv.org/abs/1903.12561) | ICCV | `W` | [PyTorch(Author)](https://github.com/yeshaokai/Robustness-Aware-Pruning-ADMM) |
| [MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning](https://arxiv.org/abs/1903.10258) | ICCV | `F` | [PyTorch(Author)](https://github.com/liuzechun/MetaPruning) |
| [Accelerate CNN via Recursive Bayesian Pruning](https://arxiv.org/abs/1812.00353) | ICCV | `F` | - |
| [Learning Filter Basis for Convolutional Neural Network Compression](https://arxiv.org/abs/1908.08932) | ICCV | `Other` | - |
| [Co-Evolutionary Compression for Unpaired Image Translation](https://arxiv.org/abs/1907.10804) | ICCV | `S` | - |
| [COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning](https://arxiv.org/abs/1906.10337) | IJCAI | `F` | [TensorFlow(Author)](https://github.com/ZJULearning/COP) |
| [Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration](https://arxiv.org/abs/1811.00250) | CVPR **(Oral)** | `F` | [PyTorch(Author)](https://github.com/he-y/filter-pruning-geometric-median) |
| [Towards Optimal Structured CNN Pruning via Generative Adversarial Learning](https://arxiv.org/abs/1903.09291) | CVPR | `F` | [PyTorch(Author)](https://github.com/ShaohuiLin/GAL) |
| [Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure](https://arxiv.org/abs/1904.03837) | CVPR | `F` | [PyTorch(Author)](https://github.com/ShawnDing1994/Centripetal-SGD) |
| [On Implicit Filter Level Sparsity in Convolutional Neural Networks](https://arxiv.org/abs/1811.12495), [Extension1](https://arxiv.org/abs/1905.04967), [Extension2](https://openreview.net/forum?id=rylVvNS3hE) | CVPR | `F` | [PyTorch(Author)](https://github.com/mehtadushy/SelecSLS-Pytorch) |
| [Structured Pruning of Neural Networks with Budget-Aware Regularization](https://arxiv.org/abs/1811.09332) | CVPR | `F` | - |
| [Importance Estimation for Neural Network Pruning](http://jankautz.com/publications/Importance4NNPruning_CVPR19.pdf) | CVPR | `F` | [PyTorch(Author)](https://github.com/NVlabs/Taylor_pruning) |
| [OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks](https://arxiv.org/abs/1905.11664) | CVPR | `F` | - |
| [Variational Convolutional Neural Network Pruning](https://openaccess.thecvf.com/content_CVPR_2019/html/Zhao_Variational_Convolutional_Neural_Network_Pruning_CVPR_2019_paper.html) | CVPR | `F` | - |
| [Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search](https://arxiv.org/abs/1903.03777) | CVPR | `Other` | [TensorFlow(Author)](https://github.com/lixincn2015/Partial-Order-Pruning) |
| [Collaborative Channel Pruning for Deep Networks](http://proceedings.mlr.press/v97/peng19c.html) | ICML | `F` | - |
| [Approximated Oracle Filter Pruning for Destructive CNN Width Optimization](https://arxiv.org/abs/1905.04748) | ICML | `F` | - |
| [EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis](https://arxiv.org/abs/1905.05934) | ICML | `F` | [PyTorch(Author)](https://github.com/alecwangcq/EigenDamage-Pytorch) |
| [The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https://arxiv.org/abs/1803.03635) | ICLR **(Best)** | `W` | [TensorFlow(Author)](https://github.com/google-research/lottery-ticket-hypothesis) |
| [SNIP: Single-shot Network Pruning based on Connection Sensitivity](https://arxiv.org/abs/1810.02340) | ICLR | `W` | [TensorFlow(Author)](https://github.com/namhoonlee/snip-public) |
| [Dynamic Channel Pruning: Feature Boosting and Suppression](https://arxiv.org/abs/1810.05331) | ICLR | `F` | [TensorFlow(Author)](https://github.com/deep-fry/mayo) |
| [Rethinking the Value of Network Pruning](https://arxiv.org/abs/1810.05270) | ICLR | `F` | [PyTorch(Author)](https://github.com/Eric-mingjie/rethinking-network-pruning) |
| [Dynamic Sparse Graph for Efficient Deep Learning](https://arxiv.org/abs/1810.00859) | ICLR | `F` | [CUDA(3rd)](https://github.com/mtcrawshaw/dynamic-sparse-graph) |

### 2018
| Title | Venue | Type | Code |
|:-------|:--------:|:-------:|:-------:|
| [Frequency-Domain Dynamic Pruning for Convolutional Neural Networks](https://papers.nips.cc/paper/7382-frequency-domain-dynamic-pruning-for-convolutional-neural-networks.pdf) | NeurIPS | `W` | - |
| [Discrimination-aware Channel Pruning for Deep Neural Networks](https://arxiv.org/abs/1810.11809) | NeurIPS | `F` | [TensorFlow(Author)](https://github.com/SCUT-AILab/DCP) |
| [Learning Sparse Neural Networks via Sensitivity-Driven Regularization](https://arxiv.org/pdf/1810.11764.pdf) | NeurIPS | `WF` | - |
| [Constraint-Aware Deep Neural Network Compression](https://openaccess.thecvf.com/content_ECCV_2018/html/Changan_Chen_Constraints_Matter_in_ECCV_2018_paper.html) | ECCV | `W` | [SkimCaffe(Author)](https://github.com/ChanganVR/ConstraintAwareCompression) |
| [A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers](https://arxiv.org/abs/1804.03294) | ECCV | `W` | [Caffe(Author)](https://github.com/KaiqiZhang/admm-pruning) |
| [AMC: AutoML for Model Compression and Acceleration on Mobile Devices](https://arxiv.org/abs/1802.03494) | ECCV | `F` | [TensorFlow(3rd)](https://github.com/Tencent/PocketFlow#channel-pruning) |
| [Data-Driven Sparse Structure Selection for Deep Neural Networks](https://arxiv.org/abs/1707.01213) | ECCV | `F` | [MXNet(Author)](https://github.com/TuSimple/sparse-structure-selection) |
| [Coreset-Based Neural Network Compression](https://arxiv.org/abs/1807.09810) | ECCV | `F` | [PyTorch(Author)](https://github.com/metro-smiles/CNN_Compression) |
| [Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks](https://arxiv.org/abs/1808.06866) | IJCAI | `F` | [PyTorch(Author)](https://github.com/he-y/soft-filter-pruning) |
| [Accelerating Convolutional Networks via Global & Dynamic Filter Pruning](https://www.ijcai.org/proceedings/2018/0336.pdf) | IJCAI | `F` | - |
| [Weightless: Lossy weight encoding for deep neural network compression](https://proceedings.mlr.press/v80/reagan18a.html) | ICML | `W` | - |
| [Compressing Neural Networks using the Variational Information Bottleneck](https://proceedings.mlr.press/v80/dai18d.html) | ICML | `F` | [PyTorch(Author)](https://github.com/zhuchen03/VIBNet) |
| [Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions](https://proceedings.mlr.press/v80/wu18h.html) | ICML | `Other` | [PyTorch(Author)](https://github.com/VITA-Group/Deep-K-Means-pytorch) |
| [CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization](https://openaccess.thecvf.com/content_cvpr_2018/html/Tung_CLIP-Q_Deep_Network_CVPR_2018_paper.html) | CVPR | `W` | - |
| [“Learning-Compression” Algorithms for Neural Net Pruning](http://faculty.ucmerced.edu/mcarreira-perpinan/papers/cvpr18.pdf) | CVPR | `W` | - |
| [PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning](https://arxiv.org/abs/1711.05769) | CVPR | `F` | [PyTorch(Author)](https://github.com/arunmallya/packnet) |
| [NISP: Pruning Networks using Neuron Importance Score Propagation](https://arxiv.org/abs/1711.05908) | CVPR | `F` | - |
| [To prune, or not to prune: exploring the efficacy of pruning for model compression](https://arxiv.org/abs/1710.01878) | ICLR | `W` | - |
| [Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers](https://arxiv.org/abs/1802.00124) | ICLR | `F` | [TensorFlow(Author)](https://github.com/bobye/batchnorm_prune), [PyTorch(3rd)](https://github.com/jack-willturner/batchnorm-pruning) |

### 2017
| Title | Venue | Type | Code |
|:-------|:--------:|:-------:|:-------:|
| [Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee](https://arxiv.org/abs/1611.05162) | NeurIPS | `W` | [TensorFlow(Author)](https://github.com/DNNToolBox/Net-Trim-v1) |
| [Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon](https://arxiv.org/abs/1705.07565) | NeurIPS | `W` | [PyTorch(Author)](https://github.com/csyhhu/L-OBS) |
| [Runtime Neural Pruning](https://papers.nips.cc/paper/6813-runtime-neural-pruning) | NeurIPS | `F` | - |
| [Structured Bayesian Pruning via Log-Normal Multiplicative Noise](https://papers.nips.cc/paper/2017/hash/dab49080d80c724aad5ebf158d63df41-Abstract.html) | NeurIPS | `F` | - |
| [Bayesian Compression for Deep Learning](https://proceedings.neurips.cc/paper/2017/hash/69d1fc78dbda242c43ad6590368912d4-Abstract.html) | NeurIPS | `F` | - |
| [ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression](https://arxiv.org/abs/1707.06342) | ICCV | `F` | [Caffe(Author)](https://github.com/Roll920/ThiNet), [PyTorch(3rd)](https://github.com/tranorrepository/reprod-thinet) |
| [Channel pruning for accelerating very deep neural networks](https://arxiv.org/abs/1707.06168) | ICCV | `F` | [Caffe(Author)](https://github.com/yihui-he/channel-pruning) |
| [Learning Efficient Convolutional Networks Through Network Slimming](https://arxiv.org/abs/1708.06519) | ICCV | `F` | [PyTorch(Author)](https://github.com/Eric-mingjie/network-slimming) |
| [Variational Dropout Sparsifies Deep Neural Networks](http://arxiv.org/abs/1701.05369) | ICML | `W` | - |
| [Combined Group and Exclusive Sparsity for Deep Neural Networks](https://proceedings.mlr.press/v70/yoon17a.html) | ICML | `WF` | - |
| [Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning](https://arxiv.org/abs/1611.05128) | CVPR | `W` | - |
| [Pruning Filters for Efficient ConvNets](https://arxiv.org/abs/1608.08710) | ICLR | `F` | [PyTorch(3rd)](https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/imagenet/l1-norm-pruning) |
| [Pruning Convolutional Neural Networks for Resource Efficient Inference](https://arxiv.org/abs/1611.06440) | ICLR | `F` | [TensorFlow(3rd)](https://github.com/Tencent/PocketFlow#channel-pruning) |

### 2016
| Title | Venue | Type | Code |
|:-------|:--------:|:-------:|:-------:|
| [Dynamic Network Surgery for Efficient DNNs](https://arxiv.org/abs/1608.04493) | NeurIPS | `W` | [Caffe(Author)](https://github.com/yiwenguo/Dynamic-Network-Surgery) |
| [Learning the Number of Neurons in Deep Networks](https://proceedings.neurips.cc/paper/2016/hash/6e7d2da6d3953058db75714ac400b584-Abstract.html) | NeurIPS | `F` | - |
| [Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding](https://arxiv.org/abs/1510.00149) | ICLR **(Best)** | `W` | [Caffe(Author)](https://github.com/songhan/Deep-Compression-AlexNet) |

### 2015

| Title | Venue | Type | Code |
|:-------|:--------:|:-------:|:-------:|
| [Learning both Weights and Connections for Efficient Neural Networks](https://arxiv.org/abs/1506.02626) | NeurIPS | `W` | [PyTorch(3rd)](https://github.com/jack-willturner/DeepCompression-PyTorch) |

## Related Repo

[Awesome-model-compression-and-acceleration](https://github.com/memoiry/Awesome-model-compression-and-acceleration)

[EfficientDNNs](https://github.com/MingSun-Tse/EfficientDNNs)

[Embedded-Neural-Network](https://github.com/ZhishengWang/Embedded-Neural-Network)

[awesome-AutoML-and-Lightweight-Models](https://github.com/guan-yuan/awesome-AutoML-and-Lightweight-Models)

[Model-Compression-Papers](https://github.com/chester256/Model-Compression-Papers)

[knowledge-distillation-papers](https://github.com/lhyfst/knowledge-distillation-papers)

[Network-Speed-and-Compression](https://github.com/mrgloom/Network-Speed-and-Compression)