# Awesome Compression Papers ![](https://visitor-badge.glitch.me/badge?page_id=lmbxmu.awesomecompression)

Paper collection about model compression and acceleration:

- 1. Pruning
  - 1.1. Filter Pruning
  - 1.2. Weight Pruning
- 2. Quantization
  - 2.1. Multi-bit Quantization
  - 2.2. 1-bit Quantization
- 3. Light-weight Design
- 4. Knowledge Distillation
- 5. Tensor Decomposition
- 6. Other

## 2020

### 2020-CVPR

#### 1. Pruning

##### 1.1. Filter Pruning

- 2020-CVPRo-[HRank: Filter Pruning Using High-Rank Feature Map](https://openaccess.thecvf.com/content_CVPR_2020/papers/Lin_HRank_Filter_Pruning_Using_High-Rank_Feature_Map_CVPR_2020_paper.pdf) [[Code](https://github.com/lmbxmu/HRank)]
- 2020-CVPR-[Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration](https://he-y.github.io/publication/2020_cvpr_lfpc/2020_CVPR_LFPC.pdf)
- 2020-CVPR-[DMCP: Differentiable Markov Channel Pruning for Neural Networks](https://openaccess.thecvf.com/content_CVPR_2020/papers/Guo_DMCP_Differentiable_Markov_Channel_Pruning_for_Neural_Networks_CVPR_2020_paper.pdf)
- 2020-CVPR-[Neural Network Pruning With Residual-Connections and Limited-Data](https://openaccess.thecvf.com/content_CVPR_2020/papers/Luo_Neural_Network_Pruning_With_Residual-Connections_and_Limited-Data_CVPR_2020_paper.pdf)
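
These criteria differ in how filters are scored (feature-map rank in HRank, learned criteria in LFPC, Markov modeling in DMCP), but the baseline they are usually measured against is plain norm-based ranking. A minimal PyTorch sketch of that baseline follows; the L1 criterion and the keep ratio are illustrative, and it assumes a standard `groups=1` convolution:

```python
import torch
import torch.nn as nn

def filter_importance_l1(conv: nn.Conv2d) -> torch.Tensor:
    # One score per output filter: the L1 norm of its weights.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def keep_top_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    # Keep the highest-scoring filters; downstream layers must be
    # adjusted accordingly (omitted in this sketch).
    scores = filter_importance_l1(conv)
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep = scores.topk(n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned
```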

##### 1.2. Weight Pruning

#### 2. Quantization

##### 2.1. Multi-bit Quantization

##### 2.2. 1-bit Quantization

#### 3. Light-weight Design

- 2020-CVPR-[GhostNet: More Features from Cheap Operations](https://arxiv.org/abs/1911.11907) [[Code](https://github.com/huawei-noah/ghostnet)]
- 2020-CVPR-[AdderNet: Do We Really Need Multiplications in Deep Learning?](https://arxiv.org/abs/1912.13200) [[Code](https://github.com/huawei-noah/AdderNet)]
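
GhostNet's core observation is that many feature maps are near-duplicates, so a small "primary" convolution plus cheap depthwise operations can replace a full one. A loose sketch of the idea; the 1x1/3x3 kernels and the 2x ratio are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class GhostModuleSketch(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        assert out_ch % ratio == 0, "illustrative constraint"
        primary_ch = out_ch // ratio
        # A cheap primary convolution produces a fraction of the channels...
        self.primary = nn.Conv2d(in_ch, primary_ch, 1, bias=False)
        # ...and a depthwise conv "generates" the remaining ghost features.
        self.cheap = nn.Conv2d(primary_ch, out_ch - primary_ch, 3,
                               padding=1, groups=primary_ch, bias=False)

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```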

#### 4. Knowledge Distillation

- 2020-CVPR-[Online Knowledge Distillation via Collaborative Learning](https://openaccess.thecvf.com/content_CVPR_2020/papers/Guo_Online_Knowledge_Distillation_via_Collaborative_Learning_CVPR_2020_paper.pdf)
- 2020-CVPR-[Regularizing Class-Wise Predictions via Self-Knowledge Distillation](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yun_Regularizing_Class-Wise_Predictions_via_Self-Knowledge_Distillation_CVPR_2020_paper.pdf)
- 2020-CVPR-[Explaining Knowledge Distillation by Quantifying the Knowledge](https://arxiv.org/abs/2003.03622)
- 2020-CVPR-[Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yin_Dreaming_to_Distill_Data-Free_Knowledge_Transfer_via_DeepInversion_CVPR_2020_paper.pdf) [[Code](https://github.com/NVlabs/DeepInversion)]
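
Most of these works extend the standard distillation objective: a temperature-softened KL term between teacher and student logits mixed with the usual cross-entropy on labels. A minimal sketch, with illustrative temperature and mixing weight:

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term, scaled by T^2 so gradient magnitudes stay
    # comparable across temperatures.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```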

#### 5. Tensor Decomposition

- 2020-CVPR-[Filter Grafting for Deep Neural Networks](https://arxiv.org/pdf/2001.05868.pdf)
- 2020-CVPR-[Structured Compression by Weight Encryption for Unstructured Pruning and Quantization](https://openaccess.thecvf.com/content_CVPR_2020/papers/Kwon_Structured_Compression_by_Weight_Encryption_for_Unstructured_Pruning_and_Quantization_CVPR_2020_paper.pdf)
- 2020-CVPR-[APQ: Joint Search for Network Architecture, Pruning and Quantization Policy](https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_APQ_Joint_Search_for_Network_Architecture_Pruning_and_Quantization_Policy_CVPR_2020_paper.pdf)
- 2020-CVPR-[Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression](https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Group_Sparsity_The_Hinge_Between_Filter_Pruning_and_Decomposition_for_CVPR_2020_paper.pdf) [[Code](https://github.com/ofsoundof/group_sparsity)]
- 2020-CVPR-[Multi-Dimensional Pruning: A Unified Framework for Model Compression](https://openaccess.thecvf.com/content_CVPR_2020/papers/Guo_Multi-Dimensional_Pruning_A_Unified_Framework_for_Model_Compression_CVPR_2020_paper.pdf)
- 2020-CVPR-[Discrete Model Compression With Resource Constraint for Deep Neural Networks](https://openaccess.thecvf.com/content_CVPR_2020/papers/Gao_Discrete_Model_Compression_With_Resource_Constraint_for_Deep_Neural_Networks_CVPR_2020_paper.pdf)
- 2020-CVPR-[Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-Based Approach](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_Automatic_Neural_Network_Compression_by_Sparsity-Quantization_Joint_Learning_A_Constrained_CVPR_2020_paper.pdf)
- 2020-CVPR-[Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer](https://openaccess.thecvf.com/content_CVPR_2020/papers/Idelbayev_Low-Rank_Compression_of_Neural_Nets_Learning_the_Rank_of_Each_CVPR_2020_paper.pdf)
- 2020-CVPR-[The Knowledge Within: Methods for Data-Free Model Compression](https://openaccess.thecvf.com/content_CVPR_2020/papers/Haroush_The_Knowledge_Within_Methods_for_Data-Free_Model_Compression_CVPR_2020_paper.pdf)
- 2020-CVPR-[GAN Compression: Efficient Architectures for Interactive Conditional GANs](https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_GAN_Compression_Efficient_Architectures_for_Interactive_Conditional_GANs_CVPR_2020_paper.pdf) [[Code](https://github.com/mit-han-lab/gan-compression)]
- 2020-CVPR-[Few Sample Knowledge Distillation for Efficient Network Compression](https://openaccess.thecvf.com/content_CVPR_2020/papers/Li_Few_Sample_Knowledge_Distillation_for_Efficient_Network_Compression_CVPR_2020_paper.pdf)
- 2020-CVPR-[Structured Multi-Hashing for Model Compression](https://openaccess.thecvf.com/content_CVPR_2020/papers/Eban_Structured_Multi-Hashing_for_Model_Compression_CVPR_2020_paper.pdf)
- 2020-CVPRo-[Towards Efficient Model Compression via Learned Global Ranking](https://arxiv.org/abs/1904.12368) [[Code](https://github.com/cmu-enyac/LeGR)]
- 2020-CVPR-[Training Quantized Neural Networks With a Full-Precision Auxiliary Module](https://arxiv.org/abs/1903.11236v3)
- 2020-CVPR-[Adaptive Loss-aware Quantization for Multi-bit Networks](https://arxiv.org/abs/1912.08883v4)
- 2020-CVPR-[ZeroQ: A Novel Zero Shot Quantization Framework](https://arxiv.org/abs/2001.00281v1)
- 2020-CVPR-[BiDet: An Efficient Binarized Object Detector](https://arxiv.org/abs/2003.03961v1) [[code](https://github.com/ZiweiWangTHU/BiDet)]
- 2020-CVPR-[Forward and Backward Information Retention for Accurate Binary Neural Networks](https://arxiv.org/abs/1909.10788v4)
- 2020-CVPR-[Binarizing MobileNet via Evolution-Based Searching](https://arxiv.org/abs/2005.06305v2)
- 2020-CVPR-[Collaborative Distillation for Ultra-Resolution Universal Style Transfer](https://arxiv.org/abs/2003.08436) [[Code](https://github.com/MingSun-Tse/Collaborative-Distillation)]
- 2020-CVPR-[Self-training with Noisy Student improves ImageNet classification](https://arxiv.org/abs/1911.04252) [[Code](https://github.com/google-research/noisystudent)]
- 2020-CVPR-[Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation From a Blackbox Model](https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_Neural_Networks_Are_More_Productive_Teachers_Than_Human_Raters_Active_CVPR_2020_paper.pdf)
- 2020-CVPR-[Heterogeneous Knowledge Distillation Using Information Flow Modeling](https://openaccess.thecvf.com/content_CVPR_2020/papers/Passalis_Heterogeneous_Knowledge_Distillation_Using_Information_Flow_Modeling_CVPR_2020_paper.pdf)
- 2020-CVPR-[Revisiting Knowledge Distillation via Label Smoothing Regularization](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yuan_Revisiting_Knowledge_Distillation_via_Label_Smoothing_Regularization_CVPR_2020_paper.pdf)
- 2020-CVPR-[Distilling Knowledge From Graph Convolutional Networks](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_Distilling_Knowledge_From_Graph_Convolutional_Networks_CVPR_2020_paper.pdf)
- 2020-CVPR-[MineGAN: Effective Knowledge Transfer From GANs to Target Domains With Few Images](https://openaccess.thecvf.com/content_CVPR_2020/papers/Wang_MineGAN_Effective_Knowledge_Transfer_From_GANs_to_Target_Domains_With_CVPR_2020_paper.pdf) [[Code](https://github.com/yaxingwang/MineGAN)]
- 2020-CVPR-[Distilling Cross-Task Knowledge via Relationship Matching](https://openaccess.thecvf.com/content_CVPR_2020/papers/Ye_Distilling_Cross-Task_Knowledge_via_Relationship_Matching_CVPR_2020_paper.pdf)
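
For the low-rank entries in this section, the basic move is replacing a weight matrix with the product of two thinner factors from a truncated SVD; "Low-Rank Compression of Neural Nets" learns the per-layer rank, which this toy sketch instead takes as a fixed argument:

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    # W (out x in) ~= U_r diag(S_r) V_r^T, stored as two smaller layers.
    U, S, Vh = torch.linalg.svd(layer.weight.detach(), full_matrices=False)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = (S[:rank].unsqueeze(1) * Vh[:rank]).clone()
    second.weight.data = U[:, :rank].clone()
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)
```

Parameter count drops from `out*in` to `rank*(out+in)`, so the rank choice directly trades accuracy against compression.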

#### 6. Other

### 2020-ECCV

- 2020-ECCV-EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning
- 2020-ECCV-ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions
- 2020-ECCV-Knowledge Distillation Meets Self-Supervision
- 2020-ECCV-Differentiable Feature Aggregation Search for Knowledge Distillation
- 2020-ECCV-Post-Training Piecewise Linear Quantization for Deep Neural Networks
- 2020-ECCV-GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework
- 2020-ECCV-Online Ensemble Model Compression using Knowledge Distillation
- 2020-ECCV-Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network
- 2020-ECCV-DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation
- 2020-ECCV-Accelerating CNN Training by Pruning Activation Gradients
- 2020-ECCV-DHP: Differentiable Meta Pruning via HyperNetworks
- 2020-ECCV-Differentiable Joint Pruning and Quantization for Hardware Efficiency
- 2020-ECCV-Meta-Learning with Network Pruning
- 2020-ECCV-BATS: Binary ArchitecTure Search
- 2020-ECCV-Learning Architectures for Binary Networks
- 2020-ECCV-DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search
- 2020-ECCV-Knowledge Transfer via Dense Cross-Layer Mutual-Distillation
- 2020-ECCV-Generative Low-bitwidth Data Free Quantization
- 2020-ECCV-HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs
- 2020-ECCV-Search What You Want: Barrier Penalty NAS for Mixed Precision Quantization
- 2020-ECCV-Rethinking Bottleneck Structure for Efficient Mobile Network Design
- 2020-ECCV-PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer
- ...
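
Several of the quantization entries above (the post-training piecewise-linear and data-free methods in particular) are refinements of plain per-tensor symmetric uniform quantization, which in its simplest form is just a scale, a round, and a clamp. A minimal sketch:

```python
import torch

def quantize_symmetric(w: torch.Tensor, bits: int = 8):
    # Per-tensor symmetric uniform quantization; assumes w is not all zeros.
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q.to(torch.int8 if bits <= 8 else torch.int32), scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale
```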

### 2020-NIPS

- 2020-NIPS-[Rotated Binary Neural Network](https://arxiv.org/abs/2009.13055v2) [[code](https://github.com/lmbxmu/RBNN)]
- 2020-NIPS-[Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks](https://arxiv.org/abs/2006.03143v1)
- 2020-NIPS-[Efficient Exact Verification of Binarized Neural Networks](https://arxiv.org/abs/2005.03597v1)
- 2020-NIPS-[Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks](https://arxiv.org/abs/2006.06880v1)
- 2020-NIPS-[Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot](https://arxiv.org/pdf/2009.11094)
- 2020-NIPS-Neuron Merging: Compensating for Pruned Neurons
- 2020-NIPS-Scientific Control for Reliable Neural Network Pruning
- 2020-NIPS-Neuron-level Structured Pruning using Polarization Regularizer
- 2020-NIPS-[Directional Pruning of Deep Neural Networks](https://arxiv.org/pdf/2006.09358)
- 2020-NIPS-[Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning](https://openreview.net/pdf?id=S1ewjhEFwr)
- 2020-NIPS-[Movement Pruning: Adaptive Sparsity by Fine-Tuning](https://arxiv.org/pdf/2005.07683)
- 2020-NIPS-[The Generalization-Stability Tradeoff In Neural Network Pruning](https://arxiv.org/pdf/1906.03728)
- 2020-NIPS-[Pruning neural networks without any data by conserving synaptic flow](https://arxiv.org/pdf/2006.05467)
- 2020-NIPS-Network Pruning via Greedy Optimization: Fast Rate and Efficient Algorithms
- 2020-NIPS-[HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509)
- 2020-NIPS-[Logarithmic Pruning is All You Need](https://arxiv.org/pdf/2006.12156)
- 2020-NIPS-[Pruning Filter in Filter](https://arxiv.org/pdf/2009.14410)
- 2020-NIPS-[Bayesian Bits: Unifying Quantization and Pruning](https://arxiv.org/pdf/2005.07093)
- 2020-NIPS-[Searching for Low-Bit Weights in Quantized Neural Networks](https://arxiv.org/pdf/2009.08695)
- 2020-NIPS-[Robust Quantization: One Model to Rule Them All](https://arxiv.org/pdf/2002.07686)
- 2020-NIPS-[Position-based Scaled Gradient for Model Quantization and Sparse Training](https://arxiv.org/pdf/2005.11035)
- 2020-NIPS-[Universally Quantized Neural Compression](https://arxiv.org/pdf/2006.09952)
- 2020-NIPS-[FleXOR: Trainable Fractional Quantization](https://arxiv.org/pdf/2009.04126)
- 2020-NIPS-[WoodFisher: Efficient Second-Order Approximation for Neural Network Compression](https://arxiv.org/pdf/2004.14340)
- 2020-NIPS-Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
- 2020-NIPS-Task-Oriented Feature Distillation
- 2020-NIPS-Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search
- 2020-NIPS-[Self-Distillation Amplifies Regularization in Hilbert Space](https://arxiv.org/pdf/2002.05715)
- 2020-NIPS-[MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers](https://arxiv.org/pdf/2002.10957)
- 2020-NIPS-[Self-Distillation as Instance-Specific Label Smoothing](https://arxiv.org/pdf/2006.05065)
- 2020-NIPS-Agree to Disagree: Adaptive Ensemble Knowledge Distillation in Gradient Space
- 2020-NIPS-Distributed Distillation for On-Device Learning
- 2020-NIPS-Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts
- 2020-NIPS-[Ensemble Distillation for Robust Model Fusion in Federated Learning](https://arxiv.org/pdf/2006.07242)
- 2020-NIPS-[Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation](https://arxiv.org/pdf/2006.14284)
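
The binary-network entries above (RBNN and the straight-through-estimator papers) all face the same obstacle: `sign()` has zero gradient almost everywhere. The common workaround they analyze or refine is the straight-through estimator, sketched here:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    # Forward: binarize with sign(). Backward: pass the gradient through
    # unchanged where |w| <= 1 (the "clipped identity" surrogate).
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()

binarize = BinarizeSTE.apply
```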

### 2020-ICML

- 2020-ICML-[PENNI: Pruned Kernel Sharing for Efficient CNN Inference](https://arxiv.org/abs/2005.07133)
- 2020-ICML-[Operation-Aware Soft Channel Pruning using Differentiable Masks](https://arxiv.org/abs/2007.03938)
- 2020-ICML-[DropNet: Reducing Neural Network Complexity via Iterative Pruning](https://proceedings.icml.cc/static/paper_files/icml/2020/2026-Paper.pdf)
- 2020-ICML-[Proving the Lottery Ticket Hypothesis: Pruning is All You Need](https://arxiv.org/abs/2002.00585)
- 2020-ICML-[AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks](https://arxiv.org/abs/2006.08198)
- 2020-ICML-[Adversarial Neural Pruning with Latent Vulnerability Suppression](https://arxiv.org/abs/1908.04355)
- 2020-ICML-[Feature-map-level Online Adversarial Knowledge Distillation](http://arxiv.org/abs/2002.01775v3)
- 2020-ICML-[Knowledge transfer with jacobian matching](https://arxiv.org/abs/1803.00443v1)
- 2020-ICML-[Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection](https://arxiv.org/abs/2003.01794v2)
- 2020-ICML-[Training Binary Neural Networks through Learning with Noisy Supervision](https://proceedings.icml.cc/paper/2020/hash/fc221309746013ac554571fbd180e1c8)
- 2020-ICML-[Multi-Precision Policy Enforced Training (MuPPET): A Precision-Switching Strategy for Quantised Fixed-Point Training of CNNs](https://proceedings.icml.cc/paper/2020/hash/dfb84a11f431c62436cfb760e30a34fe)
- 2020-ICML-[Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks](https://arxiv.org/abs/1906.06033v4)
- 2020-ICML-[Feature Quantization Improves GAN Training](https://arxiv.org/abs/2004.02088v2) [[code](https://github.com/YangNaruto/FQ-GAN)]
- 2020-ICML-[Towards Accurate Post-training Network Quantization via Bit-Split and Stitching](https://proceedings.icml.cc/paper/2020/hash/8d5e957f297893487bd98fa830fa6413)
- 2020-ICML-[Accelerating Large-Scale Inference with Anisotropic Vector Quantization](http://arxiv.org/abs/1908.10396v4)
- 2020-ICML-[Differentiable Product Quantization for Learning Compact Embedding Layers](http://arxiv.org/abs/1908.09756v3) [[code](https://github.com/chentingpc/dpq_embedding_compression)]
- 2020-ICML-[Up or Down? Adaptive Rounding for Post-Training Quantization](https://arxiv.org/abs/2004.10568v2)

### 2020-ICLR

- 2020-ICLR-[Lookahead: A Far-sighted Alternative of Magnitude-based Pruning](https://openreview.net/forum?id=ryl3ygHYDB) [[Code](https://github.com/alinlab/lookahead_pruning)]
- 2020-ICLR-[Dynamic Model Pruning with Feedback](https://openreview.net/pdf?id=SJem8lSFwB)
- 2020-ICLR-[Provable Filter Pruning for Efficient Neural Networks](https://openreview.net/forum?id=BJxkOlSYDH)
- 2020-ICLR-[Data-Independent Neural Pruning via Coresets](https://openreview.net/forum?id=H1gmHaEKwB)
- 2020-ICLR-[FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary](https://openreview.net/forum?id=S1xtORNFwH)
- 2020-ICLR-[Probabilistic Connection Importance Inference and Lossless Compression of Deep Neural Networks](https://openreview.net/forum?id=HJgCF0VFwr)
- 2020-ICLR-[BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget](https://openreview.net/forum?id=SklkDkSFPB)
- 2020-ICLR-[Neural Epitome Search for Architecture-Agnostic Network Compression](https://openreview.net/forum?id=HyxjOyrKvr)
- 2020-ICLR-[One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation](https://openreview.net/forum?id=r1e9GCNKvH)
- 2020-ICLR-[DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures](https://openreview.net/forum?id=rylBK34FDS) [[Code](https://github.com/yanghr/DeepHoyer)]
- 2020-ICLR-[Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers](https://openreview.net/forum?id=SJlbGJrtDB)
- 2020-ICLR-[Scalable Model Compression by Entropy Penalized Reparameterization](https://openreview.net/forum?id=HkgxW0EYDS)
- 2020-ICLR-[Mixed Precision DNNs: All you need is a good parametrization](https://arxiv.org/abs/1905.11452v3)
- 2020-ICLR-[A Signal Propagation Perspective for Pruning Neural Networks at Initialization](https://arxiv.org/abs/1906.06307v2)
- 2020-ICLR-[Linear Symmetric Quantization of Neural Networks for Low-precision Integer Hardware](https://openreview.net/forum?id=H1lBj2VFPS)
- 2020-ICLR-[AutoQ: Automated Kernel-Wise Neural Network Quantization](https://arxiv.org/abs/1902.05690v3)
- 2020-ICLR-[Additive Powers-of-Two Quantization: A Non-uniform Discretization for Neural Networks](https://arxiv.org/abs/1909.13144v2)
- 2020-ICLR-[Learned Step Size Quantization](https://arxiv.org/abs/1902.08153v3)
- 2020-ICLR-[Sampling-Free Learning of Bayesian Quantized Neural Networks](https://arxiv.org/abs/1912.02992v1)
- 2020-ICLR-[Gradient $\ell_1$ Regularization for Quantization Robustness](https://arxiv.org/abs/2002.07520v1)
- 2020-ICLR-[BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations](https://openreview.net/forum?id=r1x0lxrFPS) [[code](https://github.com/Hyungjun-K1m/BinaryDuo)]
- 2020-ICLR-[Training binary neural networks with real-to-binary convolutions](https://arxiv.org/abs/2003.11535v1) [[code](https://github.com/brais-martinez/real2binary)]
- 2020-ICLR-[Critical initialisation in continuous approximations of binary neural networks](https://arxiv.org/abs/1902.00177v2)
- 2020-ICLR-[Comparing Rewinding and Fine-tuning in Neural Network Pruning](https://arxiv.org/abs/2003.02389)
- 2020-ICLR-[ProxSGD: Training Structured Neural Networks under Regularization and Constraints](https://openreview.net/forum?id=HygpthEtvr)
- ...
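
As a concrete example of the quantization-aware-training entries above, Learned Step Size Quantization treats the quantizer's step size as a trainable parameter and passes rounding through with a straight-through gradient. A loose sketch; the paper's gradient scaling and initialization heuristics are omitted:

```python
import torch
import torch.nn as nn

class LSQQuantizerSketch(nn.Module):
    def __init__(self, bits: int = 4, init_step: float = 0.1):
        super().__init__()
        self.qmin = -(2 ** (bits - 1))
        self.qmax = 2 ** (bits - 1) - 1
        self.step = nn.Parameter(torch.tensor(init_step))  # learned step size

    def forward(self, w):
        q = torch.clamp(w / self.step, self.qmin, self.qmax)
        q = q + (q.round() - q).detach()  # straight-through round
        return q * self.step
```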

### 2020-AAAI

- 2020-AAAI-[Binarized Neural Architecture Search](https://arxiv.org/abs/1911.10862v2)

## 2019

### 2019-CVPR

- 2019-CVPRo-[HAQ: Hardware-Aware Automated Quantization with Mixed Precision](https://arxiv.org/pdf/1811.08886.pdf)
- 2019-CVPRo-[Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration](https://arxiv.org/abs/1811.00250) [[Code](https://github.com/he-y/filter-pruning-geometric-median)]
- 2019-CVPR-[All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification](https://arxiv.org/abs/1903.05285)
- 2019-CVPR-[Importance Estimation for Neural Network Pruning](http://openaccess.thecvf.com/content_CVPR_2019/html/Molchanov_Importance_Estimation_for_Neural_Network_Pruning_CVPR_2019_paper.html) [[Code](https://github.com/NVlabs/Taylor_pruning)]
- 2019-CVPR-[HetConv: Heterogeneous Kernel-Based Convolutions for Deep CNNs](https://arxiv.org/abs/1903.04120)
- 2019-CVPR-[Fully Learnable Group Convolution for Acceleration of Deep Neural Networks](https://arxiv.org/abs/1904.00346)
- 2019-CVPR-[Towards Optimal Structured CNN Pruning via Generative Adversarial Learning](https://arxiv.org/abs/1903.09291)
- 2019-CVPR-[ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation](http://openaccess.thecvf.com/content_CVPR_2019/papers/Dai_ChamNet_Towards_Efficient_Network_Design_Through_Platform-Aware_Model_Adaptation_CVPR_2019_paper.pdf)
- 2019-CVPR-[Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search](https://arxiv.org/pdf/1903.03777.pdf) [[Code](https://github.com/lixincn2015/Partial-Order-Pruning)]
- 2019-CVPR-[Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation](https://arxiv.org/pdf/1901.02985.pdf) [[Code](https://github.com/tensorflow/models/tree/master/research/deeplab)]
- 2019-CVPR-[MnasNet: Platform-Aware Neural Architecture Search for Mobile](https://arxiv.org/abs/1807.11626) [[Code](https://github.com/AnjieZheng/MnasNet-PyTorch)]
- 2019-CVPR-[MFAS: Multimodal Fusion Architecture Search](https://arxiv.org/pdf/1903.06496.pdf)
- 2019-CVPR-[A Neurobiological Evaluation Metric for Neural Network Model Search](https://arxiv.org/pdf/1805.10726.pdf)
- 2019-CVPR-[Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells](https://arxiv.org/abs/1810.10804)
- 2019-CVPR-[Efficient Neural Network Compression](https://arxiv.org/abs/1811.12781) [[Code](https://github.com/Hyeji-Kim/ENC)]
- 2019-CVPR-[T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor](http://openaccess.thecvf.com/content_CVPR_2019/html/Kossaifi_T-Net_Parametrizing_Fully_Convolutional_Nets_With_a_Single_High-Order_Tensor_CVPR_2019_paper.html)
- 2019-CVPR-[Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure](https://arxiv.org/abs/1904.03837) [[Code](https://github.com/ShawnDing1994/Centripetal-SGD)]
- 2019-CVPR-[DSC: Dense-Sparse Convolution for Vectorized Inference of Convolutional Neural Networks](http://openaccess.thecvf.com/content_CVPRW_2019/html/SAIAD/Frickenstein_DSC_Dense-Sparse_Convolution_for_Vectorized_Inference_of_Convolutional_Neural_Networks_CVPRW_2019_paper.html)
- 2019-CVPR-[DupNet: Towards Very Tiny Quantized CNN With Improved Accuracy for Face Detection](http://openaccess.thecvf.com/content_CVPRW_2019/html/EVW/Gao_DupNet_Towards_Very_Tiny_Quantized_CNN_With_Improved_Accuracy_for_CVPRW_2019_paper.html)
- 2019-CVPR-[ECC: Platform-Independent Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model](http://openaccess.thecvf.com/content_CVPR_2019/html/Yang_ECC_Platform-Independent_Energy-Constrained_Deep_Neural_Network_Compression_via_a_Bilinear_CVPR_2019_paper.html)
- 2019-CVPR-[Variational Convolutional Neural Network Pruning](http://openaccess.thecvf.com/content_CVPR_2019/html/Zhao_Variational_Convolutional_Neural_Network_Pruning_CVPR_2019_paper.html)
- 2019-CVPR-[Accelerating Convolutional Neural Networks via Activation Map Compression](http://openaccess.thecvf.com/content_CVPR_2019/html/Georgiadis_Accelerating_Convolutional_Neural_Networks_via_Activation_Map_Compression_CVPR_2019_paper.html)
- 2019-CVPR-[Compressing Convolutional Neural Networks via Factorized Convolutional Filters](http://openaccess.thecvf.com/content_CVPR_2019/html/Li_Compressing_Convolutional_Neural_Networks_via_Factorized_Convolutional_Filters_CVPR_2019_paper.html)
- 2019-CVPR-[Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks](http://openaccess.thecvf.com/content_CVPR_2019/html/Kim_Deep_Virtual_Networks_for_Memory_Efficient_Inference_of_Multiple_Tasks_CVPR_2019_paper.html)
- 2019-CVPR-[Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression](http://openaccess.thecvf.com/content_CVPR_2019/html/Li_Exploiting_Kernel_Sparsity_and_Entropy_for_Interpretable_CNN_Compression_CVPR_2019_paper.html)
- 2019-CVPR-[MBS: Macroblock Scaling for CNN Model Reduction](http://openaccess.thecvf.com/content_CVPR_2019/html/Lin_MBS_Macroblock_Scaling_for_CNN_Model_Reduction_CVPR_2019_paper.html)
- 2019-CVPR-[On Implicit Filter Level Sparsity in Convolutional Neural Networks](http://openaccess.thecvf.com/content_CVPR_2019/html/Mehta_On_Implicit_Filter_Level_Sparsity_in_Convolutional_Neural_Networks_CVPR_2019_paper.html)
- 2019-CVPR-[Structured Pruning of Neural Networks With Budget-Aware Regularization](http://openaccess.thecvf.com/content_CVPR_2019/html/Lemaire_Structured_Pruning_of_Neural_Networks_With_Budget-Aware_Regularization_CVPR_2019_paper.html)
- 2019-CVPRo-[Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization](https://arxiv.org/abs/1812.00481) [[Code](https://github.com/joe-siyuan-qiao/NeuralRejuvenation-CVPR19)]
- 2019-CVPR-[Knowledge Representing: Efficient, Sparse Representation of Prior Knowledge for Knowledge Distillation](http://openaccess.thecvf.com/content_CVPRW_2019/html/CEFRL/Liu_Knowledge_Representing_Efficient_Sparse_Representation_of_Prior_Knowledge_for_Knowledge_CVPRW_2019_paper.html)
- 2019-CVPR-[Knowledge Distillation via Instance Relationship Graph](http://openaccess.thecvf.com/content_CVPR_2019/html/Liu_Knowledge_Distillation_via_Instance_Relationship_Graph_CVPR_2019_paper.html)
- 2019-CVPR-[Variational Information Distillation for Knowledge Transfer](http://openaccess.thecvf.com/content_CVPR_2019/html/Ahn_Variational_Information_Distillation_for_Knowledge_Transfer_CVPR_2019_paper.html)
- 2019-CVPR-[Learning Metrics from Teachers: Compact Networks for Image Embedding](http://openaccess.thecvf.com/content_CVPR_2019/html/Yu_Learning_Metrics_From_Teachers_Compact_Networks_for_Image_Embedding_CVPR_2019_paper.html) [[Code](https://github.com/yulu0724/EmbeddingDistillation)]
- 2019-CVPR-Enhanced Bayesian Compression via Deep Reinforcement Learning
- 2019-CVPR-Cross Domain Model Compression by Structurally Weight Sharing
- 2019-CVPR-Cascaded Projection: End-To-End Network Compression and Acceleration
- 2019-CVPR-Fully Quantized Network for Object Detection
- 2019-CVPR-Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss
- 2019-CVPR-Quantization Networks
- 2019-CVPR-SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity Through Low-Bit Quantization
- 2019-CVPR-Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network Using Truncated Gaussian Approximation
- 2019-CVPR-Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?
- 2019-CVPR-A Main/Subsidiary Network Framework for Simplifying Binary Neural Networks
- 2019-CVPR-Regularizing Activation Distribution for Training Binarized Deep Networks
- 2019-CVPR-Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation
- 2019-CVPR-Learning Channel-Wise Interactions for Binary Convolutional Neural Networks
- 2019-CVPR-Circulant Binary Convolutional Networks: Enhancing the Performance of 1-Bit DCNNs With Circulant Back Propagation
- 2019-CVPR-[OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks](https://arxiv.org/abs/1905.11664)
- ...
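
Among the pruning entries, "Importance Estimation for Neural Network Pruning" scores filters with a first-order Taylor expansion of the loss, which reduces to accumulating (gradient x weight) statistics. A minimal sketch, assuming a backward pass has already populated `.grad`:

```python
import torch

def taylor_importance(conv_weight: torch.Tensor) -> torch.Tensor:
    # Squared first-order Taylor term per output filter; in practice the
    # scores are averaged over many mini-batches before pruning.
    assert conv_weight.grad is not None, "run a backward pass first"
    return (conv_weight.grad * conv_weight).pow(2).sum(dim=(1, 2, 3))
```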

### 2019-ICCV

- 2019-ICCV-[Rethinking ImageNet Pre-training](https://arxiv.org/abs/1811.08883)
- 2019-ICCV-[Universally Slimmable Networks and Improved Training Techniques](https://arxiv.org/abs/1903.05134)
- 2019-ICCV-[MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning](https://arxiv.org/abs/1903.10258) [[Code](https://github.com/liuzechun/MetaPruning)]
- 2019-ICCV-[Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) [[Code](https://github.com/chenxin061/pdarts)]
- 2019-ICCV-[ACNet: Strengthening the Kernel Skeletons for Powerful CNN via Asymmetric Convolution Blocks](https://arxiv.org/abs/1908.03930)
- 2019-ICCV-[A Comprehensive Overhaul of Feature Distillation](http://openaccess.thecvf.com/content_ICCV_2019/html/Heo_A_Comprehensive_Overhaul_of_Feature_Distillation_ICCV_2019_paper.html)
- 2019-ICCV-[Similarity-Preserving Knowledge Distillation](https://arxiv.org/abs/1907.09682)
- 2019-ICCV-[Correlation Congruence for Knowledge Distillation](https://arxiv.org/abs/1904.01802)
- 2019-ICCV-[Data-Free Learning of Student Networks](https://arxiv.org/abs/1904.01186)
- 2019-ICCV-[Learning Lightweight Lane Detection CNNs by Self Attention Distillation](https://arxiv.org/abs/1908.00821) [[Code](https://github.com/cardwing/Codes-for-Lane-Detection)]
- 2019-ICCV-[Attention bridging network for knowledge transfer](http://openaccess.thecvf.com/content_ICCV_2019/html/Li_Attention_Bridging_Network_for_Knowledge_Transfer_ICCV_2019_paper.html)
- 2019-ICCV-Learning Filter Basis for Convolutional Neural Network Compression
- 2019-ICCV-[Accelerate CNN via Recursive Bayesian Pruning](https://arxiv.org/abs/1812.00353)
- 2019-ICCV-[Adversarial Robustness vs Model Compression, or Both?](https://arxiv.org/abs/1903.12561)
- ...
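
"Similarity-Preserving Knowledge Distillation" above matches batch-wise activation-similarity matrices instead of logits: two inputs the teacher treats as similar should also look similar to the student. A short sketch of that loss:

```python
import torch
import torch.nn.functional as F

def sp_kd_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
    # f_s, f_t: (batch, ...) feature maps from student and teacher.
    def similarity(f):
        a = f.flatten(1)              # (batch, features)
        g = a @ a.t()                 # (batch, batch) similarity matrix
        return F.normalize(g, p=2, dim=1)
    b = f_s.size(0)
    return ((similarity(f_s) - similarity(f_t)) ** 2).sum() / (b * b)
```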

### 2019-NIPS

- 2019-NIPS-[Global Sparse Momentum SGD for Pruning Very Deep Neural Networks](https://arxiv.org/abs/1909.12778)
- 2019-NIPS-[Model Compression with Adversarial Robustness: A Unified Optimization Framework](http://papers.nips.cc/paper/8410-model-compression-with-adversarial-robustness-a-unified-optimization-framework)
- 2019-NIPS-[AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters](https://nips.cc/Conferences/2019/Schedule?showEvent=14303)
- 2019-NIPS-[Double Quantization for Communication-Efficient Distributed Optimization](https://nips.cc/Conferences/2019/Schedule?showEvent=13598)
- 2019-NIPS-[Focused Quantization for Sparse CNNs](https://nips.cc/Conferences/2019/Schedule?showEvent=13686)
- 2019-NIPS-[E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings](http://papers.nips.cc/paper/8757-e2-train-training-state-of-the-art-cnns-with-over-80-less-energy)
- 2019-NIPS-[MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization](https://papers.nips.cc/paper/8647-metaquant-learning-to-quantize-by-learning-to-penetrate-non-differentiable-quantization)
- 2019-NIPS-[Random Projections with Asymmetric Quantization](https://papers.nips.cc/paper/9268-random-projections-with-asymmetric-quantization)
- 2019-NIPS-[Network Pruning via Transformable Architecture Search](https://arxiv.org/abs/1905.09717) [[Code](https://github.com/D-X-Y/TAS)]
- 2019-NIPS-[Point-Voxel CNN for Efficient 3D Deep Learning](http://papers.nips.cc/paper/8382-point-voxel-cnn-for-efficient-3d-deep-learning) [[Code](https://github.com/mit-han-lab/pvcnn)]
- 2019-NIPS-[Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks](https://papers.nips.cc/paper/8486-gate-decorator-global-filter-pruning-method-for-accelerating-deep-convolutional-neural-networks)
- 2019-NIPS-[A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off](https://papers.nips.cc/paper/8926-a-mean-field-theory-of-quantized-deep-networks-the-quantization-depth-trade-off)
- 2019-NIPS-[Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations](https://papers.nips.cc/paper/9610-qsparse-local-sgd-distributed-sgd-with-quantization-sparsification-and-local-computations)
- 2019-NIPS-[Post training 4-bit quantization of convolutional networks for rapid-deployment](https://papers.nips.cc/paper/9008-post-training-4-bit-quantization-of-convolutional-networks-for-rapid-deployment)
- 2019-NIPS-[Zero-shot Knowledge Transfer via Adversarial Belief Matching](https://papers.nips.cc/paper/9151-zero-shot-knowledge-transfer-via-adversarial-belief-matching) [[Code](https://github.com/polo5/ZeroShotKnowledgeTransfer)] (spotlight)
- 2019-NIPS-Efficient and Effective Quantization for Sparse DNNs
- 2019-NIPS-[Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization.pdf)
- 2019-NIPS-[PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization](https://arxiv.org/pdf/1905.13727.pdf)
- 2019-NIPS-Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
- 2019-NIPS-Channel Gating Neural Network
- 2019-NIPS-[Positive-Unlabeled Compression on the Cloud](https://arxiv.org/abs/1909.09757)
- 2019-NIPS-[Einconv: Exploring Unexplored Tensor Decompositions for Convolutional Neural Networks](https://arxiv.org/abs/1908.04471) [[Code](https://github.com/pfnet-research/einconv)]
- 2019-NIPS-[A Tensorized Transformer for Language Modeling](https://arxiv.org/pdf/1906.09777.pdf)
- 2019-NIPS-[Shallow RNN: Accurate Time-series Classification on Resource Constrained Devices](https://papers.nips.cc/paper/9451-shallow-rnn-accurate-time-series-classification-on-resource-constrained-devices.pdf)
- 2019-NIPS-[CondConv: Conditionally Parameterized Convolutions for Efficient Inference](https://papers.nips.cc/paper/8412-condconv-conditionally-parameterized-convolutions-for-efficient-inference.pdf)
- 2019-NIPS-[SCAN: A Scalable Neural Networks Framework Towards Compact and Efficient Models](https://papers.nips.cc/paper/8657-scan-a-scalable-neural-networks-framework-towards-compact-and-efficient-models.pdf)
- 2019-NIPS-[Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks](https://papers.nips.cc/paper/8736-hybrid-8-bit-floating-point-hfp8-training-and-inference-for-deep-neural-networks.pdf)
- 2019-NIPS-[Backprop with Approximate Activations for Memory-efficient Network Training](https://arxiv.org/pdf/1901.07988.pdf)
- 2019-NIPS-Dimension-Free Bounds for Low-Precision Training
- 2019-NIPS-[One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers](https://arxiv.org/abs/1906.02773)
- ...

### 2019-ICML

- 2019-ICML-[Approximated Oracle Filter Pruning for Destructive CNN Width Optimization](https://arxiv.org/abs/1905.04748) [[Code](https://github.com/ShawnDing1994/AOFP)]
- 2019-ICML-[EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis](https://arxiv.org/abs/1905.05934) [[Code](https://github.com/alecwangcq/EigenDamage-Pytorch)]
- 2019-ICML-[Zero-Shot Knowledge Distillation in Deep Networks](https://arxiv.org/abs/1905.08114) [[Code](https://github.com/vcl-iisc/ZSKD)]
- 2019-ICML-[LegoNet: Efficient Convolutional Neural Networks with Lego Filters](http://proceedings.mlr.press/v97/yang19c.html) [[Code](https://github.com/zhaohui-yang/LegoNet_pytorch)]
- 2019-ICML-[EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) [[Code](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)]
- 2019-ICML-[Collaborative Channel Pruning for Deep Networks](http://proceedings.mlr.press/v97/peng19c.html)
- 2019-ICML-[Training CNNs with Selective Allocation of Channels](http://proceedings.mlr.press/v97/jeong19c.html)
- 2019-ICML-[NAS-Bench-101: Towards Reproducible Neural Architecture Search](https://arxiv.org/abs/1902.09635) [[Code](https://github.com/google-research/nasbench)]
- 2019-ICMLw-[Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks](https://arxiv.org/abs/1904.09872) [[Code](https://github.com/yochaiz/Slimmable)] (AutoML workshop)
- 2019-ICML-Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
- 2019-ICML-Same, Same But Different: Recovering Neural Network Quantization Error Through Weight Factorization
- 2019-ICML-Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization
- 2019-ICML-LIT: Learned Intermediate Representation Training for Model Compression
- 2019-ICML-Towards Understanding Knowledge Distillation
- 2019-ICML-Rate Distortion For Model Compression: From Theory To Practice
- ...

### 2019-ICLR

- 2019-ICLRo-[The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https://openreview.net/forum?id=rJl-b3RcF7) (best paper!)
- 2019-ICLR-[Slimmable Neural Networks](https://openreview.net/forum?id=H1gMCsAqY7) [[Code](https://github.com/JiahuiYu/slimmable_networks)]
- 2019-ICLR-[Defensive Quantization: When Efficiency Meets Robustness](https://arxiv.org/abs/1904.08444)
- 2019-ICLR-[Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters](https://openreview.net/forum?id=r1f0YiCctm) [[Code](https://github.com/cambridge-mlg/miracle)]
- 2019-ICLR-[ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware](https://arxiv.org/abs/1812.00332) [[Code](https://github.com/MIT-HAN-LAB/ProxylessNAS)]
- 2019-ICLR-[SNIP: Single-shot Network Pruning based on Connection Sensitivity](https://openreview.net/forum?id=B1VZqjAcYX)
- 2019-ICLR-[Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach](https://openreview.net/forum?id=BJgqqsAct7)
- 2019-ICLR-[Dynamic Channel Pruning: Feature Boosting and Suppression](https://openreview.net/forum?id=BJxh2j0qYm)
- 2019-ICLR-[Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking](https://openreview.net/forum?id=BylBr3C9K7)
- 2019-ICLR-[RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks](https://openreview.net/forum?id=H1gTEj09FX)
- 2019-ICLR-[Dynamic Sparse Graph for Efficient Deep Learning](https://openreview.net/forum?id=H1goBoR9F7)
- 2019-ICLR-[Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition](https://openreview.net/forum?id=HJMHpjC9Ym)
- 2019-ICLR-[Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds](https://openreview.net/forum?id=HJfwJ2A5KX)
- 2019-ICLR-[Learning Recurrent Binary/Ternary Weights](https://openreview.net/forum?id=HkNGYjR9FX)
- 2019-ICLR-[Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network](https://openreview.net/forum?id=HkfYOoCcYX)
- 2019-ICLR-[Relaxed Quantization for Discretized Neural Networks](https://openreview.net/forum?id=HkxjYoCqKX)
- 2019-ICLR-[Integer Networks for Data Compression with Latent-Variable Models](https://openreview.net/forum?id=S1zz2i0cY7)
- 2019-ICLR-[Analysis of Quantized Models](https://openreview.net/forum?id=ryM_IoAqYX)
- 2019-ICLR-[DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055) [[Code](https://github.com/quark0/darts)]
- 2019-ICLR-[Graph HyperNetworks for Neural Architecture Search](https://arxiv.org/abs/1810.05749)
- 2019-ICLR-[Learnable Embedding Space for Efficient Neural Architecture Compression](https://openreview.net/forum?id=S1xLN3C9YX) [[Code](https://github.com/Friedrich1006/ESNAC)]
- 2019-ICLR-[Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution](https://arxiv.org/abs/1804.09081)
- 2019-ICLR-[SNAS: stochastic neural architecture search](https://openreview.net/pdf?id=rylqooRqK7)
- 2019-ICLR-[Integral Pruning on Activations and Weights for Efficient Neural Networks](https://openreview.net/forum?id=HyevnsCqtQ)
- 2019-ICLR-Rethinking the Value of Network Pruning
- 2019-ICLR-ProxQuant: Quantized Neural Networks via Proximal Operators
- 2019-ICLR-Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm
- 2019-ICLR-Combinatorial Attacks on Binarized Neural Networks
- 2019-ICLR-ARM: Augment-REINFORCE-Merge Gradient for Stochastic Binary Networks
- 2019-ICLR-An Empirical study of Binary Neural Networks' Optimisation
- 2019-ICLR-On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks
- 2019-ICLR-Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets
- ...
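
SNIP, listed above, prunes once at initialization by ranking connections with the sensitivity |gradient x weight| computed from a single mini-batch. A rough sketch of the saliency computation (SNIP itself derives this via auxiliary gate variables, and the threshold is applied globally across layers):

```python
import torch
import torch.nn.functional as F

def snip_saliencies(model, x, y):
    # One forward/backward pass on an initialization-time mini-batch.
    params = [p for p in model.parameters() if p.requires_grad]
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, params)
    # Connection sensitivity per weight; keep the global top fraction.
    return [(g * p).abs() for g, p in zip(grads, params)]
```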

## 2018

### 2018-CVPR

- 2018-CVPR-[Context-Aware Deep Feature Compression for High-Speed Visual Tracking](http://openaccess.thecvf.com/content_cvpr_2018/papers/Choi_Context-Aware_Deep_Feature_CVPR_2018_paper.pdf)
- 2018-CVPR-[NISP: Pruning Networks using Neuron Importance Score Propagation](https://arxiv.org/pdf/1711.05908.pdf)
- 2018-CVPR-[Condensenet: An efficient densenet using learned group convolutions](http://openaccess.thecvf.com/content_cvpr_2018/html/Huang_CondenseNet_An_Efficient_CVPR_2018_paper.html) [[Code](https://github.com/ShichenLiu/CondenseNet)]
- 2018-CVPR-[Shift: A zero flop, zero parameter alternative to spatial convolutions](http://openaccess.thecvf.com/content_cvpr_2018/html/Wu_Shift_A_Zero_CVPR_2018_paper.html)
- 2018-CVPR-[Explicit Loss-Error-Aware Quantization for Low-Bit Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Zhou_Explicit_Loss-Error-Aware_Quantization_CVPR_2018_paper.html)
- 2018-CVPR-[Interleaved structured sparse convolutional neural networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Xie_Interleaved_Structured_Sparse_CVPR_2018_paper.html)
- 2018-CVPR-[Towards Effective Low-bitwidth Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhuang_Towards_Effective_Low-Bitwidth_CVPR_2018_paper.pdf)
- 2018-CVPR-[CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization](http://openaccess.thecvf.com/content_cvpr_2018/html/Tung_CLIP-Q_Deep_Network_CVPR_2018_paper.html)
- 2018-CVPR-[Blockdrop: Dynamic inference paths in residual networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Wu_BlockDrop_Dynamic_Inference_CVPR_2018_paper.html)
- 2018-CVPR-[Nestednet: Learning nested sparse structures in deep neural networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Kim_NestedNet_Learning_Nested_CVPR_2018_paper.html)
- 2018-CVPR-[Stochastic downsampling for cost-adjustable inference and improved regularization in convolutional networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Kuen_Stochastic_Downsampling_for_CVPR_2018_paper.html)
- 2018-CVPR-[Wide Compression: Tensor Ring Nets](http://openaccess.thecvf.com/content_cvpr_2018/html/Wang_Wide_Compression_Tensor_CVPR_2018_paper.html)
- 2018-CVPR-[Learning Compact Recurrent Neural Networks With Block-Term Tensor Decomposition](http://openaccess.thecvf.com/content_cvpr_2018/html/Ye_Learning_Compact_Recurrent_CVPR_2018_paper.html)
- 2018-CVPR-[Learning Time/Memory-Efficient Deep Architectures With Budgeted Super Networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Veniat_Learning_TimeMemory-Efficient_Deep_CVPR_2018_paper.html)
- 2018-CVPR-[HydraNets: Specialized Dynamic Architectures for Efficient Inference](http://openaccess.thecvf.com/content_cvpr_2018/html/Mullapudi_HydraNets_Specialized_Dynamic_CVPR_2018_paper.html)
- 2018-CVPR-[SYQ: Learning Symmetric Quantization for Efficient Deep Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Faraone_SYQ_Learning_Symmetric_CVPR_2018_paper.html)
- 2018-CVPR-[Towards Effective Low-Bitwidth Convolutional Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Zhuang_Towards_Effective_Low-Bitwidth_CVPR_2018_paper.html)
- 2018-CVPR-[Two-Step Quantization for Low-Bit Neural Networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Wang_Two-Step_Quantization_for_CVPR_2018_paper.html)
- 2018-CVPR-[Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](http://openaccess.thecvf.com/content_cvpr_2018/html/Jacob_Quantization_and_Training_CVPR_2018_paper.html)
- 2018-CVPR-["Learning-Compression" Algorithms for Neural Net Pruning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Carreira-Perpinan_Learning-Compression_Algorithms_for_CVPR_2018_paper.pdf)
- 2018-CVPR-[PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning](https://arxiv.org/pdf/1711.05769v2.pdf) [[Code](https://github.com/arunmallya/packnet)]
- 2018-CVPR-[MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks](http://openaccess.thecvf.com/content_cvpr_2018/html/Gordon_MorphNet_Fast__CVPR_2018_paper.html) [[Code](https://github.com/google-research/morph-net)]
- 2018-CVPR-[ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](http://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ShuffleNet_An_Extremely_CVPR_2018_paper.html)
- 2018-CVPRw-[Squeezenext: Hardware-aware neural network design](http://openaccess.thecvf.com/content_cvpr_2018_workshops/w33/html/Gholami_SqueezeNext_Hardware-Aware_Neural_CVPR_2018_paper.html)
- 2018-CVPR-Quantization of Fully Convolutional Networks for Accurate Biomedical Image Segmentation
- ...

### 2018-ECCV

- 2018-ECCV-[A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers](http://openaccess.thecvf.com/content_ECCV_2018/papers/Tianyun_Zhang_A_Systematic_DNN_ECCV_2018_paper.pdf)
- 2018-ECCV-[Coreset-Based Neural Network Compression](http://openaccess.thecvf.com/content_ECCV_2018/papers/Abhimanyu_Dubey_Coreset-Based_Convolutional_Neural_ECCV_2018_paper.pdf)
- 2018-ECCV-[Data-Driven Sparse Structure Selection for Deep Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/papers/Zehao_Huang_Data-Driven_Sparse_Structure_ECCV_2018_paper.pdf) [[Code](https://github.com/TuSimple/sparse-structure-selection)]
- 2018-ECCV-[Training Binary Weight Networks via Semi-Binary Decomposition](http://openaccess.thecvf.com/content_ECCV_2018/html/Qinghao_Hu_Training_Binary_Weight_ECCV_2018_paper.html)
- 2018-ECCV-[Learning Compression from Limited Unlabeled Data](http://openaccess.thecvf.com/content_ECCV_2018/html/Xiangyu_He_Learning_Compression_from_ECCV_2018_paper.html)
- 2018-ECCV-[Constraint-Aware Deep Neural Network Compression](http://openaccess.thecvf.com/content_ECCV_2018/html/Changan_Chen_Constraints_Matter_in_ECCV_2018_paper.html)
- 2018-ECCV-[Sparsely Aggregated Convolutional Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Ligeng_Zhu_Sparsely_Aggregated_Convolutional_ECCV_2018_paper.html)
- 2018-ECCV-[Deep Expander Networks: Efficient Deep Networks from Graph Theory](http://openaccess.thecvf.com/content_ECCV_2018/html/Ameya_Prabhu_Deep_Expander_Networks_ECCV_2018_paper.html) [[Code](https://github.com/DrImpossible/Deep-Expander-Networks)]
- 2018-ECCV-[SparseNet-Sparsely Aggregated Convolutional Networks](https://arxiv.org/abs/1801.05895) [[Code](https://github.com/Lyken17/SparseNet)]
- 2018-ECCV-[Ask, acquire, and attack: Data-free uap generation using class impressions](http://openaccess.thecvf.com/content_ECCV_2018/html/Konda_Reddy_Mopuri_Ask_Acquire_and_ECCV_2018_paper.html)
- 2018-ECCV-[Netadapt: Platform-aware neural network adaptation for mobile applications](http://openaccess.thecvf.com/content_ECCV_2018/html/Tien-Ju_Yang_NetAdapt_Platform-Aware_Neural_ECCV_2018_paper.html)
- 2018-ECCV-[Clustering Convolutional Kernels to Compress Deep Neural Networks](https://link.springer.com/content/pdf/10.1007%2F978-3-030-01237-3_14.pdf)
- 2018-ECCV-[Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm](http://openaccess.thecvf.com/content_ECCV_2018/html/zechun_liu_Bi-Real_Net_Enhancing_ECCV_2018_paper.html)
- 2018-ECCV-[Extreme Network Compression via Filter Group Approximation](http://openaccess.thecvf.com/content_ECCV_2018/html/Bo_Peng_Extreme_Network_Compression_ECCV_2018_paper.html)
- 2018-ECCV-[Convolutional Networks with Adaptive Inference Graphs](http://openaccess.thecvf.com/content_ECCV_2018/html/Andreas_Veit_Convolutional_Networks_with_ECCV_2018_paper.html)
- 2018-ECCV-[SkipNet: Learning Dynamic Routing in Convolutional Networks](https://arxiv.org/abs/1711.09485) [[Code](https://github.com/ucbdrive/skipnet)]
- 2018-ECCV-[Value-aware Quantization for Training and Inference of Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Eunhyeok_Park_Value-aware_Quantization_for_ECCV_2018_paper.html)
- 2018-ECCV-[LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks](http://openaccess.thecvf.com/content_ECCV_2018/html/Dongqing_Zhang_Optimized_Quantization_for_ECCV_2018_paper.html)
- 2018-ECCV-[AMC: AutoML for Model Compression and Acceleration on Mobile Devices](http://openaccess.thecvf.com/content_ECCV_2018/html/Yihui_He_AMC_Automated_Model_ECCV_2018_paper.html)
- 2018-ECCV-[Piggyback: Adapting a single network to multiple tasks by learning to mask weights](http://openaccess.thecvf.com/content_ECCV_2018/html/Arun_Mallya_Piggyback_Adapting_a_ECCV_2018_paper.html)
- ...

### 2018-NIPS

- 2018-NIPS-[Discrimination-aware Channel Pruning for Deep Neural Networks](http://papers.nips.cc/paper/7367-discrimination-aware-channel-pruning-for-deep-neural-networks.pdf)
- 2018-NIPS-[Frequency-Domain Dynamic Pruning for Convolutional Neural Networks](http://papers.nips.cc/paper/7382-frequency-domain-dynamic-pruning-for-convolutional-neural-networks.pdf)
- 2018-NIPS-[ChannelNets: Compact and Efficient Convolutional Neural Networks via Channel-Wise Convolutions](http://papers.nips.cc/paper/7766-channelnets-compact-and-efficient-convolutional-neural-networks-via-channel-wise-convolutions.pdf)
- 2018-NIPS-[DropBlock: A regularization method for convolutional networks](http://papers.nips.cc/paper/8271-dropblock-a-regularization-method-for-convolutional-networks)
- 2018-NIPS-[Constructing fast network through deconstruction of convolution](http://papers.nips.cc/paper/7835-constructing-fast-network-through-deconstruction-of-convolution)
- 2018-NIPS-[Learning Versatile Filters for Efficient Convolutional Neural Networks](https://papers.nips.cc/paper/7433-learning-versatile-filters-for-efficient-convolutional-neural-networks) [[Code](https://github.com/NoahEC/Versatile-Filters)]
- 2018-NIPS-[Moonshine: Distilling with cheap convolutions](http://papers.nips.cc/paper/7553-moonshine-distilling-with-cheap-convolutions)
- 2018-NIPS-[HitNet: hybrid ternary recurrent neural network](http://papers.nips.cc/paper/7341-hitnet-hybrid-ternary-recurrent-neural-network)
- 2018-NIPS-[FastGRNN: A Fast, Accurate, Stable and Tiny Kilobyte Sized Gated Recurrent Neural Network](http://papers.nips.cc/paper/8116-fastgrnn-a-fast-accurate-stable-and-tiny-kilobyte-sized-gated-recurrent-neural-network)
- 2018-NIPS-[Training DNNs with Hybrid Block Floating Point](http://papers.nips.cc/paper/7327-training-dnns-with-hybrid-block-floating-point)
- 2018-NIPS-[Reversible Recurrent Neural Networks](http://papers.nips.cc/paper/8117-reversible-recurrent-neural-networks)
- 2018-NIPS-[Synaptic Strength For Convolutional Neural Network](http://papers.nips.cc/paper/8218-synaptic-strength-for-convolutional-neural-network)
- 2018-NIPS-[Learning sparse neural networks via sensitivity-driven regularization](http://papers.nips.cc/paper/7644-learning-sparse-neural-networks-via-sensitivity-driven-regularization)
- 2018-NIPS-[Multi-Task Zipping via Layer-wise Neuron Sharing](http://papers.nips.cc/paper/7841-multi-task-zipping-via-layer-wise-neuron-sharing)
- 2018-NIPS-[A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication](http://papers.nips.cc/paper/7519-a-linear-speedup-analysis-of-distributed-deep-learning-with-sparse-and-quantized-communication)
- 2018-NIPS-[Gradient Sparsification for Communication-Efficient Distributed Optimization](http://papers.nips.cc/paper/7405-gradient-sparsification-for-communication-efficient-distributed-optimization)
- 2018-NIPS-[GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training](http://papers.nips.cc/paper/7759-gradiveq-vector-quantization-for-bandwidth-efficient-gradient-aggregation-in-distributed-cnn-training)
- 2018-NIPS-[ATOMO: Communication-efficient Learning via Atomic Sparsification](http://papers.nips.cc/paper/8191-atomo-communication-efficient-learning-via-atomic-sparsification)
- 2018-NIPS-[Norm matters: efficient and accurate normalization schemes in deep networks](http://papers.nips.cc/paper/7485-norm-matters-efficient-and-accurate-normalization-schemes-in-deep-networks)
- 2018-NIPS-[Sparsified SGD with memory](http://papers.nips.cc/paper/7697-sparsified-sgd-with-memory)
- 2018-NIPS-[Pelee: A Real-Time Object Detection System on Mobile Devices](http://papers.nips.cc/paper/7466-pelee-a-real-time-object-detection-system-on-mobile-devices)
- 2018-NIPS-[Scalable methods for 8-bit training of neural networks](http://papers.nips.cc/paper/7761-scalable-methods-for-8-bit-training-of-neural-networks)
- 2018-NIPS-[TETRIS: TilE-matching the TRemendous Irregular Sparsity](http://papers.nips.cc/paper/7666-tetris-tile-matching-the-tremendous-irregular-sparsity)
- 2018-NIPS-[Training deep neural networks with 8-bit floating point numbers](http://papers.nips.cc/paper/7994-training-deep-neural-networks-with-8-bit-floating-point-numbers)
- 2018-NIPS-[Multiple instance learning for efficient sequential data classification on resource-constrained devices](http://papers.nips.cc/paper/8292-multiple-instance-learning-for-efficient-sequential-data-classification-on-resource-constrained-devices)
- 2018-NIPSw-[Pruning neural networks: is it time to nip it in the bud?](https://openreview.net/forum?id=r1lbgwFj5m)
- 2018-NIPSwb-[Rethinking the Value of Network Pruning](https://openreview.net/forum?id=r1eLk2mKiX) [[2019 ICLR version](https://openreview.net/forum?id=rJlnB3C5Ym)]
- 2018-NIPSw-[Structured Pruning for Efficient ConvNets via Incremental Regularization](https://openreview.net/forum?id=S1e_xM7_iQ) [[2019 IJCNN version](https://arxiv.org/abs/1804.09461)] [[Code](https://github.com/MingSun-Tse/Caffe_IncReg)]
- 2018-NIPSw-[Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling](https://openreview.net/forum?id=B1eHgu-Fim)
- 2018-NIPSw-[Learning Sparse Networks Using Targeted Dropout](https://arxiv.org/abs/1905.13678) [[OpenReview](https://openreview.net/forum?id=HkghWScuoQ)] [[Code](https://github.com/for-ai/TD)]
- 2018-NIPS-BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
- 2018-NIPS-Paraphrasing Complex Network: Network Compression via Factor Transfer
- ...
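
The distributed-training entries above ("Gradient Sparsification...", "Sparsified SGD with memory", ATOMO) compress communication rather than the model. The recurring primitive is top-k gradient sparsification with error feedback, sketched here; the ratio is illustrative:

```python
import torch

def topk_with_memory(grad: torch.Tensor, memory: torch.Tensor, ratio: float = 0.01):
    # Send only the largest entries; carry the residual to the next step.
    acc = grad + memory
    k = max(1, int(acc.numel() * ratio))
    flat = acc.flatten()
    idx = flat.abs().topk(k).indices
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = (flat - sparse).view_as(grad)
    return sparse.view_as(grad), residual
```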

### 2018-ICML

- 2018-ICML-[Compressing Neural Networks using the Variational Information Bottleneck](http://proceedings.mlr.press/v80/dai18d.html)
- 2018-ICML-[DCFNet: Deep Neural Network with Decomposed Convolutional Filters](http://proceedings.mlr.press/v80/qiu18a.html)
- 2018-ICML-[Deep k-Means Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions](http://proceedings.mlr.press/v80/wu18h.html)
- 2018-ICML-[Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization](http://proceedings.mlr.press/v80/wu18d.html)
- 2018-ICML-[High Performance Zero-Memory Overhead Direct Convolutions](http://proceedings.mlr.press/v80/zhang18d.html)
- 2018-ICML-[Kronecker Recurrent Units](http://proceedings.mlr.press/v80/jose18a.html)
- 2018-ICML-[Weightless: Lossy weight encoding for deep neural network compression](http://proceedings.mlr.press/v80/reagan18a.html)
- 2018-ICML-[StrassenNets: Deep learning with a multiplication budget](http://proceedings.mlr.press/v80/tschannen18a.html)
- 2018-ICML-[Learning Compact Neural Networks with Regularization](http://proceedings.mlr.press/v80/oymak18a.html)
- 2018-ICML-[WSNet: Compact and Efficient Networks Through Weight Sampling](http://proceedings.mlr.press/v80/jin18d.html)
- 2018-ICML-[Gradually Updated Neural Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1711.09280) [[Code](https://github.com/joe-siyuan-qiao/GUNN)]
- ...
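
"Deep k-Means" above compresses by clustering weights so that many connections share one stored value. A toy version of k-means weight sharing; the cluster count, initialization, and fixed iteration budget are illustrative:

```python
import torch

def kmeans_weight_sharing(w: torch.Tensor, k: int = 16, iters: int = 10):
    # Cluster all weights of a layer into k shared values.
    flat = w.detach().flatten()
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(iters):
        assign = (flat.unsqueeze(1) - centroids.unsqueeze(0)).abs().argmin(dim=1)
        for j in range(k):
            members = flat[assign == j]
            if members.numel() > 0:
                centroids[j] = members.mean()
    assign = (flat.unsqueeze(1) - centroids.unsqueeze(0)).abs().argmin(dim=1)
    # Storage: k float centroids plus a small integer index per weight.
    return centroids[assign].view_as(w), centroids, assign
```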

### 2018-ICLR

- 2018-ICLRo-[Training and Inference with Integers in Deep Neural Networks](https://openreview.net/forum?id=HJGXzmspb)
- 2018-ICLR-[Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers](https://openreview.net/forum?id=HJ94fqApW)
- 2018-ICLR-[N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning](https://openreview.net/forum?id=B1hcZZ-AW)
- 2018-ICLR-[Model compression via distillation and quantization](https://openreview.net/forum?id=S1XolQbRW)
- 2018-ICLR-[Towards Image Understanding from Deep Compression Without Decoding](https://openreview.net/forum?id=HkXWCMbRW)
- 2018-ICLR-[Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training](https://openreview.net/forum?id=SkhQHMW0W)
- 2018-ICLR-[Mixed Precision Training of Convolutional Neural Networks using Integer Operations](https://openreview.net/forum?id=H135uzZ0-)
- 2018-ICLR-[Mixed Precision Training](https://openreview.net/forum?id=r1gs9JgRZ)
- 2018-ICLR-[Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy](https://openreview.net/forum?id=B1ae1lZRb)
- 2018-ICLR-[Loss-aware Weight Quantization of Deep Networks](https://openreview.net/forum?id=BkrSv0lA-)
- 2018-ICLR-[Alternating Multi-bit Quantization for Recurrent Neural Networks](https://openreview.net/forum?id=S19dR9x0b)
- 2018-ICLR-[Adaptive Quantization of Neural Networks](https://openreview.net/forum?id=SyOK1Sg0W)
- 2018-ICLR-[Variational Network Quantization](https://openreview.net/forum?id=ry-TW-WAb)
- 2018-ICLR-[Espresso: Efficient Forward Propagation for Binary Deep Neural Networks](https://openreview.net/forum?id=Sk6fD5yCb&noteId=Sk6fD5yCb)
- 2018-ICLR-[Learning to share: Simultaneous parameter tying and sparsification in deep learning](https://openreview.net/forum?id=rypT3fb0b&noteId=rkwxPE67M)
- 2018-ICLR-[Learning Sparse Neural Networks through L0 Regularization](https://arxiv.org/abs/1712.01312)
- 2018-ICLR-[WRPN: Wide Reduced-Precision Networks](https://openreview.net/forum?id=B1ZvaaeAZ&noteId=B1ZvaaeAZ)
- 2018-ICLR-[Deep rewiring: Training very sparse deep networks](https://openreview.net/forum?id=BJ_wN01C-&noteId=BJ_wN01C-)
- 2018-ICLR-[Efficient sparse-winograd convolutional neural networks](https://arxiv.org/pdf/1802.06367.pdf) [[Code](https://github.com/xingyul/Sparse-Winograd-CNN)]
- 2018-ICLR-[Learning Intrinsic Sparse Structures within Long Short-term Memory](https://arxiv.org/pdf/1709.05027)
- 2018-ICLR-[Multi-scale dense networks for resource efficient image classification](https://arxiv.org/abs/1703.09844)
- 2018-ICLR-[Compressing Word Embedding via Deep Compositional Code Learning](https://openreview.net/forum?id=BJRZzFlRb&noteId=BJRZzFlRb)
- 2018-ICLR-[Learning Discrete Weights Using the Local Reparameterization Trick](https://openreview.net/forum?id=BySRH6CpW)
- 2018-ICLR-[Training wide residual networks for deployment using a single bit for each weight](https://openreview.net/forum?id=rytNfI1AZ&noteId=rytNfI1AZ)
- 2018-ICLR-[The High-Dimensional Geometry of Binary Neural Networks](https://openreview.net/forum?id=B1IDRdeCW&noteId=B1IDRdeCW)
- 2018-ICLRw-[To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression](https://openreview.net/forum?id=Sy1iIDkPM) (Similar topic: [2018-NIPSw-nip in the bud](https://openreview.net/forum?id=r1lbgwFj5m), [2018-NIPSw-rethink](https://openreview.net/forum?id=r1eLk2mKiX))
- 2018-ICLRw-[Systematic Weight Pruning of DNNs using Alternating Direction Method of Multipliers](https://openreview.net/forum?id=B1_u3cRUG)
- 2018-ICLRw-[Weightless: Lossy weight encoding for deep neural network compression](https://openreview.net/forum?id=rJpXxgaIG)
- 2018-ICLRw-[Variance-based Gradient Compression for Efficient Distributed Deep Learning](https://openreview.net/forum?id=Sy6hd7kvM)
- 2018-ICLRw-[Stacked Filters Stationary Flow For Hardware-Oriented Acceleration Of Deep Convolutional Neural Networks](https://openreview.net/forum?id=HkeAoQQHM)
- 2018-ICLRw-[Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks](https://openreview.net/forum?id=BJbtuRRLM)
- 2018-ICLRw-[Accelerating Neural Architecture Search using Performance Prediction](https://openreview.net/forum?id=HJqk3N1vG)
- 2018-ICLRw-[Nonlinear Acceleration of CNNs](https://openreview.net/forum?id=HkNpF_kDM)
- 2018-ICLRw-[Attention-Based Guided Structured Sparsity of Deep Neural Networks](https://arxiv.org/pdf/1802.09902v4.pdf) [[Code](https://github.com/astorfi/attention-guided-sparsity)]
- 2018-ICLR-Stochastic activation pruning for robust adversarial defense
- ...
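
Finally, the mixed-precision entries above ("Mixed Precision Training" and the integer-training papers) rely on loss scaling to keep small fp16 gradients from flushing to zero. A static-scale sketch; production code would use dynamic scaling such as `torch.cuda.amp.GradScaler`:

```python
import torch

def scaled_step(loss: torch.Tensor, optimizer: torch.optim.Optimizer,
                scale: float = 1024.0) -> None:
    # Scale the loss before backward, unscale gradients before the step.
    (loss * scale).backward()
    for group in optimizer.param_groups:
        for p in group["params"]:
            if p.grad is not None:
                p.grad.div_(scale)
    optimizer.step()
```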

## References

- https://github.com/MingSun-Tse/EfficientDNNs
- https://github.com/danielmcpark/awesome-pruning-acceleration
- https://github.com/csyhhu/Awesome-Deep-Neural-Network-Compression
- https://github.com/he-y/Awesome-Pruning