# awesome-model-compression

This list collects papers (mainly from arxiv.org) about model compression, grouped into the following categories (a small illustrative sketch follows the list):
**Structure**;
**Distillation**;
**Binarization**;
**Quantization**;
**Pruning**;
**Low Rank**.
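
To make these categories concrete, here is a minimal NumPy sketch (not taken from any paper below; all names and values are illustrative) of two of the simplest ideas: magnitude pruning and uniform 8-bit quantization of a weight tensor.

```python
import numpy as np

# Toy weight matrix standing in for a trained layer (hypothetical values).
w = np.random.randn(256, 256).astype(np.float32)

# Pruning: zero out the 90% of weights with the smallest magnitudes.
threshold = np.quantile(np.abs(w), 0.9)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# Quantization: map surviving weights to 8-bit integers with a per-tensor
# scale, then dequantize to inspect the approximation error.
scale = np.abs(w_pruned).max() / 127.0
w_q = np.clip(np.round(w_pruned / scale), -127, 127).astype(np.int8)
w_deq = w_q.astype(np.float32) * scale

print("sparsity:", (w_pruned == 0).mean())
print("max quantization error:", np.abs(w_pruned - w_deq).max())
```

The papers below refine both steps (when to prune, how to retrain, how to pick scales or codebooks), but the size/accuracy trade-off is the one this toy example exposes.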

---
# CONTENT
[PAPERS](https://github.com/ChanChiChoi/awesome-model-compression#papers)
[BOOKS](https://github.com/ChanChiChoi/awesome-model-compression#books)
[BLOGS & ARTICLES](https://github.com/ChanChiChoi/awesome-model-compression#blogs--atricles)
[LIBRARIES](https://github.com/ChanChiChoi/awesome-model-compression#libraries)
[PROJECTS](https://github.com/ChanChiChoi/awesome-model-compression#projects)
[OTHERS](https://github.com/ChanChiChoi/awesome-model-compression#others)
[REFERENCE](https://github.com/ChanChiChoi/awesome-model-compression#REFERENCE)

---
## PAPERS

### 1990
- 【Pruning】 LeCun Y, Denker J S, Solla S A. [Optimal brain damage](http://papers.nips.cc/paper/250-optimal-brain-damage.pdf) .[C]//Advances in neural information processing systems. 1990: 598-605.
- 【Distillation】[Neural Network Ensembles](https://www.researchgate.net/publication/3191841_Neural_Network_Ensembles), L.K. Hansen, P. Salamon, 1990

### 1993
- Hassibi, Babak, and David G. Stork. [Second order derivatives for network pruning: Optimal brain surgeon](http://papers.nips.cc/paper/647-second-order-derivatives-for-network-pruning-optimal-brain-surgeon.pdf) .[C]Advances in neural information processing systems. 1993.
- J. L. Holi and J. N. Hwang. [Finite precision error analysis of neural network hardware implementations]. In IJCNN-91 Seattle International Joint Conference on Neural Networks, pages 519–525, vol. 1, 1993.


### 1995
- 【Distillation】[Neural Network Ensembles, Cross Validation, and Active Learning](https://papers.nips.cc/paper/1001-neural-network-ensembles-cross-validation-and-active-learning.pdf), Andres Krogh, Jesper Vedelsby, 1995

### 1997
- [Knowledge Acquisition from Examples Via Multiple Models](https://homes.cs.washington.edu/~pedrod/papers/mlc97.pdf), Pedro Domingos, 1997

### 1998
- 【Distillation】[Combining labeled and unlabeled data with co-training](https://www.cs.cmu.edu/~avrim/Papers/cotrain.pdf), A. Blum, T. Mitchell, 1998

### 2000
- 【Distillation】[Ensemble Methods in Machine Learning](http://web.engr.oregonstate.edu/~tgd/publications/mcs-ensembles.pdf), Thomas G. Dietterich, 2000
- [Using A Neural Network to Approximate An Ensemble of Classifiers](http://axon.cs.byu.edu/papers/zeng.npl2000.pdf), Xinchuan Zeng and Tony R. Martinez, 2000

### 2001
- Suzuki, Kenji, Isao Horiba, and Noboru Sugie. [A simple neural network pruning algorithm with application to filter synthesis](https://link.springer.com/article/10.1023/A:1009639214138) .[J] Neural Processing Letters 13.1 (2001): 43-53.


### 2006
- 【Distillation】[Model Compression](http://www.cs.cornell.edu/~caruana/compression.kdd06.pdf), Rich Caruana, 2006

### 2011
- 【Quantization】 Jegou, Herve, Matthijs Douze, and Cordelia Schmid. [Product quantization for nearest neighbor search](https://hal.inria.fr/inria-00514462/document) IEEE transactions on pattern analysis and machine intelligence 33.1 (2011): 117-128.
- 【Quantization】Vanhoucke V, Senior A, Mao M Z. [Improving the speed of neural networks on CPUs](https://ai.google/research/pubs/pub37631.pdf)[J]. 2011.

### 2012
- D. Hammerstrom. [A VLSI architecture for high-performance, low-cost, on-chip learning]. In IJCNN International Joint Conference on Neural Networks, pages 537–544, vol. 2, 2012.

### 2013
- M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al. [Predicting parameters in deep learning](http://papers.nips.cc/paper/5025-predicting-parameters-in-deep-learning.pdf). In Advances in Neural Information Processing Systems, pages 2148–2156, 2013
- 【other】Mathieu M, Henaff M, LeCun Y. [Fast training of convolutional networks through FFTs](https://arxiv.org/pdf/1312.5851)[J]. arXiv preprint arXiv:1312.5851, 2013.
【code:[Maratyszcza/NNPACK](https://github.com/Maratyszcza/NNPACK)】
- [Do Deep Nets Really Need to be Deep?](https://arxiv.org/pdf/1312.6184.pdf), Lei Jimmy Ba, Rich Caruana, 2013

### 2014
- K. Hwang and W. Sung. [Fixed-point feedforward deep neural network design using weights +1, 0, and -1]. In 2014 IEEE Workshop on Signal Processing Systems (SiPS), pages 1–6. IEEE, 2014.
- M. Horowitz. [1.1 computing’s energy problem (and what we can do about it)](https://pdfs.semanticscholar.org/9476/20a1854655ed91a86b90d12695e05be85983.pdf). In Solid-State Circuits Conference Digest of Technical Papers, pages 10–14, 2014.
- Y. Chen, N. Sun, O. Temam, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, and T. Chen. [Dadiannao: A machine-learning supercomputer](http://pages.saclay.inria.fr/olivier.temam/files/eval/supercomputer.pdf). In IEEE/ACM International Symposium on Microarchitecture, pages 609–622, 2014.
- 【Distillation】Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. [Dark knowledge](http://www.ttic.edu/dl/dark14.pdf) .[C]Presented as the keynote in BayLearn 2 (2014).
- 【Low Rank】Jaderberg, Max, Andrea Vedaldi, and Andrew Zisserman. [Speeding up convolutional neural networks with low rank expansions](http://www.robots.ox.ac.uk/~vgg/publications/2014/Jaderberg14b/jaderberg14b.pdf) .[J] arXiv preprint arXiv:1405.3866 (2014).
- 【Low Rank】Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus .[Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation](https://arxiv.org/pdf/1404.0736) .[J] arXiv preprint arXiv:1404.0736
- 【Low Rank】Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, Jian Sun .[Efficient and Accurate Approximations of Nonlinear Convolutional Networks](https://arxiv.org/pdf/1411.4229) .[J] arXiv preprint arXiv:1411.4229
- 【Distillation】[Learning with Pseudo-Ensembles](https://arxiv.org/pdf/1412.4864.pdf), Philip Bachman, Ouais Alsharif, Doina Precup, 2014
- 【Structure】 Jin J, Dundar A, Culurciello E. [Flattened convolutional neural networks for feedforward acceleration](https://arxiv.org/pdf/1412.5474) .[J]. arXiv preprint arXiv:1412.5474, 2014.
- 【Quantization】Yunchao Gong, Liu Liu, Ming Yang, Lubomir Bourdev .[Compressing Deep Convolutional Networks using Vector Quantization](https://arxiv.org/pdf/1412.6115) .[J] arXiv preprint arXiv:1412.6115
- 【Distillation】Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio .[FitNets: Hints for Thin Deep Nets](https://arxiv.org/pdf/1412.6550) .[J] arXiv preprint arXiv:1412.6550
- 【Low Rank】 Lebedev V, Ganin Y, Rakhuba M, et al. [Speeding-up convolutional neural networks using fine-tuned cp-decomposition](https://arxiv.org/pdf/1412.6553) .[J]. arXiv preprint arXiv:1412.6553, 2014.
【code:[vadim-v-lebedev/cp-decomposition](https://github.com/vadim-v-lebedev/cp-decomposition); [jacobgil/pytorch-tensor-decompositions](https://github.com/jacobgil/pytorch-tensor-decompositions); [medium.com/@keremturgutlu/tensor-decomposition-fast-cnn-in-your-pocket-f03e9b2a6788](https://medium.com/@keremturgutlu/tensor-decomposition-fast-cnn-in-your-pocket-f03e9b2a6788)】
- 【Quantization】Courbariaux M, Bengio Y, David J P. [Training deep neural networks with low precision multiplications](https://arxiv.org/pdf/1412.7024.pdf)[J]. arXiv preprint arXiv:1412.7024, 2014.
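
Several of the 2014 low-rank entries above (Jaderberg et al.; Denton et al.; Lebedev et al.) share one idea: approximate a filter by a sum of a few separable terms so that one expensive convolution becomes a sequence of cheap ones. A schematic form of the decomposition (indexing conventions vary across the cited papers):

```latex
% Rank-R separable approximation of a k-by-k kernel W (schematic):
W \;\approx\; \sum_{r=1}^{R} a_r \, b_r^{\top},
\qquad a_r \in \mathbb{R}^{k},\; b_r \in \mathbb{R}^{k}
```

Each rank-1 term is a k×1 convolution followed by a 1×k convolution, so the per-output cost drops from roughly k² to 2kR multiplications, a saving whenever R is well below k/2.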

### 2015
- 【Hardware】Dally W. [High-performance hardware for machine learning](https://media.nips.cc/Conferences/2015/tutorialslides/Dally-NIPS-Tutorial-2015.pdf)[J]. NIPS Tutorial, 2015.
- 【other】Liu B, Wang M, Foroosh H, et al. [Sparse convolutional neural networks](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liu_Sparse_Convolutional_Neural_2015_CVPR_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 806-814.
- Zhang. [Optimizing fpga-based accelerator design for deep convolutional neural networks]. In Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA '15, 2015.
- 【System】Lane, Nicholas D., et al. [An Early Resource Characterization of Deep Learning on Wearables, Smartphones and Internet-of-Things Devices](http://niclane.org/pubs/iotapp15_early.pdf) .[C]Proceedings of the 2015 international workshop on internet of things towards applications. ACM, 2015.
- 【Low Rank】 Yang Z, Moczulski M, Denil M, et al. [Deep fried convnets](http://openaccess.thecvf.com/content_iccv_2015/papers/Yang_Deep_Fried_Convnets_ICCV_2015_paper.pdf) .[C]//Proceedings of the IEEE International Conference on Computer Vision. 2015: 1476-1483.
- 【Structure】 He K, Sun J. [Convolutional neural networks at constrained time cost](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/He_Convolutional_Neural_Networks_2015_CVPR_paper.pdf) .[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 5353-5360.
- 【Quantization】 Courbariaux, Matthieu, Yoshua Bengio, and Jean-Pierre David. [Binaryconnect: Training deep neural networks with binary weights during propagations.](http://papers.nips.cc/paper/5647-binaryconnect-training-deep-neural-networks-with-binary-weights-during-propagations.pdf) Advances in neural information processing systems. 2015.
- 【Quantization】Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan .[Deep Learning with Limited Numerical Precision](https://arxiv.org/pdf/1502.02551) .[J] arXiv preprint arXiv:1502.02551
- 【Distillation】Geoffrey Hinton, Oriol Vinyals, Jeff Dean .[Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531) .[J] arXiv preprint arXiv:1503.02531
- Z. Cheng, D. Soudry, Z. Mao, and Z. Lan. [Training binary multilayer neural networks for image classification using expectation backpropagation](https://arxiv.org/pdf/1503.03562). arXiv preprint arXiv:1503.03562, 2015.
- 【Distillation】[Recurrent Neural Network Training with Dark Knowledge Transfer](https://arxiv.org/pdf/1505.04630.pdf), Zhiyuan Tang, Dong Wang, Zhiyong Zhang, 2015
- 【Low Rank】Xiangyu Zhang, Jianhua Zou, Kaiming He, Jian Sun .[Accelerating Very Deep Convolutional Networks for Classification and Detection](https://arxiv.org/pdf/1505.06798) .[J] arXiv preprint arXiv:1505.06798
- 【Pruning】Song Han, Jeff Pool, John Tran, William J. Dally .[Learning both Weights and Connections for Efficient Neural Networks](https://arxiv.org/pdf/1506.02626) .[J] arXiv preprint arXiv:1506.02626
【code:[jack-willturner/DeepCompression-PyTorch](https://github.com/jack-willturner/DeepCompression-PyTorch)】
- 【Distillation】[Cross Modal Distillation for Supervision Transfer](https://arxiv.org/pdf/1507.00448), Saurabh Gupta, Judy Hoffman, Jitendra Malik, 2015
- Srinivas, Suraj, and R. Venkatesh Babu. [Data-free parameter pruning for deep neural networks](https://arxiv.org/abs/1507.06149) .[J] arXiv preprint arXiv:1507.06149
- 【Pruning】Song Han, Huizi Mao, William J. Dally .[Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding](https://arxiv.org/pdf/1510.00149) .[J] arXiv preprint arXiv:1510.00149
【code:[songhan/Deep-Compression-AlexNet](https://github.com/songhan/Deep-Compression-AlexNet)】
- 【Distillation】[Distilling Model Knowledge](https://arxiv.org/pdf/1510.02437.pdf), George Papamakarios, 2015
- Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio. [Neural networks with few multiplications](https://arxiv.org/pdf/1510.03009). arXiv preprint arXiv:1510.03009, 2015.
- 【Distillation】[Unifying distillation and privileged information](https://arxiv.org/pdf/1511.03643), David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik, 2015
- T. Dettmers. [8-bit approximations for parallelism in deep learning](https://arxiv.org/pdf/1511.04561). arXiv preprint arXiv:1511.04561, 2015.
- 【Distillation】[Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks](https://arxiv.org/pdf/1511.04508.pdf), Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami, 2015
- 【Distillation】[Heterogeneous Knowledge Transfer in Video Emotion Recognition, Attribution and Summarization](https://arxiv.org/pdf/1511.04798), Baohan Xu, Yanwei Fu, Yu-Gang Jiang, Boyang Li, Leonid Sigal, 2015
- 【other】Judd P, Albericio J, Hetherington T, et al. [Reduced-precision strategies for bounded memory in deep neural nets](https://arxiv.org/pdf/1511.05236)[J]. arXiv preprint arXiv:1511.05236, 2015.
- 【Distillation】Tianqi Chen, Ian Goodfellow, Jonathon Shlens .[Net2Net: Accelerating Learning via Knowledge Transfer](https://arxiv.org/pdf/1511.05641) .[J] arXiv preprint arXiv:1511.05641
- 【Low Rank】Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, Weinan E .[Convolutional neural networks with low-rank regularization](https://arxiv.org/pdf/1511.06067) .[J] arXiv preprint arXiv:1511.06067
- 【Low Rank】Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, Dongjun Shin .[Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications](https://arxiv.org/pdf/1511.06530) .[J] arXiv preprint arXiv:1511.06530
- 【System】Seyyed Salar Latifi Oskouei, Hossein Golestani, Matin Hashemi, Soheil Ghiasi .[CNNdroid: GPU-Accelerated Execution of Trained Deep Convolutional Neural Networks on Android](https://arxiv.org/pdf/1511.07376) .[J] arXiv preprint arXiv:1511.07376
- 【Structure】Amjad Almahairi, Nicolas Ballas, Tim Cooijmans, Yin Zheng, Hugo Larochelle, Aaron Courville .[Dynamic Capacity Networks](https://arxiv.org/pdf/1511.07838) .[J] arXiv preprint arXiv:1511.07838
- 【Quantization】Sungho Shin, Kyuyeon Hwang, Wonyong Sung .[Fixed-Point Performance Analysis of Recurrent Neural Networks](https://arxiv.org/pdf/1512.01322) .[J] arXiv preprint arXiv:1512.01322
- 【Quantization】Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng .[Quantized Convolutional Neural Networks for Mobile Devices](https://arxiv.org/pdf/1512.06473) .[J] arXiv preprint arXiv:1512.06473
【code:[jiaxiang-wu/quantized-cnn](https://github.com/jiaxiang-wu/quantized-cnn)】
- 【Distillation】[Learning Using Privileged Information: Similarity Control and Knowledge Transfer](http://www.jmlr.org/papers/volume16/vapnik15b/vapnik15b.pdf), Vladimir Vapnik, Rauf Izmailov, 2015
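
The distillation entries above (notably Hinton, Vinyals & Dean, arXiv:1503.02531) all train a small student to match a large teacher's softened outputs. A sketch of the usual loss, with z_T and z_S the teacher/student logits, τ a temperature, H cross-entropy, y the ground-truth label, and α a mixing weight:

```latex
\mathcal{L} \;=\; \alpha\,\tau^{2}\, H\!\big(\mathrm{softmax}(z_T/\tau),\, \mathrm{softmax}(z_S/\tau)\big)
\;+\; (1-\alpha)\, H\!\big(y,\, \mathrm{softmax}(z_S)\big)
```

The τ² factor keeps the gradient magnitudes of the soft and hard terms comparable as the temperature changes.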

### 2016
- 【other】Li D, Wang X, Kong D, et al. [DeepRebirth: A General Approach for Accelerating Deep Neural Network Execution on Mobile Devices](https://openreview.net/pdf?id=SkwSJ99ex)[J]. 2016.
- 【other】Lavin A, Gray S. [Fast algorithms for convolutional neural networks](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Lavin_Fast_Algorithms_for_CVPR_2016_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 4013-4021.
- 【Distillation】Luo, Ping, et al. [MobileID: Face Model Compression by Distilling Knowledge from Neurons](https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/11977) Thirtieth AAAI Conference on Artificial Intelligence. 2016.
- Y. Wang, J. Xu, Y. Han, H. Li, and X. Li. [Deepburning: automatic generation of fpga-based learning accelerators for the neural network family]. In Design Automation Conference, page 110, 2016.
- 【Pruning】V. Lebedev and V. Lempitsky. [Fast convnets using groupwise brain damage](http://openaccess.thecvf.com/content_cvpr_2016/papers/Lebedev_Fast_ConvNets_Using_CVPR_2016_paper.pdf). In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2554– 2564, 2016.
- 【Pruning】 Sun Y, Wang X, Tang X. [Sparsifying neural network connections for face recognition](http://openaccess.thecvf.com/content_cvpr_2016/papers/Sun_Sparsifying_Neural_Network_CVPR_2016_paper.pdf) .[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 4856-4864.
- 【Pruning】Babaeizadeh, Mohammad, Paris Smaragdis, and Roy H. Campbell. [A Simple yet Effective Method to Prune Dense Layers of Neural Networks](https://openreview.net/forum?id=HJIY0E9ge&noteId=HJIY0E9ge) (2016).
- S. Zhang, Z. Du, L. Zhang, H. Lan, S. Liu, L. Li, Q. Guo, T. Chen, and Y. Chen. [Cambricon-x: An accelerator for sparse neural networks](http://cslt.riit.tsinghua.edu.cn/mediawiki/images/f/f1/Cambricon-X.pdf). In IEEE/ACM International Symposium on Microarchitecture, pages 1–12, 2016.
- S. I. Venieris and C. S. Bouganis. [fpgaconvnet: A framework for mapping convolutional neural networks on fpgas](https://spiral.imperial.ac.uk/bitstream/10044/1/44130/2/FCCM2016_camera_ready.pdf). In IEEE International Symposium on Field-Programmable Custom Computing Machines, pages 40–47, 2016.
- S. Liu, Z. Du, J. Tao, D. Han, T. Luo, Y. Xie, Y. Chen, and T. Chen. [Cambricon: An instruction set architecture for neural networks]. SIGARCH Comput. Archit. News, 44(3), June 2016.
- Suda. [Throughput-optimized opencl-based fpga accelerator for large-scale convolutional neural networks](http://isfpga.org/fpga2016/index_files/Slides/1_1.pdf). In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA ’16, 2016.
- P. Wang and J. Cheng. [Accelerating convolutional neural networks for mobile applications](http://ir.ia.ac.cn/bitstream/173211/20148/1/Accelerating%20Convolutional%20Neural%20Networks%20for%20Mobile%20Applications.pdf). In Proceedings of the 2016 ACM on Multimedia Conference, pages 541–545. ACM, 2016.
- Qiu. [Going deeper with embedded fpga platform for convolutional neural network](http://www.isfpga.org/fpga2016/index_files/Slides/1_2.pdf). In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA '16, 2016.
- L. Xia, T. Tang, W. Huangfu, M. Cheng, X. Yin, B. Li, Y. Wang, and H. Yang. [Switched by input: Power efficient structure for rram-based convolutional neural network](http://nicsefc.ee.tsinghua.edu.cn/media/publications/2016/DAC16_197.pdf). In Design Automation Conference, page 125, 2016.
- M. Alwani, H. Chen, M. Ferdman, and P. A. Milder. [Fused-layer CNN accelerators](https://pdfs.semanticscholar.org/f30c/0a35edeaa8799a30851a74b974d293f9f3bf.pdf). In MICRO, 2016.
- K. Kim, J. Kim, J. Yu, J. Seo, J. Lee, and K. Choi. [Dynamic energy-accuracy trade-off using stochastic computing in deep neural networks]. In Design Automation Conference, page 124, 2016.
- J. Zhu, Z. Qian, and C. Y. Tsui. [Lradnn: High-throughput and energy-efficient deep neural network accelerator using low rank approximation](http://www.aspdac.com/aspdac2016/technical_program/pdf/6B-4.pdf). In Asia and South Pacific Design Automation Conference, pages 581–586, 2016.
- J. Albericio, P. Judd, T. Hetherington, T. Aamodt, N. E. Jerger, and A. Moshovos. [Cnvlutin: Ineffectual-neuron-free deep neural network computing](https://www.ece.ubc.ca/~aamodt/publications/papers/Cnvlutin.ISCA2016.pdf). In International Symposium on Computer Architecture, pages 1–13, 2016.
- H. Sharma, J. Park, D. Mahajan, E. Amaro, J. K. Kim, C. Shao, A. Mishra, and H. Esmaeilzadeh. [From high-level deep neural models to fpgas](https://www.cc.gatech.edu/~hesmaeil/doc/paper/2016-micro-dnn_weaver.pdf). In IEEE/ACM International Symposium on Microarchitecture, pages 1–12, 2016.
- D. Kim, J. Kung, S. Chai, S. Yalamanchili, and S. Mukhopadhyay. [Neurocube: A programmable digital neuromorphic architecture with high-density 3d memory](http://isca2016.eecs.umich.edu/wp-content/uploads/2016/07/6-2.pdf). In International Symposium on Computer Architecture, pages 380–392, 2016.
- C. Zhang, D. Wu, J. Sun, G. Sun, G. Luo, and J. Cong.[Energy-efficient cnn implementation on a deeply pipelined fpga cluster](http://vast.cs.ucla.edu/sites/default/files/publications/islped_chen.pdf). In Proceedings of the 2016 International Symposium on Low Power Electronics and Design, ISLPED ’16, 2016.
- C. Zhang, Z. Fang, P. Zhou, P. Pan, and J. Cong. [Caffeine: towards uniformed representation and acceleration for deep convolutional neural networks](https://iceory.github.io/2018/04/25/caffeine-slides/Caffeine.pdf). In International Conference on Computer-Aided Design, page 12, 2016.
- 【Quantization】 Lin, Darryl, Sachin Talathi, and Sreekanth Annapureddy. [Fixed point quantization of deep convolutional networks.](http://www.jmlr.org/proceedings/papers/v48/linb16.pdf) International Conference on Machine Learning. 2016.
- 【Binarization】Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio .[Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1](https://arxiv.org/pdf/1602.02830) .[J] arXiv preprint arXiv:1602.02830
【code:[itayhubara/BinaryNet.pytorch](https://github.com/itayhubara/BinaryNet.pytorch); [itayhubara/BinaryNet.tf](https://github.com/itayhubara/BinaryNet.tf)】
- 【Binarization】Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi .[XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks](https://arxiv.org/pdf/1603.05279) .[J] arXiv preprint arXiv:1603.05279
【code:[allenai/XNOR-Net](https://github.com/allenai/XNOR-Net)】
- 【System】Huynh, Loc Nguyen, Rajesh Krishna Balan, and Youngki Lee. [DeepSense: A GPU-based deep convolutional neural network framework on commodity mobile devices](http://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=4278&context=sis_research) Proceedings of the 2016 Workshop on Wearable Systems and Applications. ACM, 2016.
- 【System】Lane, Nicholas D., et al. [DeepX: A Software Accelerator for Low-Power Deep Learning Inference on Mobile Devices](http://niclane.org/pubs/deepx_ipsn.pdf) .[C]Proceedings of the 15th International Conference on Information Processing in Sensor Networks. IEEE Press, 2016.
- 【System】Lane, Nicholas D., et al. [DXTK: Enabling Resource-efficient Deep Learning on Mobile and Embedded Devices with the DeepX Toolkit](http://niclane.org/pubs/dxtk_mobicase.pdf) .[J]MobiCASE. 2016.
- 【System】Han, Seungyeop, et al. [MCDNN: An Approximation-Based Execution Framework for Deep Stream Processing Under Resource Constraints](http://haneul.github.io/papers/mcdnn.pdf) .[C]Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2016.
- 【System】Bhattacharya, Sourav, and Nicholas D. Lane. [Sparsification and Separation of Deep Learning Layers for Constrained Resource Inference on Wearables](http://niclane.org/pubs/sparsesep_sensys.pdf) .[C]Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. ACM, 2016.
- M. Kim and P. Smaragdis. [Bitwise neural networks](https://arxiv.org/pdf/1601.06071). arXiv preprint arXiv:1601.06071, 2016.
- 【System】Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, William J. Dally .[EIE: Efficient Inference Engine on Compressed Deep Neural Network](https://arxiv.org/pdf/1602.01528) .[J] arXiv preprint arXiv:1602.01528
- 【Structure】【SqueezeNet】Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer .[SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size](https://arxiv.org/pdf/1602.07360) .[J] arXiv preprint arXiv:1602.07360
- D. Miyashita, E. H. Lee, and B. Murmann. [Convolutional neural networks using logarithmic data representation](https://arxiv.org/pdf/1603.01025). arXiv preprint arXiv:1603.01025, 2016.
- 【Distillation】[Do deep convolutional nets really need to be deep and convolutional?](https://arxiv.org/pdf/1603.05691.pdf), Gregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Rich Caruana, Abdelrahman Mohamed, Matthai Philipose, Matt Richardson, 2016
- 【Structure】Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Weinberger .[Deep Networks with Stochastic Depth](https://arxiv.org/pdf/1603.09382) .[J] arXiv preprint arXiv:1603.09382
- 【Distillation】[Adapting Models to Signal Degradation using Distillation](https://arxiv.org/abs/1604.00433), Jong-Chyi Su, Subhransu Maji,2016
- F. Li, B. Zhang, and B. Liu. [Ternary weight networks](https://arxiv.org/pdf/1605.04711) . arXiv preprint arXiv:1605.04711, 2016.
【code:[fengfu-chris/caffe-twns](https://github.com/fengfu-chris/caffe-twns)】
- 【Quantization】 Gysel, Philipp. [Ristretto: Hardware-oriented approximation of convolutional neural networks.](https://arxiv.org/pdf/1604.03168.pdf) arXiv preprint arXiv:1605.06402 (2016).
- 【Structure】Roi Livni, Daniel Carmon, Amir Globerson .[Learning Infinite-Layer Networks: Without the Kernel Trick](https://arxiv.org/pdf/1606.05316) .[J] arXiv preprint arXiv:1606.05316
- 【Quantization】Zen H, Agiomyrgiannakis Y, Egberts N, et al. [Fast, compact, and high quality LSTM-RNN based statistical parametric speech synthesizers for mobile devices](https://arxiv.org/pdf/1606.06061)[J]. arXiv preprint arXiv:1606.06061, 2016.
- 【Binarization】Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou .[DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients](https://arxiv.org/pdf/1606.06160) .[J] arXiv preprint arXiv:1606.06160
【code:[tensorpack/DoReFa-Net](https://github.com/tensorpack/tensorpack/tree/master/examples/DoReFa-Net)】
- 【Distillation】Yoon Kim, Alexander M. Rush .[Sequence-Level Knowledge Distillation](https://arxiv.org/pdf/1606.07947) .[J] arXiv preprint arXiv:1606.07947
- 【Structure】【Pruning】Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro, William J. Dally .[DSD: Dense-Sparse-Dense Training for Deep Neural Networks](https://arxiv.org/pdf/1607.04381) .[J] arXiv preprint arXiv:1607.04381
【code:[songhan.github.io/DSD](https://songhan.github.io/DSD)】
- 【Quantization】Alvarez R, Prabhavalkar R, Bakhtin A. [On the efficient representation and execution of deep acoustic models](https://arxiv.org/pdf/1607.04683)[J]. arXiv preprint arXiv:1607.04683, 2016.
- 【Distillation】[Knowledge Distillation for Small-footprint Highway Networks](https://arxiv.org/pdf/1608.00892), Liang Lu, Michelle Guo, Steve Renals, 2016
- 【Pruning】Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey .[Faster CNNs with Direct Sparse Convolutions and Guided Pruning](https://arxiv.org/pdf/1608.01409) .[J] arXiv preprint arXiv:1608.01409
【code:[IntelLabs/SkimCaffe](https://github.com/IntelLabs/SkimCaffe)】
- 【Pruning】Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li .[Learning Structured Sparsity in Deep Neural Networks](https://arxiv.org/pdf/1608.03665) .[J] arXiv preprint arXiv:1608.03665
【code:[wenwei202/caffe/tree/scnn](https://github.com/wenwei202/caffe/tree/scnn)】
- 【Structure】 Wang M, Liu B, Foroosh H. [Design of efficient convolutional layers using single intra-channel convolution, topological subdivisioning and spatial "bottleneck" structure](https://arxiv.org/pdf/1608.04337) .[J]. arXiv preprint arXiv:1608.04337, 2016.
- 【Pruning】Yiwen Guo, Anbang Yao, Yurong Chen .[Dynamic Network Surgery for Efficient DNNs](https://arxiv.org/pdf/1608.04493) .[J] arXiv preprint arXiv:1608.04493
【code:[yiwenguo/Dynamic-Network-Surgery](https://github.com/yiwenguo/Dynamic-Network-Surgery)】
- 【Binarization】Felix Juefei-Xu, Vishnu Naresh Boddeti, Marios Savvides .[Local Binary Convolutional Neural Networks](https://arxiv.org/pdf/1608.06049) .[J] arXiv preprint arXiv:1608.06049
- 【Pruning】Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf .[Pruning Filters for Efficient ConvNets](https://arxiv.org/pdf/1608.08710) .[J] arXiv preprint arXiv:1608.08710
【code:[Eric-mingjie/rethinking-network-pruning](https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/imagenet/l1-norm-pruning)】
- 【other】Tramèr F, Zhang F, Juels A, et al. [Stealing machine learning models via prediction apis](https://arxiv.org/pdf/1609.02943.pdf)[C]//25th USENIX Security Symposium (USENIX Security 16). 2016: 601-618.
- 【Quantization】Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio .[Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations](https://arxiv.org/pdf/1609.07061) .[J] arXiv preprint arXiv:1609.07061
- 【Quantization】Wu Y, Schuster M, Chen Z, et al. [Google's neural machine translation system: Bridging the gap between human and machine translation](https://arxiv.org/abs/1609.08144)[J]. arXiv preprint arXiv:1609.08144, 2016.
- 【Structure】【Xception】François Chollet .[Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/pdf/1610.02357) .[J] arXiv preprint arXiv:1610.02357
- 【Distillation】Bharat Bhusan Sau, Vineeth N. Balasubramanian .[Deep Model Compression: Distilling Knowledge from Noisy Teachers](https://arxiv.org/pdf/1610.09650) .[J] arXiv preprint arXiv:1610.09650
- 【other】Li X, Qin T, Yang J, et al. [LightRNN: Memory and computation-efficient recurrent neural networks](https://arxiv.org/abs/1610.09893)[C]//Advances in Neural Information Processing Systems. 2016: 4385-4393.
- 【Quantization】Lu Hou, Quanming Yao, James T. Kwok .[Loss-aware Binarization of Deep Networks](https://arxiv.org/pdf/1611.01600) .[J] arXiv preprint arXiv:1611.01600
- 【Low rank】Garipov T, Podoprikhin D, Novikov A, et al. [Ultimate tensorization: compressing convolutional and fc layers alike](https://arxiv.org/pdf/1611.03214)[J]. arXiv preprint arXiv:1611.03214, 2016.
【code:[timgaripov/TensorNet-TF](https://github.com/timgaripov/TensorNet-TF);[Bihaqo/TensorNet](https://github.com/Bihaqo/TensorNet)】
- 【Pruning】Tien-Ju Yang, Yu-Hsin Chen, Vivienne Sze .[Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning](https://arxiv.org/pdf/1611.05128) .[J] arXiv preprint arXiv:1611.05128
- 【Pruning】Aghasi A, Abdi A, Nguyen N, et al. [Net-trim: Convex pruning of deep neural networks with performance guarantee](https://arxiv.org/abs/1611.05162)[C]//Advances in Neural Information Processing Systems. 2017: 3177-3186.
【code:[DNNToolBox/Net-Trim-v1](https://github.com/DNNToolBox/Net-Trim-v1)】
- 【Quantization】Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, Ce Zhang .[The ZipML Framework for Training Models with End-to-End Low Precision: The Cans, the Cannots, and a Little Bit of Deep Learning](https://arxiv.org/pdf/1611.05402) .[J] arXiv preprint arXiv:1611.05402
- 【Structure】Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He .[Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/pdf/1611.05431) .[J] arXiv preprint arXiv:1611.05431
- A. Ren, Z. Li, C. Ding, Q. Qiu, Y. Wang, J. Li, X. Qian, and B. Yuan. [Sc-dcnn: Highly-scalable deep convolutional neural network using stochastic computing](https://arxiv.org/pdf/1611.05939). arXiv preprint arXiv:1611.05939, 2016.
- 【Pruning】Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz .[Pruning Convolutional Neural Networks for Resource Efficient Inference](https://arxiv.org/pdf/1611.06440) .[J] arXiv preprint arXiv:1611.06440
【code:[Tencent/PocketFlow#channel-pruning](https://github.com/Tencent/PocketFlow#channel-pruning)】
- 【other】Bagherinezhad H, Rastegari M, Farhadi A. [Lcnn: Lookup-based convolutional neural network](https://arxiv.org/pdf/1611.06473.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 7120-7129.
- 【Quantization】Hong S, Roh B, Kim K H, et al. [Pvanet: Lightweight deep neural networks for real-time object detection](https://arxiv.org/abs/1611.08588)[J]. arXiv preprint arXiv:1611.08588, 2016.
【code:[sanghoon/pva-faster-rcnn](https://github.com/sanghoon/pva-faster-rcnn)】
- 【Quantization】Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou, Yuheng Zou .[Effective Quantization Methods for Recurrent Neural Networks](https://arxiv.org/pdf/1611.10176) .[J] arXiv preprint arXiv:1611.10176
- 【Distillation】 Shen J, Vesdapunt N, Boddeti V N, et al. [In teacher we trust: Learning compressed models for pedestrian detection](https://arxiv.org/pdf/1612.00478.pdf) .[J]. arXiv preprint arXiv:1612.00478, 2016.
- 【Pruning】Song Han, Junlong Kang, Huizi Mao, Yiming Hu, Xin Li, Yubin Li, Dongliang Xie, Hong Luo, Song Yao, Yu Wang, Huazhong Yang, William J. Dally .[ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA](https://arxiv.org/pdf/1612.00694) .[J] arXiv preprint arXiv:1612.00694
- 【Structure】Bichen Wu, Alvin Wan, Forrest Iandola, Peter H. Jin, Kurt Keutzer .[SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving](https://arxiv.org/pdf/1612.01051) .[J] arXiv preprint arXiv:1612.01051
- 【Quantization】Zhu C, Han S, Mao H, et al. [Trained ternary quantization](https://arxiv.org/pdf/1612.01064)[J]. arXiv preprint arXiv:1612.01064, 2016.
【code:[czhu95/ternarynet](https://github.com/czhu95/ternarynet)】
- 【Quantization】Yoojin Choi, Mostafa El-Khamy, Jungwon Lee .[Towards the Limit of Network Quantization](https://arxiv.org/pdf/1612.01543) .[J] arXiv preprint arXiv:1612.01543
- 【other】Joulin A, Grave E, Bojanowski P, et al. [FastText.zip: Compressing text classification models](https://arxiv.org/pdf/1612.03651)[J]. arXiv preprint arXiv:1612.03651, 2016.
- 【Distillation】Sergey Zagoruyko, Nikos Komodakis .[Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer](https://arxiv.org/pdf/1612.03928) .[J] arXiv preprint arXiv:1612.03928
- 【other】Hashemi S, Anthony N, Tann H, et al. [Understanding the impact of precision quantization on the accuracy and energy of neural networks](https://arxiv.org/pdf/1612.03940)[C]//Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017. IEEE, 2017: 1474-1479.
- Umuroglu. [Finn: A framework for fast, scalable binarized neural network inference](https://arxiv.org/pdf/1612.07119).[J] arXiv preprint arXiv:1612.07119
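
The binarization entries above (BinaryConnect, Binarized Neural Networks, XNOR-Net, DoReFa-Net) share a forward pass that replaces real weights with {-1, +1} times a scalar scale, while full-precision weights are kept for the gradient update. A minimal NumPy sketch of the forward approximation; shapes and values are illustrative:

```python
import numpy as np

# Hypothetical real-valued filter bank from an in-training layer.
w = np.random.randn(64, 3, 3).astype(np.float32)

alpha = np.abs(w).mean()   # per-tensor scale, as in XNOR-Net (alpha = mean |W|)
w_bin = np.sign(w)         # weights constrained to {-1, +1}
w_fwd = alpha * w_bin      # what the forward pass actually multiplies by

print("mean approximation error:", np.abs(w - w_fwd).mean())
# During training, the cited papers pass gradients "straight through"
# sign() (a clipped identity), updating the real-valued w while the
# forward pass uses w_fwd.
```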

### 2017
- 【Thesis】Han S, Dally B. [Efficient methods and hardware for deep learning](https://stacks.stanford.edu/file/druid:qf934gh3708/EFFICIENT%20METHODS%20AND%20HARDWARE%20FOR%20DEEP%20LEARNING-augmented.pdf)[J]. University Lecture, 2017.
- 【Quantization】Cai Z, He X, Sun J, et al. [Deep learning with low precision by half-wave gaussian quantization](http://openaccess.thecvf.com/content_cvpr_2017/papers/Cai_Deep_Learning_With_CVPR_2017_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5918-5926.
【code:[zhaoweicai/hwgq](https://github.com/zhaoweicai/hwgq)】
- 【Quantization】Yonekawa H, Nakahara H. [On-chip memory based binarized convolutional deep neural network applying batch normalization free technique on an fpga](https://www.researchgate.net/profile/Hiroki_Nakahara/publication/318127383_On-Chip_Memory_Based_Binarized_Convolutional_Deep_Neural_Network_Applying_Batch_Normalization_Free_Technique_on_an_FPGA/links/5b59efffaca272a2d66cb247/On-Chip-Memory-Based-Binarized-Convolutional-Deep-Neural-Network-Applying-Batch-Normalization-Free-Technique-on-an-FPGA.pdf)[C]//2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2017: 98-105.
- 【Quantization】Liang S, Yin S, Liu L, et al. [FP-BNN: Binarized neural network on FPGA](http://www.doc.ic.ac.uk/~wl/papers/17/neuro17sl0.pdf)[J]. Neurocomputing, 2018, 275: 1072-1086.
- 【Quantization】Li Z, Ni B, Zhang W, et al. [Performance guaranteed network acceleration via high-order residual quantization](http://openaccess.thecvf.com/content_ICCV_2017/papers/Li_Performance_Guaranteed_Network_ICCV_2017_paper.pdf)[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2584-2592.
- 【Quantization】Hu Q, Wang P, Cheng J. [From hashing to cnns: Training binary weight networks via hashing](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPDFInterstitial/16466/16691)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Quantization】Lin X, Zhao C, Pan W. [Towards accurate binary convolutional neural network](http://papers.nips.cc/paper/6638-towards-accurate-binary-convolutional-neural-network.pdf)[C]//Advances in Neural Information Processing Systems. 2017: 345-353.
- 【Binarization】Yang H, Fritzsche M, Bartz C, et al. [Bmxnet: An open-source binary neural network implementation based on mxnet](https://arxiv.org/pdf/1705.09864)[C]//Proceedings of the 25th ACM international conference on Multimedia. ACM, 2017: 1209-1212.
【code:[hpi-xnor/BMXNet](https://github.com/hpi-xnor/BMXNet)】
- 【Structure】【ResNeXt】S. Xie, R. Girshick, P. Dollar, Z. Tu, and K. He. [ResNeXt: Aggregated residual transformations for deep neural networks](http://openaccess.thecvf.com/content_cvpr_2017/papers/Xie_Aggregated_Residual_Transformations_CVPR_2017_paper.pdf). In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
- 【other】Huang L, Liu X, Liu Y, et al. [Centered weight normalization in accelerating training of deep neural networks](http://openaccess.thecvf.com/content_ICCV_2017/papers/Huang_Centered_Weight_Normalization_ICCV_2017_paper.pdf)[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2803-2811.
- Park E, Ahn J, Yoo S. [Weighted-entropy-based quantization for deep neural networks](http://openaccess.thecvf.com/content_cvpr_2017/papers/Park_Weighted-Entropy-Based_Quantization_for_CVPR_2017_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5456-5464.
【code:[EunhyeokPark/script_for_WQ](https://github.com/EunhyeokPark/script_for_WQ)】
- Guo Y, Yao A, Zhao H, et al. [Network sketching: Exploiting binary structure in deep cnns](http://openaccess.thecvf.com/content_cvpr_2017/papers/Guo_Network_Sketching_Exploiting_CVPR_2017_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 5955-5963.
- D. Nguyen, D. Kim, and J. Lee. [Double MAC: doubling the performance of convolutional neural networks on modern fpgas]. In Design, Automation and Test in Europe Conference and Exhibition, DATE 2017, Lausanne, Switzerland, March 27-31, 2017, pages 890–893, 2017.
- E. H. Lee et al. [Lognet: Energy-efficient neural networks using logarithmic computation]. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5900–5904, 2017.
- H. Sim and J. Lee. [A new stochastic computing multiplier with application to deep convolutional neural networks]. In Design Automation Conference, page 29, 2017.
- H. Yang. [Time: A training-in-memory architecture for memristor-based deep neural networks](https://nicsefc.ee.tsinghua.edu.cn/media/publications/2017/DAC17_218.pdf). In Design Automation Conference, page 26, 2017.
- L. Chen, J. Li, Y. Chen, Q. Deng, J. Shen, X. Liang, and L. Jiang. [Accelerator-friendly neural-network training: Learning variations and defects in rram crossbar]. In Design, Automation and Test in Europe Conference and Exhibition, pages 19–24, 2017.
- M. Gao, J. Pu, X. Yang, M. Horowitz, and C. Kozyrakis. [Tetris: Scalable and efficient neural network acceleration with 3d memory](https://dl.acm.org/ft_gateway.cfm?id=3037702&type=pdf). In International Conference on Architectural Support for Programming Languages and Operating Systems, pages 751–764, 2017.
- M. Price, J. Glass, and A. P. Chandrakasan. [14.4 a scalable speech recognizer with deep-neural-network acoustic models and voice-activated power gating]. In Solid-State Circuits Conference, pages 244–245, 2017.
- N. P. Jouppi. [In-datacenter performance analysis of a tensor processing unit](https://ieeexplore.ieee.org/iel7/8126322/8192462/08192463.pdf). In Proceedings of the 44th Annual International Symposium on Computer Architecture, ISCA ’17, 2017.
- Nurvitadhi. [Can fpgas beat gpus in accelerating next-generation deep neural networks?](https://jaewoong.org/pubs/fpga17-next-generation-dnns.pdf) In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA '17, 2017.
- P. Wang and J. Cheng. [Fixed-point factorized networks](http://openaccess.thecvf.com/content_cvpr_2017/papers/Wang_Fixed-Point_Factorized_Networks_CVPR_2017_paper.pdf). In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
- S. Venkataramani, A. Ranjan, S. Banerjee, D. Das, S. Avancha, A. Jagannathan, A. Durg, D. Nagaraj, B. Kaul, P. Dubey, and A. Raghunathan. [Scaledeep: A scalable compute architecture for learning and evaluating deep networks]. SIGARCH Comput. Archit. News, 45(2):13–26, June 2017.
- Wei. [Automated systolic array architecture synthesis for high throughput cnn inference on fpgas](http://ceca.pku.edu.cn/media/lw/6c22198b68248a761d8d8469080b48f1.pdf). In Proceedings of the 54th Annual Design Automation Conference 2017, DAC ’17, 2017.
- W. Tang, G. Hua, and L. Wang. [How to train a compact binary neural network with high accuracy?](https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/download/14619/14454) In AAAI, pages 2625–2631, 2017.
- Xiao. [Exploring heterogeneous algorithms for accelerating deep convolutional neural networks on fpgas](http://ceca.pku.edu.cn/media/lw/9b2b54e7fbe742e085ca6c1ae1502791.pdf). In Proceedings of the 54th Annual Design Automation Conference 2017, DAC ’17, 2017.
- Y. H. Chen, T. Krishna, J. S. Emer, and V. Sze. [Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks](https://dspace.mit.edu/openaccess-disseminate/1721.1/101151). IEEE Journal of Solid-State Circuits, 52(1):127–138, 2017.
- Y. Ma, M. Kim, Y. Cao, S. Vrudhula, and J. S. Seo. [End-to-end scalable fpga accelerator for deep residual networks]. In IEEE International Symposium on Circuits and Systems, pages 1–4, 2017.
- Y. Ma, Y. Cao, S. Vrudhula, and J. S. Seo. [An automatic rtl compiler for high-throughput fpga implementation of diverse deep convolutional neural networks]. In International Conference on Field Programmable Logic and Applications, pages 1–8, 2017.
- Y. Ma, Y. Cao, S. Vrudhula, and J.-s. Seo. [Optimizing loop operation and dataflow in fpga acceleration of deep convolutional neural networks](http://www.isfpga.org/fpga2017/slides/D1_S1_04.pdf). In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA ’17, 2017.
- Y. Shen, M. Ferdman, and P. Milder. [Escher: A cnn accelerator with flexible buffering to minimize off-chip transfer](https://www.computer.org/csdl/proceedings/fccm/2017/4037/00/07966659.pdf). In IEEE International Symposium on Field-Programmable Custom Computing Machines, 2017.
- Zhao. [Accelerating binarized convolutional neural networks with software-programmable fpgas](https://dl.acm.org/ft_gateway.cfm?id=3021741&type=pdf). In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA ’17, 2017.
- 【Distillation】Yim J, Joo D, Bae J, et al. [A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning](https://pdfs.semanticscholar.org/0410/659b6a311b281d10e0e44abce9b1c06be462.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4133-4141.
- 【Distillation】Chen G, Choi W, Yu X, et al. [Learning Efficient Object Detection Models with Knowledge Distillation](http://papers.nips.cc/paper/6676-learning-efficient-object-detection-models-with-knowledge-distillation.pdf)[C]//Advances in Neural Information Processing Systems. 2017: 742-751.
- 【Distillation】[Local Affine Approximators for Improving Knowledge Transfer](https://lld-workshop.github.io/papers/LLD_2017_paper_28.pdf), Suraj Srinivas and Francois Fleuret, 2017
- 【Distillation】[Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model](http://papers.nips.cc/paper/6635-best-of-both-worlds-transferring-knowledge-from-discriminative-learning-to-a-generative-visual-dialog-model.pdf), Jiasen Lu1, Anitha Kannan, Jianwei Yang, Devi Parikh, Dhruv Batra 2017
- 【Distillation】[Data-Free Knowledge Distillation For Deep Neural Networks](http://raphagl.com/research/replayed-distillation/), Raphael Gontijo Lopes, Stefano Fenu, 2017
- 【Miscellaneous】Wang Y, Xu C, Xu C, et al. [Beyond Filters: Compact Feature Map for Portable Deep Model](http://proceedings.mlr.press/v70/wang17m/wang17m.pdf)[C]//Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017: 3703-3711.
- 【Miscellaneous】Kim J, Park Y, Kim G, et al. [SplitNet: Learning to Semantically Split Deep Networks for Parameter Reduction and Model Parallelization](http://proceedings.mlr.press/v70/kim17b/kim17b.pdf)[C]//Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017: 1866-1874.
- 【Pruning】J. H. Ko, B. Mudassar, T. Na, and S. Mukhopadhyay. [Design of an energy-efficient accelerator for training of convolutional neural networks using frequency-domain computation](https://papers.nips.cc/paper/7382-frequency-domain-dynamic-pruning-for-convolutional-neural-networks.pdf). In Design Automation Conference, page 59, 2017.
- 【Pruning】Neklyudov K, Molchanov D, Ashukha A, et al. [Structured bayesian pruning via log-normal multiplicative noise](https://papers.nips.cc/paper/7254-structured-bayesian-pruning-via-log-normal-multiplicative-noise.pdf)[C]//Advances in Neural Information Processing Systems. 2017: 6775-6784.
【code:[necludov/group-sparsity-sbp](https://github.com/necludov/group-sparsity-sbp)】
- 【Pruning】Mallya A, Lazebnik S. [Packnet: Adding multiple tasks to a single network by iterative pruning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Mallya_PackNet_Adding_Multiple_CVPR_2018_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7765-7773.
【code:[arunmallya/packnet](https://github.com/arunmallya/packnet)】
- 【Pruning】Vieira T, Eisner J. [Learning to Prune: Exploring the Frontier of Fast and Accurate Parsing](http://www.cs.jhu.edu/~jason/papers/vieira+eisner.tacl17.pdf)[J]. Transactions of the Association for Computational Linguistics, 2017, 5: 263-278.
- 【Pruning】Yu J, Lukefahr A, Palframan D, et al. [Scalpel: Customizing DNN Pruning to the Underlying Hardware Parallelism](http://www-personal.umich.edu/~jiecaoyu/papers/jiecaoyu-isca17.pdf)[C]//ACM SIGARCH Computer Architecture News. ACM, 2017, 45(2): 548-560.
- 【Pruning】Lin J, Rao Y, Lu J, et al. [Runtime neural pruning](http://papers.nips.cc/paper/6813-runtime-neural-pruning)[C]//Advances in Neural Information Processing Systems. 2017: 2181-2191.
- 【System】Mathur A, Lane N D, Bhattacharya S, et al. [DeepEye: Resource Efficient Local Execution of Multiple Deep Vision Models using Wearable Commodity Hardware](http://fahim-kawsar.net/papers/Mathur.MobiSys2017-Camera.pdf)[C]//Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2017: 68-81.
- 【System】Huynh L N, Lee Y, Balan R K. [DeepMon: Mobile GPU-based Deep Learning Framework for Continuous Vision Applications](http://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=4673&context=sis_research)[C]//Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2017: 82-95.
- Anwar S, Hwang K, Sung W. [Structured pruning of deep convolutional neural networks.](https://dl.acm.org/citation.cfm?id=3005348)[J]. ACM Journal on Emerging Technologies in Computing Systems (JETC), 2017, 13(3): 32.
- 【Quantization】Meng W, Gu Z, Zhang M, et al. [Two-bit networks for deep learning on resource-constrained embedded devices](https://arxiv.org/pdf/1701.00485)[J]. arXiv preprint arXiv:1701.00485, 2017.
- 【other】Ghosh T. [Quicknet: Maximizing efficiency and efficacy in deep architectures](https://arxiv.org/pdf/1701.02291)[J]. arXiv preprint arXiv:1701.02291, 2017.
- 【Pruning】Wolfe N, Sharma A, Drude L, et al. [The incredible shrinking neural network: New perspectives on learning representations through the lens of pruning](https://arxiv.org/abs/1701.04465)[J]. 2016.
- 【other】Chandrasekhar V, Lin J, Liao Q, et al. [Compression of deep neural networks for image instance retrieval](https://arxiv.org/pdf/1701.04923)[C]//2017 Data Compression Conference (DCC). IEEE, 2017: 300-309.
- 【other】Molchanov D, Ashukha A, Vetrov D. [Variational dropout sparsifies deep neural networks](https://arxiv.org/pdf/1701.05369)[C]//Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017: 2498-2507.
【code:[ars-ashuha/variational-dropout-sparsifies-dnn](https://github.com/ars-ashuha/variational-dropout-sparsifies-dnn)】
- 【Decomposition】Astrid M, Lee S I. [Cp-decomposition with tensor power method for convolutional neural networks compression](https://arxiv.org/pdf/1701.07148)[C]//2017 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2017: 115-118.
- 【Quantization】Zhaowei Cai, Xiaodong He, Jian Sun, Nuno Vasconcelos .[Deep Learning with Low Precision by Half-wave Gaussian Quantization](https://arxiv.org/pdf/1702.00953) .[J] arXiv preprint arXiv:1702.00953
- 【Quantization】 Zhou, Aojun, et al. [Incremental network quantization: Towards lossless cnns with low-precision weights.](https://arxiv.org/pdf/1702.03044.pdf) arXiv preprint arXiv:1702.03044 (2017).
- 【Pruning】Karen Ullrich, Edward Meeds, Max Welling .[Soft Weight-Sharing for Neural Network Compression](https://arxiv.org/pdf/1702.04008) .[J] arXiv preprint arXiv:1702.04008
- 【Pruning】Changpinyo S, Sandler M, Zhmoginov A. [The power of sparsity in convolutional neural networks](https://arxiv.org/pdf/1702.06257)[J]. arXiv preprint arXiv:1702.06257, 2017.
- 【Quantization】Shin S, Boo Y, Sung W. [Fixed-point optimization of deep neural networks with adaptive step size retraining](https://arxiv.org/pdf/1702.08171)[C]//2017 IEEE International conference on acoustics, speech and signal processing (ICASSP). IEEE, 2017: 1203-1207.
- 【Quantization】Graham B. [Low-precision batch-normalized activations](https://arxiv.org/pdf/1702.08231)[J]. arXiv preprint arXiv:1702.08231, 2017.
- 【Pruning】Li S, Park J, Tang P T P. [Enabling sparse winograd convolution by native pruning](https://arxiv.org/pdf/1702.08597)[J]. arXiv preprint arXiv:1702.08597, 2017.
- 【other】Boulch A. [Sharesnet: reducing residual network parameter number by sharing weights](https://arxiv.org/pdf/1702.08782)[J]. arXiv preprint arXiv:1702.08782, 2017.
- 【Distillation】[Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results](https://arxiv.org/pdf/1703.01780), Antti Tarvainen, Harri Valpola, 2017
- 【Distillation】[Learning from Noisy Labels with Distillation](https://arxiv.org/abs/1703.02391), Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, Li-Jia Li, 2017
- 【Survey】Sze V, Chen Y H, Yang T J, et al. [Efficient processing of deep neural networks: A tutorial and survey](https://arxiv.org/abs/1703.09039)[J]. Proceedings of the IEEE, 2017, 105(12): 2295-2329.
- 【Structure】Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li .[Coordinating Filters for Faster Deep Neural Networks](https://arxiv.org/pdf/1703.09746) .[J] arXiv preprint arXiv:1703.09746
- 【Structure】【MobileNet】Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam .[MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/pdf/1704.04861) .[J] arXiv preprint arXiv:1704.04861
- 【Structure】Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, Xiaoou Tang .[Residual Attention Network for Image Classification](https://arxiv.org/pdf/1704.06904) .[J] arXiv preprint arXiv:1704.06904
- 【Pruning】Liu W, Wen Y, Yu Z, et al. [Sphereface: Deep hypersphere embedding for face recognition](http://openaccess.thecvf.com/content_cvpr_2017/papers/Liu_SphereFace_Deep_Hypersphere_CVPR_2017_paper.pdf)[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 212-220.
【code:[isthatyoung/Sphereface-prune](https://github.com/isthatyoung/Sphereface-prune)】
- 【Quantization】Mellempudi N, Kundu A, Mudigere D, et al. [Ternary neural networks with fine-grained quantization](https://arxiv.org/pdf/1705.01462)[J]. arXiv preprint arXiv:1705.01462, 2017.
- 【Structure】Louizos C, Ullrich K, Welling M. [Bayesian compression for deep learning](https://arxiv.org/pdf/1705.08665.pdf)[C]//Advances in Neural Information Processing Systems. 2017: 3288-3298.
- H. Tann, S. Hashemi, I. Bahar, and S. Reda. [Hardwaresoftware codesign of accurate, multiplier-free deep neural networks](https://arxiv.org/pdf/1705.04288). arXiv preprint arXiv:1705.04288, 2017.
- 【Pruning】Dong X, Chen S, Pan S. [Learning to prune deep neural networks via layer-wise optimal brain surgeon](https://arxiv.org/abs/1705.07565)[C]//Advances in Neural Information Processing Systems. 2017: 4857-4867.
【code:[csyhhu/L-OBS](https://github.com/csyhhu/L-OBS)】
- H. Mao, S. Han, J. Pool, W. Li, X. Liu, Y. Wang, and W. J. Dally. [Exploring the regularity of sparse structure in convolutional neural networks](https://arxiv.org/pdf/1705.08922). arXiv preprint arXiv:1705.08922, 2017.
- 【System】Qingqing Cao, Niranjan Balasubramanian, Aruna Balasubramanian .[MobiRNN: Efficient Recurrent Neural Network Execution on Mobile GPU](https://arxiv.org/pdf/1706.00878) .[J] arXiv preprint arXiv:1706.00878
- 【Quantization】Denis A. Gudovskiy, Luca Rigazio .[ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks](https://arxiv.org/pdf/1706.02393) .[J] arXiv preprint arXiv:1706.02393
- 【Structure】Zhe Li, Xiaoyu Wang, Xutao Lv, Tianbao Yang .[SEP-Nets: Small and Effective Pattern Networks](https://arxiv.org/pdf/1706.03912) .[J] arXiv preprint arXiv:1706.03912
- Zhang Y, Xiang T, Hospedales T M, et al. [Deep mutual learning](https://arxiv.org/pdf/1706.00384.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4320-4328.
- 【Structure】【ShuffleNet】Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, Jian Sun .[ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](https://arxiv.org/pdf/1707.01083) .[J] arXiv preprint arXiv:1707.01083
- 【Survey】Miguel Á. Carreira-Perpiñán .[Model compression as constrained optimization, with application to neural nets. Part I: general framework](https://arxiv.org/pdf/1707.01209) .[J] arXiv preprint arXiv:1707.01209
- 【Pruning】Zehao Huang, Naiyan Wang .[Data-Driven Sparse Structure Selection for Deep Neural Networks](https://arxiv.org/pdf/1707.01213) .[J] arXiv preprint arXiv:1707.01213
【code:[TuSimple/sparse-structure-selection](https://github.com/TuSimple/sparse-structure-selection)】
- 【Distillation】Zehao Huang, Naiyan Wang .[Like What You Like: Knowledge Distill via Neuron Selectivity Transfer](https://arxiv.org/pdf/1707.01219) .[J] arXiv preprint arXiv:1707.01219
- 【Distillation】Yuntao Chen, Naiyan Wang, Zhaoxiang Zhang .[DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer](https://arxiv.org/pdf/1707.01220) .[J] arXiv preprint arXiv:1707.01220
- 【Structure】Ting Zhang, Guo-Jun Qi, Bin Xiao, Jingdong Wang. [Interleaved Group Convolutions for Deep Neural Networks](https://arxiv.org/abs/1707.02725).[J] arXiv preprint arXiv:1707.02725
- 【Survey】Miguel Á. Carreira-Perpiñán, Yerlan Idelbayev .[Model compression as constrained optimization, with application to neural nets. Part II: quantization](https://arxiv.org/pdf/1707.04319) .[J] arXiv preprint arXiv:1707.04319
- 【Binarization】Jeng-Hau Lin, Tianwei Xing, Ritchie Zhao, Zhiru Zhang, Mani Srivastava, Zhuowen Tu, Rajesh K. Gupta .[Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration](https://arxiv.org/pdf/1707.04693) .[J] arXiv preprint arXiv:1707.04693
- 【Pruning】Yihui He, Xiangyu Zhang, Jian Sun .[Channel Pruning for Accelerating Very Deep Neural Networks](https://arxiv.org/pdf/1707.06168) .[J] arXiv preprint arXiv:1707.06168
【code:[yihui-he/channel-pruning](https://github.com/yihui-he/channel-pruning);[Eric-mingjie/rethinking-network-pruning](https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/imagenet)】
- 【Structure】【Pruning】Jian-Hao Luo, Jianxin Wu, Weiyao Lin .[ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression](https://arxiv.org/pdf/1707.06342) .[J] arXiv preprint arXiv:1707.06342
【code:[Roll920/ThiNet](https://github.com/Roll920/ThiNet);[Eric-mingjie/rethinking-network-pruning](https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/imagenet)】
- 【Structure】Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le .[Learning Transferable Architectures for Scalable Image Recognition](https://arxiv.org/pdf/1707.07012) .[J] arXiv preprint arXiv:1707.07012
- 【other】Delmas A, Sharify S, Judd P, et al. [Tartan: Accelerating fully-connected and convolutional layers in deep learning networks by exploiting numerical precision variability](https://arxiv.org/pdf/1707.09068)[J]. arXiv preprint arXiv:1707.09068, 2017.
- 【Pruning】Frederick Tung, Srikanth Muralidharan, Greg Mori .[Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization](https://arxiv.org/pdf/1707.09102) .[J] arXiv preprint arXiv:1707.09102
- 【Quantization】Leng C, Dou Z, Li H, et al. [Extremely low bit neural network: Squeeze the last bit out with admm](https://arxiv.org/abs/1707.09870)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Distillation】[Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net](https://arxiv.org/pdf/1708.04106.pdf), Zihao Liu, Qi Liu, Tao Liu, Yanzhi Wang, Wujie Wen, 2017
- A. Parashar, M. Rhu, A. Mukkara, A. Puglielli, R. Venkatesan, B. Khailany, J. Emer, S. W. Keckler, and W. J. Dally. [Scnn: An accelerator for compressed-sparse convolutional neural networks](https://arxiv.org/pdf/1708.04485). arXiv preprint arXiv:1708.04485, 2017.
- 【Structure】Dawei Li, Xiaolong Wang, Deguang Kong .[DeepRebirth: Accelerating Deep Neural Network Execution on Mobile Devices](https://arxiv.org/pdf/1708.04728) .[J] arXiv preprint arXiv:1708.04728
- 【Distillation】[Revisiting knowledge transfer for training object class detectors](https://arxiv.org/pdf/1708.06128.pdf), Jasper Uijlings, Stefan Popov, Vittorio Ferrari, 2017
- 【Pruning】Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang .[Learning Efficient Convolutional Networks through Network Slimming](https://arxiv.org/pdf/1708.06519) .[J] arXiv preprint arXiv:1708.06519
【code:[Eric-mingjie/network-slimming](https://github.com/Eric-mingjie/network-slimming)】
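The network-slimming entry above trains with an L1 penalty on BatchNorm scale factors and then removes channels whose learned scales are small. A minimal PyTorch sketch of that idea (the `strength` and `prune_ratio` values are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, strength: float = 1e-4) -> torch.Tensor:
    """L1 penalty on BatchNorm scales (gamma); add this to the task loss."""
    return strength * sum(m.weight.abs().sum()
                          for m in model.modules()
                          if isinstance(m, nn.BatchNorm2d))

def low_scale_channels(model: nn.Module, prune_ratio: float = 0.5) -> dict:
    """After training, flag channels whose |gamma| is below a global threshold."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = gammas.sort().values[int(prune_ratio * gammas.numel())]
    return {name: m.weight.detach().abs() < threshold
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
```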
- 【Distillation】Zheng Xu, Yen-Chang Hsu, Jiawei Huang .[Learning Loss for Knowledge Distillation with Conditional Adversarial Networks](https://arxiv.org/pdf/1709.00513) .[J] arXiv preprint arXiv:1709.00513
- 【other】Masana M, van de Weijer J, Herranz L, et al. [Domain-adaptive deep network compression](https://arxiv.org/abs/1709.01041)[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 4289-4297.
- Mishra A, Nurvitadhi E, Cook J J, et al. [WRPN: wide reduced-precision networks](https://arxiv.org/pdf/1709.01134)[J]. arXiv preprint arXiv:1709.01134, 2017.
- 【Distillation】Chong Wang, Xipeng Lan, Yangang Zhang .[Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification](https://arxiv.org/pdf/1709.02929) .[J] arXiv preprint arXiv:1709.02929
- 【Structure】Mohammad Javad Shafiee, Brendan Chywl, Francis Li, Alexander Wong .[Fast YOLO: A Fast You Only Look Once System for Real-time Embedded Object Detection in Video](https://arxiv.org/pdf/1709.05943) .[J] arXiv preprint arXiv:1709.05943
- 【Train】Ashok A, Rhinehart N, Beainy F, et al. [N2n learning: Network to network compression via policy gradient reinforcement learning](https://arxiv.org/pdf/1709.06030)[J]. arXiv preprint arXiv:1709.06030, 2017.
- 【Pruning】Michael Zhu, Suyog Gupta .[To prune, or not to prune: exploring the efficacy of pruning for model compression](https://arxiv.org/pdf/1710.01878) .[J] arXiv preprint arXiv:1710.01878
- 【Distillation】Raphael Gontijo Lopes, Stefano Fenu, Thad Starner .[Data-Free Knowledge Distillation for Deep Neural Networks](https://arxiv.org/pdf/1710.07535) .[J] arXiv preprint arXiv:1710.07535
- 【Survey】Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang .[A Survey of Model Compression and Acceleration for Deep Neural Networks](https://arxiv.org/pdf/1710.09282) .[J] arXiv preprint arXiv:1710.09282
- 【Distillation】Zhi Zhang, Guanghan Ning, Zhihai He .[Knowledge Projection for Deep Neural Networks](https://arxiv.org/pdf/1710.09505) .[J] arXiv preprint arXiv:1710.09505
- 【Structure】Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar .[ReBNet: Residual Binarized Neural Network](https://arxiv.org/pdf/1711.01243) .[J] arXiv preprint arXiv:1711.01243
- 【Distillation】Elliot J. Crowley, Gavin Gray, Amos Storkey .[Moonshine: Distilling with Cheap Convolutions](https://arxiv.org/pdf/1711.02613) .[J] arXiv preprint arXiv:1711.02613
- 【Quantization】Reagen B, Gupta U, Adolf R, et al. [Weightless: Lossy weight encoding for deep neural network compression](https://arxiv.org/pdf/1711.04686)[J]. arXiv preprint arXiv:1711.04686, 2017.
- 【Distillation】Mishra A, Marr D. [Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy](https://arxiv.org/pdf/1711.05852)[J]. arXiv preprint arXiv:1711.05852, 2017.
- 【Pruning】Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I. Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, Larry S. Davis .[NISP: Pruning Networks using Neuron Importance Score Propagation](https://arxiv.org/pdf/1711.05908) .[J] arXiv preprint arXiv:1711.05908
- 【Pruning】Ariel Gordon, Elad Eban, Ofir Nachum, Bo Chen, Tien-Ju Yang, Edward Choi .[MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks](https://arxiv.org/pdf/1711.06798) .[J] arXiv preprint arXiv:1711.06798
【code:[google-research/morph-net](https://github.com/google-research/morph-net)】
- 【System】Stylianos I. Venieris, Christos-Savvas Bouganis .[fpgaConvNet: A Toolflow for Mapping Diverse Convolutional Neural Networks on Embedded FPGAs](https://arxiv.org/pdf/1711.08740) .[J] arXiv preprint arXiv:1711.08740
- 【Structure】Gao Huang, Shichen Liu, Laurens van der Maaten, Kilian Q. Weinberger .[CondenseNet: An Efficient DenseNet using Learned Group Convolutions](https://arxiv.org/pdf/1711.09224) .[J] arXiv preprint arXiv:1711.09224
- 【Quantization】Zhou Y, Moosavi-Dezfooli S M, Cheung N M, et al. [Adaptive quantization for deep neural network](https://arxiv.org/abs/1712.01048)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- Christos Louizos, Max Welling, Diederik P. Kingma .[Learning Sparse Neural Networks through L0 Regularization](https://arxiv.org/abs/1712.01312) .[J] arXiv preprint arXiv:1712.01312
- 【Train】Lin Y, Han S, Mao H, et al. [Deep gradient compression: Reducing the communication bandwidth for distributed training](https://arxiv.org/pdf/1712.01887.pdf)[J]. arXiv preprint arXiv:1712.01887, 2017.
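As a rough sketch of the core mechanism in the deep-gradient-compression entry above: each step transmits only the largest-magnitude gradient entries and carries the rest forward as a local residual. Momentum correction and the paper's other refinements are omitted, and `keep_ratio` is illustrative:

```python
import torch

def sparsify_gradient(grad: torch.Tensor, residual: torch.Tensor,
                      keep_ratio: float = 0.001):
    acc = grad + residual                      # fold in the carried-over residual
    k = max(1, int(keep_ratio * acc.numel()))
    thresh = acc.abs().flatten().topk(k).values.min()
    mask = acc.abs() >= thresh
    sent = acc * mask                          # sparse update to communicate
    return sent, acc - sent                    # remainder stays as new residual
```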
- 【Quantization】Andrew Tulloch, Yangqing Jia .[High performance ultra-low-precision convolutions on mobile devices](https://arxiv.org/pdf/1712.02427) .[J] arXiv preprint arXiv:1712.02427
- 【Train】Chen C Y, Choi J, Brand D, et al. [Adacomp: Adaptive residual gradient compression for data-parallel distributed training](https://arxiv.org/abs/1712.02679)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Structure】【StrassenNets】Tschannen M, Khanna A, Anandkumar A. [StrassenNets: Deep learning with a multiplication budget](https://arxiv.org/pdf/1712.03942)[J]. arXiv preprint arXiv:1712.03942, 2017.
- 【Distillation】[Data Distillation: Towards Omni-Supervised Learning](https://arxiv.org/pdf/1712.04440.pdf), Ilija Radosavovic, Piotr Dollár, Ross Girshick, Georgia Gkioxari, Kaiming He, 2017
- 【Decomposition】Ye J, Wang L, Li G, et al. [Learning compact recurrent neural networks with block-term tensor decomposition](https://arxiv.org/abs/1712.05134)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9378-9387.
- 【Quantization】Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, Dmitry Kalenichenko .[Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](https://arxiv.org/pdf/1712.05877) .[J] arXiv preprint arXiv:1712.05877
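The entry above trains with simulated ("fake") quantization so that the learned weights survive 8-bit integer inference. A minimal sketch with per-tensor affine quantization and a straight-through estimator, assuming dynamic min/max ranges rather than the paper's EMA-tracked ranges:

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(qmin - lo / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    dq = (q - zero_point) * scale              # de-quantize back to float
    return x + (dq - x).detach()               # forward: dq, backward: identity
```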

### 2018
- 【Thesis】[Algorithms for speeding up convolutional neural networks](https://www.skoltech.ru/app/data/uploads/2018/10/Thesis-Final.pdf)
- 【Survey】Cheng Y, Wang D, Zhou P, et al. [Model compression and acceleration for deep neural networks: The principles, progress, and challenges](https://www.gwern.net/docs/ai/2018-cheng.pdf)[J]. IEEE Signal Processing Magazine, 2018, 35(1): 126-136.
- 【Structure】【ChannelNets】Gao H, Wang Z, Ji S. [Channelnets: Compact and efficient convolutional neural networks via channel-wise convolutions](https://papers.nips.cc/paper/7766-channelnets-compact-and-efficient-convolutional-neural-networks-via-channel-wise-convolutions.pdf)[C]//Advances in Neural Information Processing Systems. 2018: 5197-5205.
【code:[HongyangGao/ChannelNets](https://github.com/HongyangGao/ChannelNets)】
- 【Structure】【Shift】Wu B, Wan A, Yue X, et al. [Shift: A zero flop, zero parameter alternative to spatial convolutions](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wu_Shift_A_Zero_CVPR_2018_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9127-9135.
【code:[alvinwan/shiftresnet-cifar](https://github.com/alvinwan/shiftresnet-cifar)】
- 【Quantization】Son S, Nah S, Mu Lee K. [Clustering convolutional kernels to compress deep neural networks](http://openaccess.thecvf.com/content_ECCV_2018/papers/Sanghyun_Son_Clustering_Kernels_for_ECCV_2018_paper.pdf)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 216-232.
- 【Quantization】Yu T, Yuan J, Fang C, et al. [Product quantization network for fast image retrieval](http://openaccess.thecvf.com/content_ECCV_2018/papers/Tan_Yu_Product_Quantization_Network_ECCV_2018_paper.pdf)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 186-201.
- 【Quantization】Achterhold J, Koehler J M, Schmeink A, et al.[Variational network quantization](https://openreview.net/pdf?id=ry-TW-WAb)[J]. 2018.
- 【Quantization】Martinez J, Zakhmi S, Hoos H H, et al. [LSQ++: Lower running time and higher recall in multi-codebook quantization](http://openaccess.thecvf.com/content_ECCV_2018/papers/Julieta_Martinez_LSQ_lower_runtime_ECCV_2018_paper.pdf)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 491-506.
- 【Quantization】Zhou A, Yao A, Wang K, et al. [Explicit loss-error-aware quantization for low-bit deep neural networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_Explicit_Loss-Error-Aware_Quantization_CVPR_2018_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9426-9435.
- 【Quantization】Nakanishi K, Maeda S, Miyato T, et al. [Adaptive Sample-space & Adaptive Probability coding: a neural-network based approach for compression](https://openreview.net/pdf?id=HkzNXhC9KQ)[J]. 2018.
- 【Quantization】Mukherjee L, Ravi S N, Peng J, et al. [A Biresolution Spectral Framework for Product Quantization](http://openaccess.thecvf.com/content_cvpr_2018/papers/Mukherjee_A_Biresolution_Spectral_CVPR_2018_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 3329-3338.
- 【Quantization】Zhou Y, Moosavi-Dezfooli S M, Cheung N M, et al. [Adaptive quantization for deep neural network](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPDFInterstitial/16248/16774)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Quantization】Li D, Wang X, Kong D. [Deeprebirth: Accelerating deep neural network execution on mobile devices](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPDFInterstitial/16652/15946)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【Quantization】Wang P, Hu Q, Zhang Y, et al. [Two-step quantization for low-bit neural networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Two-Step_Quantization_for_CVPR_2018_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4376-4384.
- 【Quantization】Leng C, Dou Z, Li H, et al. [Extremely low bit neural network: Squeeze the last bit out with admm](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPDFInterstitial/16767/16728)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【other】Chen T, Lin L, Zuo W, et al. [Learning a wavelet-like auto-encoder to accelerate deep neural networks](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16655/16254)[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
- 【other】He X, Cheng J. [Learning Compression from Limited Unlabeled Data](http://openaccess.thecvf.com/content_ECCV_2018/papers/Xiangyu_He_Learning_Compression_from_ECCV_2018_paper.pdf)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 752-769.
- 【other】Dai L, Tang L, Xie Y, et al. [Designing by training: acceleration neural network for fast high-dimensional convolution](https://papers.nips.cc/paper/7420-designing-by-training-acceleration-neural-network-for-fast-high-dimensional-convolution.pdf)[C]//Advances in Neural Information Processing Systems. 2018: 1466-1475.
- 【other】Cicek S, Fawzi A, Soatto S. [Saas: Speed as a supervisor for semi-supervised learning](http://openaccess.thecvf.com/content_ECCV_2018/papers/Safa_Cicek_SaaS_Speed_as_ECCV_2018_paper.pdf)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 149-163.
- 【other】Chen W, Wilson J, Tyree S, et al. [Compressing convolutional neural networks in the frequency domain](https://dl.acm.org/ft_gateway.cfm?id=2939839&type=pdf)[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016: 1475-1484.
- 【other】Lim C H. [An efficient pruning algorithm for robust isotonic regression](http://papers.nips.cc/paper/7306-an-efficient-pruning-algorithm-for-robust-isotonic-regression.pdf)[C]//Advances in Neural Information Processing Systems. 2018: 219-229.
- 【Train】Wang N, Choi J, Brand D, et al. [Training deep neural networks with 8-bit floating point numbers](https://papers.nips.cc/paper/7994-training-deep-neural-networks-with-8-bit-floating-point-numbers.pdf)[C]//Advances in neural information processing systems. 2018: 7675-7684.
- 【Pruning】Fu Y, Zhang S, Li D, et al. [pruning in training: learning and ranking sparse connections in deep convolutional networks](https://openreview.net/pdf?id=r1GgDj0cKX)[J]. 2018.
- 【Pruning】Gao W, Wei Y, Li Q, et al. [pruning with hints: an efficient framework for model acceleration](https://openreview.net/pdf?id=Hyffti0ctQ)[J]. 2018.
- 【Pruning】Tung F, Mori G. [Clip-q: Deep network compression learning by in-parallel pruning-quantization](http://www.sfu.ca/~ftung/papers/clipq_cvpr18.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7873-7882.
- 【Pruning】Zeng W, Urtasun R. [MLPrune: Multi-Layer Pruning for Automated Neural Network Compression](https://openreview.net/pdf?id=r1g5b2RcKm)[J]. 2018.
- 【Pruning】Carreira-Perpinán M A, Idelbayev Y. [“Learning-Compression” Algorithms for Neural Net Pruning](http://openaccess.thecvf.com/content_cvpr_2018/papers/Carreira-Perpinan_Learning-Compression_Algorithms_for_CVPR_2018_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8532-8541.
- 【Pruning】[Cumulative Saliency based Globally Balanced Filter Pruning For Efficient Convolutional Neural Networks](https://openreview.net/forum?id=H1fevoAcKX)
- 【Pruning】Yeh C K, Yen I E H, Chen H Y, et al. [deep-trim: revisiting l1 regularization for connection pruning of deep network](https://openreview.net/pdf?id=r1exVhActQ)[J]. 2018.
- 【Pruning】Zhang X, Zhu Z, Xu Z. [Learning to Search Efficient DenseNet with Layer-wise Pruning](https://openreview.net/pdf?id=r1fWmnR5tm)[J]. 2018.
- 【Pruning】Liu Z, Xu J, Peng X, et al. [Frequency-domain dynamic pruning for convolutional neural networks](https://papers.nips.cc/paper/7382-frequency-domain-dynamic-pruning-for-convolutional-neural-networks.pdf)[C]//Advances in Neural Information Processing Systems. 2018: 1043-1053.
- 【Pruning】Evci U, Le Roux N, Castro P, et al. [Mean Replacement Pruning](https://openreview.net/pdf?id=BJxRVnC5Fm)[J]. 2018.
- 【Pruning】Svoboda F, Liberis E, Lane N D. [In search of theoretically grounded pruning](https://openreview.net/pdf?id=SkfQAiA9YX)[J]. 2018.
- 【Pruning】He, Yihui, et al. [AMC: AutoML for model compression and acceleration on mobile devices](http://openaccess.thecvf.com/content_ECCV_2018/papers/Yihui_He_AMC_Automated_Model_ECCV_2018_paper.pdf) Proceedings of the European Conference on Computer Vision (ECCV). 2018.
- 【Pruning】Chen C, Tung F, Vedula N, et al. [Constraint-aware deep neural network compression](http://openaccess.thecvf.com/content_ECCV_2018/papers/Changan_Chen_Constraints_Matter_in_ECCV_2018_paper.pdf)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 400-415.
【code:[ChanganVR/ConstraintAwareCompression](https://github.com/ChanganVR/ConstraintAwareCompression)】
- 【Pruning】Carreira-Perpinán, Miguel A., and Yerlan Idelbayev. [“Learning-Compression” Algorithms for Neural Net Pruning](http://faculty.ucmerced.edu/mcarreira-perpinan/papers/cvpr18.pdf) Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
- 【Pruning】Yang Q, Wen W, Wang Z, et al. [Integral Pruning on Activations and Weights for Efficient Neural Networks](https://openreview.net/pdf?id=HyevnsCqtQ)[J]. 2018.
- 【Distillation】[Self-supervised knowledge distillation using singular value decomposition](http://openaccess.thecvf.com/content_ECCV_2018/html/SEUNG_HYUN_LEE_Self-supervised_Knowledge_Distillation_ECCV_2018_paper.html), Seung Hyun Lee, Dae Ha Kim, Byung Cheol Song, 2018
- 【Distillation】Park S U, Kwak N. [FEED: Feature-level Ensemble Effect for knowledge Distillation](https://openreview.net/pdf?id=BJxYEsAqY7)[J]. 2018.
- 【Distillation】Tao Z, Xia Q, Li Q. [knowledge distill via learning neuron manifold](https://openreview.net/pdf?id=SJlYcoCcKX)[J]. 2018.
- 【Distillation】[Exploration by Random Network Distillation](https://openreview.net/forum?id=H1lJJnR5Ym)
- 【Distillation】Wang X, Zhang R, Sun Y, et al. [KDGAN: knowledge distillation with generative adversarial networks](https://papers.nips.cc/paper/7358-kdgan-knowledge-distillation-with-generative-adversarial-networks.pdf)[C]//Advances in Neural Information Processing Systems. 2018: 775-786.
- Sakr C, Choi J, Wang Z, et al. [True Gradient-Based Training of Deep Binary Activated Neural Networks Via Continuous Binarization](http://sakr2.web.engr.illinois.edu/papers/2018/icassp_binarization_final.pdf)[C]//2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018: 2346-2350.
- Su J, Li J, Bhattacharjee B, et al. [Exploiting Invariant Structures for Compression in Neural Networks](https://openreview.net/pdf?id=rkl85oRqYX)[J]. 2018.
- Suau X, Zappella L, Apostoloff N. [Network compression using correlation analysis of layer responses](https://openreview.net/pdf?id=rkl42iA5t7)[J]. 2018.
- [Architecture Compression](https://openreview.net/forum?id=BygGNnCqKQ)
- Darlow L N, Storkey A. [What Information Does a ResNet Compress?](https://openreview.net/pdf?id=HklbTjRcKX)[J]. 2018.
- Shwartz-Ziv R, Painsky A, Tishby N. [representation compression and generalization in deep neural networks](https://openreview.net/pdf?id=SkeL6sCqK7)[J]. 2018.
- Zhuang B, Shen C, Tan M, et al. [Towards effective low-bitwidth convolutional neural networks](http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhuang_Towards_Effective_Low-Bitwidth_CVPR_2018_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7920-7928.
【code:[nowgood/QuantizeCNNModel](https://github.com/nowgood/QuantizeCNNModel)】
- Liu Z, Wu B, Luo W, et al. [Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm](http://openaccess.thecvf.com/content_ECCV_2018/papers/zechun_liu_Bi-Real_Net_Enhancing_ECCV_2018_paper.pdf)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 722-737.
【code:[liuzechun/Bi-Real-net](https://github.com/liuzechun/Bi-Real-net)】
- G. Li, F. Li, T. Zhao, and J. Cheng. [Block convolution: Towards memory-efficient inference of large-scale cnns on fpga]. In Design Automation and Test in Europe, 2018.
- Schindler G, Roth W, Pernkopf F, et al. [N-Ary Quantization for CNN Model Compression and Inference Acceleration](https://openreview.net/pdf?id=HylDpoActX)[J]. 2018.
- P. Wang, Q. Hu, Z. Fang, C. Zhao, and J. Cheng. [Deepsearch: A fast image search framework for mobile devices](http://159.226.21.68/bitstream/173211/20896/1/TOMM1401-06.pdf). ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 14, 2018.
- Q. Hu, P.Wang, and J. Cheng. [From hashing to cnns: Training binary weight networks via hashing](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewPDFInterstitial/16466/16691). In AAAI, February 2018.
- J. Cheng, J. Wu, C. Leng, Y. Wang, and Q. Hu. [Quantized cnn: A unified approach to accelerate and compress convolutional networks]. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), PP:1–14.
- 【Structure】【MobileNetV2】Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen .[MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/pdf/1801.04381) .[J] arXiv preprint arXiv:1801.04381
【code:[tensorflow/models](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet)】
- Theodore S. Nowak, Jason J. Corso .[Deep Net Triage: Analyzing the Importance of Network Layers via Structural Compression](https://arxiv.org/pdf/1801.04651) .[J] arXiv preprint arXiv:1801.04651.
- Lucas Theis, Iryna Korshunova, Alykhan Tejani, Ferenc Huszár .[Faster gaze prediction with dense networks and Fisher pruning](https://arxiv.org/pdf/1801.05787) .[J] arXiv preprint arXiv:1801.05787.
- Brian Trippe, Richard Turner .[Overpruning in Variational Bayesian Neural Networks](https://arxiv.org/pdf/1801.06230) .[J] arXiv preprint arXiv:1801.06230.
- Qiangui Huang, Kevin Zhou, Suya You, Ulrich Neumann .[Learning to Prune Filters in Convolutional Neural Networks](https://arxiv.org/pdf/1801.07365) .[J] arXiv preprint arXiv:1801.07365.
- 【Distillation】Sarah Tan, Rich Caruana, Giles Hooker, Albert Gordo .[Transparent Model Distillation](https://arxiv.org/pdf/1801.08640) .[J] arXiv preprint arXiv:1801.08640.
- Congzheng Song, Yiming Sun .[Kernel Distillation for Gaussian Processes](https://arxiv.org/pdf/1801.10273) .[J] arXiv preprint arXiv:1801.10273.
- Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran .[Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks](https://arxiv.org/pdf/1801.10447) .[J] arXiv preprint arXiv:1801.10447.
- Jialiang Guo, Bo Zhou, Xiangrui Zeng, Zachary Freyberg, Min Xu .[Model compression for faster structural separation of macromolecules captured by Cellular Electron Cryo-Tomography](https://arxiv.org/pdf/1801.10597) .[J] arXiv preprint arXiv:1801.10597.
- 【Pruning】Jianbo Ye, Xin Lu, Zhe Lin, James Z. Wang .[Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers](https://arxiv.org/pdf/1802.00124) .[J] arXiv preprint arXiv:1802.00124.
【code:[jack-willturner/batchnorm-pruning](https://github.com/jack-willturner/batchnorm-pruning)】
- 【Quantization】Chen Xu, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, Hongbin Zha .[Alternating Multi-bit Quantization for Recurrent Neural Networks](https://arxiv.org/pdf/1802.00150) .[J] arXiv preprint arXiv:1802.00150.
- Yixing Li, Fengbo Ren .[Build a Compact Binary Neural Network through Bit-level Sensitivity and Data Pruning](https://arxiv.org/pdf/1802.00904) .[J] arXiv preprint arXiv:1802.00904.
- 【Survey】Jian Cheng, Peisong Wang, Gang Li, Qinghao Hu, Hanqing Lu .[Recent Advances in Efficient Computation of Deep Convolutional Neural Networks](https://arxiv.org/pdf/1802.00939) .[J] arXiv preprint arXiv:1802.00939
- Yoojin Choi, Mostafa El-Khamy, Jungwon Lee .[Universal Deep Neural Network Compression](https://arxiv.org/pdf/1802.02271) .[J] arXiv preprint arXiv:1802.02271.
- Md Zahangir Alom, Adam T Moody, Naoya Maruyama, Brian C Van Essen, Tarek M. Taha .[Effective Quantization Approaches for Recurrent Neural Networks](https://arxiv.org/pdf/1802.02615) .[J] arXiv preprint arXiv:1802.02615.
- 【Distillation】[Efficient Neural Architecture Search via Parameters Sharing](https://arxiv.org/pdf/1802.03268), Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, Jeff Dean, 2018
- 【Pruning】Yihui He, Song Han .[ADC: Automated Deep Compression and Acceleration with Reinforcement Learning](https://arxiv.org/pdf/1802.03494) .[J] arXiv preprint arXiv:1802.03494.
【code:[Tencent/PocketFlow#channel-pruning](https://github.com/Tencent/PocketFlow#channel-pruning);[mit-han-lab/amc-release](https://github.com/mit-han-lab/amc-release);[mit-han-lab/amc-compressed-models](https://github.com/mit-han-lab/amc-compressed-models)】
- 【Quantization】Yukun Ding, Jinglan Liu, Yiyu Shi .[On the Universal Approximability of Quantized ReLU Neural Networks](https://arxiv.org/pdf/1802.03646) .[J] arXiv preprint arXiv:1802.03646
- 【Structure】Qin Z, Zhang Z, Chen X, et al. [Fd-mobilenet: Improved mobilenet with a fast downsampling strategy](https://arxiv.org/pdf/1802.03750)[C]//2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018: 1363-1367.
- Jeff Zhang, Kartheek Rangineni, Zahra Ghodsi, Siddharth Garg .[ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Neural Network Accelerators](https://arxiv.org/pdf/1802.03806) .[J] arXiv preprint arXiv:1802.03806.
- Jeff Zhang, Tianyu Gu, Kanad Basu, Siddharth Garg .[Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator](https://arxiv.org/pdf/1802.04657) .[J] arXiv preprint arXiv:1802.04657.
- 【Quantization】Wu S, Li G, Chen F, et al.[Training and Inference with Integers in Deep Neural Networks](https://arxiv.org/pdf/1802.04680) .[J] arXiv preprint arXiv:1802.04680
【code:[boluoweifenda/WAGE](https://github.com/boluoweifenda/WAGE)】
- Luiz M Franca-Neto .[Field-Programmable Deep Neural Network (DNN) Learning and Inference accelerator: a concept](https://arxiv.org/pdf/1802.04899) .[J] arXiv preprint arXiv:1802.04899.
- 【other】Jia Z, Lin S, Qi C R, et al. [Exploring hidden dimensions in parallelizing convolutional neural networks](https://arxiv.org/pdf/1802.04924)[J]. arXiv preprint arXiv:1802.04924, 2018.
- 【other】Jangho Kim, SeoungUK Park, Nojun Kwak .[Paraphrasing Complex Network: Network Compression via Factor Transfer](https://arxiv.org/pdf/1802.04977) .[J] arXiv preprint arXiv:1802.04977.
- Qi Liu, Tao Liu, Zihao Liu, Yanzhi Wang, Yier Jin, Wujie Wen .[Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks](https://arxiv.org/pdf/1802.05193) .[J] arXiv preprint arXiv:1802.05193.
- 【other】Sanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang .[Stronger generalization bounds for deep nets via a compression approach](https://arxiv.org/pdf/1802.05296) .[J] arXiv preprint arXiv:1802.05296.
- 【Quantization】Antonio Polino, Razvan Pascanu, Dan Alistarh .[Model compression via distillation and quantization](https://arxiv.org/pdf/1802.05668) .[J] arXiv preprint arXiv:1802.05668.
【code:[antspy/quantized_distillation](https://github.com/antspy/quantized_distillation)】
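For context on the distillation half of the entry above, a minimal sketch of the standard soft-target distillation loss it builds on; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.9):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # rescale soft-target term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```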
- Tianyun Zhang, Shaokai Ye, Yipeng Zhang, Yanzhi Wang, Makan Fardad .[Systematic Weight Pruning of DNNs using Alternating Direction Method of Multipliers](https://arxiv.org/pdf/1802.05747) .[J] arXiv preprint arXiv:1802.05747.
- 【other】Arora S, Cohen N, Hazan E. [On the optimization of deep networks: Implicit acceleration by overparameterization](https://arxiv.org/pdf/1802.06509)[J]. arXiv preprint arXiv:1802.06509, 2018.
- Jiangyan Yi, Jianhua Tao, Zhengqi Wen, Bin Liu .[Distilling Knowledge Using Parallel Data for Far-field Speech Recognition](https://arxiv.org/pdf/1802.06941) .[J] arXiv preprint arXiv:1802.06941.
- Matthew Sotoudeh, Sara S. Baghsorkhi .[DeepThin: A Self-Compressing Library for Deep Neural Networks](https://arxiv.org/pdf/1802.06944) .[J] arXiv preprint arXiv:1802.06944.
- Ming Yu, Zhaoran Wang, Varun Gupta, Mladen Kolar .[Recovery of simultaneous low rank and two-way sparse coefficient matrices, a nonconvex approach](https://arxiv.org/pdf/1802.06967) .[J] arXiv preprint arXiv:1802.06967.
- Babajide O. Ayinde, Jacek M. Zurada .[Building Efficient ConvNets using Redundant Feature Pruning](https://arxiv.org/pdf/1802.07653) .[J] arXiv preprint arXiv:1802.07653.
- 【Binarization】McDonnell M D. [Training wide residual networks for deployment using a single bit for each weight](https://arxiv.org/pdf/1802.08530)[J]. arXiv preprint arXiv:1802.08530, 2018.
【code:[szagoruyko/binary-wide-resnet](https://github.com/szagoruyko/binary-wide-resnet)】
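In the spirit of the one-bit-per-weight entry above, an illustrative sketch (not the paper's exact scheme, which fixes the scale per layer) of weight binarization with a straight-through estimator; the mean-absolute-value scale is one common choice:

```python
import torch

def binarize_weight(w: torch.Tensor) -> torch.Tensor:
    """Forward: scaled sign(w). Backward: identity, so the float copy keeps learning."""
    scale = w.abs().mean()          # one common per-layer scale choice
    return w + (scale * torch.sign(w) - w).detach()
```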
- 【Quantization】Lu Hou, James T. Kwok .[Loss-aware Weight Quantization of Deep Networks](https://arxiv.org/pdf/1802.08635) .[J] arXiv preprint arXiv:1802.08635.
- 【Decomposition】Wenqi Wang, Yifan Sun, Brian Eriksson, Wenlin Wang, Vaneet Aggarwal .[Wide Compression: Tensor Ring Nets](https://arxiv.org/pdf/1802.09052) .[J] arXiv preprint arXiv:1802.09052.
- Jinglan Liu, Jiaxin Zhang, Yukun Ding, Xiaowei Xu, Meng Jiang, Yiyu Shi .[PBGen: Partial Binarization of Deconvolution-Based Generators for Edge Intelligence](https://arxiv.org/pdf/1802.09153) .[J] arXiv preprint arXiv:1802.09153.
- 【other】Bin Dai, Chen Zhu, David Wipf .[Compressing Neural Networks using the Variational Information Bottleneck](https://arxiv.org/pdf/1802.10399) .[J] arXiv preprint arXiv:1802.10399.
- Andros Tjandra, Sakriani Sakti, Satoshi Nakamura .[Tensor Decomposition for Compressing Recurrent Neural Network](https://arxiv.org/pdf/1802.10410) .[J] arXiv preprint arXiv:1802.10410.
- [Learning Sparse Structured Ensembles with SG-MCMC and Network Pruning](https://arxiv.org/pdf/1803.00184) .[J] arXiv preprint arXiv:1803.00184.
- 【Quantization】[Deep Neural Network Compression with Single and Multiple Level Quantization](https://arxiv.org/pdf/1803.03289) .[J] arXiv preprint arXiv:1803.03289.
- 【Pruning】Jonathan Frankle, Michael Carbin .[The Lottery Ticket Hypothesis: Training Pruned Neural Networks](https://arxiv.org/pdf/1803.03635) .[J] arXiv preprint arXiv:1803.03635.
【code:[google-research/lottery-ticket-hypothesis](https://github.com/google-research/lottery-ticket-hypothesis)】
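A hedged sketch of the iterative magnitude pruning loop behind the lottery-ticket entry above; `train` stands in for the user's own masked training routine, and `rounds`/`prune_frac` are illustrative:

```python
import copy
import torch
import torch.nn as nn

def iterative_magnitude_pruning(model: nn.Module, train, rounds: int = 5,
                                prune_frac: float = 0.2) -> dict:
    init_state = copy.deepcopy(model.state_dict())            # remember theta_0
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train(model, masks)                                   # train with masks applied
        for n, p in model.named_parameters():
            alive = p.detach()[masks[n].bool()].abs()
            if alive.numel() == 0:
                continue
            k = max(1, int(prune_frac * alive.numel()))
            thresh = alive.kthvalue(k).values                 # k-th smallest magnitude
            masks[n] *= (p.detach().abs() > thresh).float()   # drop the smallest weights
        model.load_state_dict(init_state)                     # rewind survivors to theta_0
    return masks
```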
- 【Distillation】[Interpreting Deep Classifier by Visual Distillation of Dark Knowledge](https://arxiv.org/pdf/1803.04042) .[J] arXiv preprint arXiv:1803.04042.
- [FeTa: A DCA Pruning Algorithm with Generalization Error Guarantees](https://arxiv.org/pdf/1803.04239) .[J] arXiv preprint arXiv:1803.04239.
- 【Distillation】[Multimodal Recurrent Neural Networks with Information Transfer Layers for Indoor Scene Labeling](https://arxiv.org/pdf/1803.04687.pdf), Abrar H. Abdulnabi, Bing Shuai, Zhen Zuo, Lap-Pui Chau, Gang Wang, 2018
- 【Quantization】[Quantization of Fully Convolutional Networks for Accurate Biomedical Image Segmentation](https://arxiv.org/pdf/1803.04907) .[J] arXiv preprint arXiv:1803.04907.
- 【Distillation】[Defensive Collaborative Multi-task Training - Defending against Adversarial Attack towards Deep Neural Networks](https://arxiv.org/pdf/1803.05123), Derek Wang, Chaoran Li, Sheng Wen, Yang Xiang, Wanlei Zhou, Surya Nepal, 2018
- Dong Wang, Lei Zhou, Xueni Zhang, Xiao Bai, Jun Zhou .[Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression](https://arxiv.org/pdf/1803.05729) .[J] arXiv preprint arXiv:1803.05729.
- 【Distillation】[Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples](https://arxiv.org/pdf/1803.05787), Zihao Liu, Qi Liu, Tao Liu, Yanzhi Wang, Wujie Wen, 2018
- 【Distillation】[Deep Co-Training for Semi-Supervised Image Recognition](https://arxiv.org/pdf/1803.05984), Siyuan Qiao, Wei Shen, Zhishuai Zhang, Bo Wang, Alan Yuille, 2018
- 【Train】Tang H, Gan S, Zhang C, et al. [Communication compression for decentralized training](https://arxiv.org/abs/1803.06443)[C]//Advances in Neural Information Processing Systems. 2018: 7652-7662.
- Shuo Wang, Zhe Li, Caiwen Ding, Bo Yuan, Yanzhi Wang, Qinru Qiu, Yun Liang .[C-LSTM: Enabling Efficient LSTM using Structured Compression Techniques on FPGAs](https://arxiv.org/pdf/1803.06305) .[J] arXiv preprint arXiv:1803.06305.
- Qing Tian, Tal Arbel, James J. Clark .[Fisher Pruning of Deep Nets for Facial Trait Classification](https://arxiv.org/pdf/1803.08134) .[J] arXiv preprint arXiv:1803.08134.
- Tao Sheng, Chen Feng, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, Mickey Aleksic .[A Quantization-Friendly Separable Convolution for MobileNets](https://arxiv.org/pdf/1803.08607) .[J] arXiv preprint arXiv:1803.08607.
- Maksym Kholiavchenko .[Iterative Low-Rank Approximation for CNN Compression](https://arxiv.org/pdf/1803.08995) .[J] arXiv preprint arXiv:1803.08995.
- 【Distillation】Zheng Hui, Xiumei Wang, Xinbo Gao .[Fast and Accurate Single Image Super-Resolution via Information Distillation Network](https://arxiv.org/pdf/1803.09454) .[J] arXiv preprint arXiv:1803.09454.
- Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, Jin Young Choi .[Context-aware Deep Feature Compression for High-speed Visual Tracking](https://arxiv.org/pdf/1803.10537) .[J] arXiv preprint arXiv:1803.10537.
- 【Structure】Gholami A, Kwon K, Wu B, et al. [Squeezenext: Hardware-aware neural network design](https://arxiv.org/pdf/1803.10615.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018: 1638-1647.
- Vasileios Belagiannis, Azade Farshad, Fabio Galasso .[Adversarial Network Compression](https://arxiv.org/pdf/1803.10750) .[J] arXiv preprint arXiv:1803.10750.
- Ameya Prabhu, Vishal Batchu, Sri Aurobindo Munagala, Rohit Gajawada, Anoop Namboodiri .[Distribution-Aware Binarization of Neural Networks for Sketch Recognition](https://arxiv.org/pdf/1804.02941) .[J] arXiv preprint arXiv:1804.02941.
- 【Distillation】[Large scale distributed neural network training through online distillation](https://arxiv.org/abs/1804.03235), Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton, 2018
- 【Pruning】Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, Yanzhi Wang .[A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers](https://arxiv.org/pdf/1804.03294) .[J] arXiv preprint arXiv:1804.03294.
【code:[KaiqiZhang/admm-pruning](https://github.com/KaiqiZhang/admm-pruning)】
- Guanglu Song, Yu Liu, Ming Jiang, Yujie Wang, Junjie Yan, Biao Leng .[Beyond Trade-off: Accelerate FCN-based Face Detector with Higher Accuracy](https://arxiv.org/pdf/1804.05197) .[J] arXiv preprint arXiv:1804.05197.
- Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus .[Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds](https://arxiv.org/pdf/1804.05345) .[J] arXiv preprint arXiv:1804.05345.
- Wenda Zhou, Victor Veitch, Morgane Austern, Ryan P. Adams, Peter Orbanz .[Compressibility and Generalization in Large-Scale Deep Learning](https://arxiv.org/pdf/1804.05862) .[J] arXiv preprint arXiv:1804.05862.
- 【Structure】Xie G, Wang J, Zhang T, et al. [Interleaved structured sparse convolutional neural networks](https://arxiv.org/abs/1804.06202)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8847-8856.
- Xu J, Nie Y, Wang P, et al. [Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving](https://arxiv.org/pdf/1804.06332)[C]//2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019: 2379-2384.
- 【Quantization】Eunhyeok Park, Sungjoo Yoo, Peter Vajda .[Value-aware Quantization for Training and Inference of Neural Networks](https://arxiv.org/pdf/1804.07802) .[J] arXiv preprint arXiv:1804.07802.
- Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, Jiawei Han .[Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling](https://arxiv.org/pdf/1804.07827) .[J] arXiv preprint arXiv:1804.07827.
- Huan Wang, Qiming Zhang, Yuehai Wang, Roland Hu .[Structured Deep Neural Network Pruning by Varying Regularization Parameters](https://arxiv.org/pdf/1804.09461) .[J] arXiv preprint arXiv:1804.09461.
- Takashi Shinozaki .[Competitive Learning Enriches Learning Representation and Accelerates the Fine-tuning of CNNs](https://arxiv.org/pdf/1804.09859) .[J] arXiv preprint arXiv:1804.09859.
- Hyeong-Ju Kang .[Accelerator-Aware Pruning for Convolutional Neural Networks](https://arxiv.org/pdf/1804.09862) .[J] arXiv preprint arXiv:1804.09862.
- Chenrui Zhang, Yuxin Peng .[Better and Faster: Knowledge Transfer from Multiple Self-supervised Learning Tasks via Graph Distillation for Video Classification](https://arxiv.org/pdf/1804.10069) .[J] arXiv preprint arXiv:1804.10069.
- Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M. Bronstein, Avi Mendelson .[UNIQ: Uniform Noise Injection for the Quantization of Neural Networks](https://arxiv.org/pdf/1804.10969) .[J] arXiv preprint arXiv:1804.10969.
- Xuemeng Song, Fuli Feng, Xianjing Han, Xin Yang, Wei Liu, Liqiang Nie .[Neural Compatibility Modeling with Attentive Knowledge Distillation](https://arxiv.org/pdf/1805.00313) .[J] arXiv preprint arXiv:1805.00313.
- Baohua Sun, Lin Yang, Patrick Dong, Wenhan Zhang, Jason Dong, Charles Young .[Ultra Power-Efficient CNN Domain Specific Accelerator with 9.3TOPS/Watt for Mobile and Embedded Applications](https://arxiv.org/pdf/1805.00361) .[J] arXiv preprint arXiv:1805.00361.
- Biao Zhang, Deyi Xiong, Jinsong Su .[Accelerating Neural Transformer via an Average Attention Network](https://arxiv.org/pdf/1805.00631) .[J] arXiv preprint arXiv:1805.00631.
- Brian Bartoldson, Adrian Barbu, Gordon Erlebacher .[Enhancing the Regularization Effect of Weight Pruning in Artificial Neural Networks](https://arxiv.org/pdf/1805.01930) .[J] arXiv preprint arXiv:1805.01930.
- 【Quantization】Yi Wei, Xinyu Pan, Hongwei Qin, Wanli Ouyang, Junjie Yan .[Quantization Mimic: Towards Very Tiny CNN for Object Detection](https://arxiv.org/pdf/1805.02152) .[J] arXiv preprint arXiv:1805.02152.
- Fuqiang Liu, C. Liu .[Towards Accurate and High-Speed Spiking Neuromorphic Systems with Data Quantization-Aware Deep Networks](https://arxiv.org/pdf/1805.03054) .[J] arXiv preprint arXiv:1805.03054.
- 【Distillation】Dan Xu, Wanli Ouyang, Xiaogang Wang, Nicu Sebe .[PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing](https://arxiv.org/pdf/1805.04409) .[J] arXiv preprint arXiv:1805.04409.
- 【Distillation】[Born Again Neural Networks](https://arxiv.org/abs/1805.04770), Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar, 2018
- 【Distillation】Byeongho Heo, Minsik Lee, Sangdoo Yun, Jin Young Choi .[Knowledge Distillation with Adversarial Samples Supporting Decision Boundary](https://arxiv.org/pdf/1805.05532) .[J] arXiv preprint arXiv:1805.05532.
- Chenglin Yang, Lingxi Xie, Siyuan Qiao, Alan Yuille .[Knowledge Distillation in Generations: More Tolerant Teachers Educate Better Students](https://arxiv.org/pdf/1805.05551) .[J] arXiv preprint arXiv:1805.05551.
- 【Quantization】Choi J, Wang Z, Venkataramani S, et al. [Pact: Parameterized clipping activation for quantized neural networks](https://arxiv.org/pdf/1805.06085)[J]. arXiv preprint arXiv:1805.06085, 2018.
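A rough sketch of the clipping idea in the PACT entry above: activations are clipped to a learnable bound `alpha` (written so the gradient also reaches `alpha`) and then uniformly quantized with a straight-through estimator; `num_bits` and `init_alpha` are illustrative:

```python
import torch
import torch.nn as nn

class PACTActivation(nn.Module):
    def __init__(self, num_bits: int = 4, init_alpha: float = 6.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(init_alpha))
        self.levels = 2 ** num_bits - 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to clamp(x, 0, alpha), but differentiable in alpha as well.
        y = 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)
        scale = self.alpha / self.levels
        q = torch.round(y / scale) * scale      # uniform quantization on [0, alpha]
        return y + (q - y).detach()             # straight-through rounding
```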
- Aupendu Kar, Sri Phani Krishna Karri, Nirmalya Ghosh, Ramanathan Sethuraman, Debdoot Sheet .[Fully Convolutional Model for Variable Bit Length and Lossy High Density Compression of Mammograms](https://arxiv.org/pdf/1805.06909) .[J] arXiv preprint arXiv:1805.06909.
- Silvia L. Pintea, Yue Liu, Jan C. van Gemert .[Recurrent knowledge distillation](https://arxiv.org/pdf/1805.07170) .[J] arXiv preprint arXiv:1805.07170.
- Thorsten Laude, Yannick Richter, Jörn Ostermann .[Neural Network Compression using Transform Coding and Clustering](https://arxiv.org/pdf/1805.07258) .[J] arXiv preprint arXiv:1805.07258.
- Yoojin Choi, Mostafa El-Khamy, Jungwon Lee .[Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints](https://arxiv.org/pdf/1805.08303) .[J] arXiv preprint arXiv:1805.08303.
- Panagiotis G. Mousouliotis, Loukas P. Petrou .[SqueezeJet: High-level Synthesis Accelerator Design for Deep Convolutional Neural Networks](https://arxiv.org/pdf/1805.08695) .[J] arXiv preprint arXiv:1805.08695.
- 【Quantization】Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek .[Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication](https://arxiv.org/pdf/1805.08768) .[J] arXiv preprint arXiv:1805.08768.
- 【Pruning】Jian-Hao Luo, Jianxin Wu .[AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference](https://arxiv.org/pdf/1805.08941) .[J] arXiv preprint arXiv:1805.08941.
- Yi Yang, Andy Chen, Xiaoming Chen, Jiang Ji, Zhenyang Chen, Yan Dai .[Deploy Large-Scale Deep Neural Networks in Resource Constrained IoT Devices with Local Quantization Region](https://arxiv.org/pdf/1805.09473) .[J] arXiv preprint arXiv:1805.09473.
- Jiahao Su, Jingling Li, Bobby Bhattacharjee, Furong Huang .[Tensorized Spectrum Preserving Compression for Neural Networks](https://arxiv.org/pdf/1805.10352) .[J] arXiv preprint arXiv:1805.10352.
- Josh Fromm, Shwetak Patel, Matthai Philipose .[Heterogeneous Bitwidth Binarization in Convolutional Neural Networks](https://arxiv.org/pdf/1805.10368) .[J] arXiv preprint arXiv:1805.10368.
- 【other】Zhou P, Feng J. [Understanding generalization and optimization performance of deep CNNs](https://arxiv.org/pdf/1805.10767)[J]. arXiv preprint arXiv:1805.10767, 2018.
- Krzysztof Wróbel, Marcin Pietroń, Maciej Wielgosz, Michał Karwatowski, Kazimierz Wiatr .[Convolutional neural network compression for natural language processing](https://arxiv.org/pdf/1805.10796) .[J] arXiv preprint arXiv:1805.10796.
- François Plesse, Alexandru Ginsca, Bertrand Delezoide, Françoise Prêteux .[Visual Relationship Detection Based on Guided Proposals and Semantic Knowledge Distillation](https://arxiv.org/pdf/1805.10802) .[J] arXiv preprint arXiv:1805.10802.
- 【Train】Banner R, Hubara I, Hoffer E, et al. [Scalable methods for 8-bit training of neural networks](https://arxiv.org/abs/1805.11046)[C]//Advances in Neural Information Processing Systems. 2018: 5145-5153.
- Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, Ting Liu .[Distilling Knowledge for Search-based Structured Prediction](https://arxiv.org/pdf/1805.11224) .[J] arXiv preprint arXiv:1805.11224.
- Dongsoo Lee, Byeongwook Kim .[Retraining-Based Iterative Weight Quantization for Deep Neural Networks](https://arxiv.org/pdf/1805.11233) .[J] arXiv preprint arXiv:1805.11233.
- Yiming Hu, Siyang Sun, Jianquan Li, Xingang Wang, Qingyi Gu .[A novel channel pruning method for deep neural network compression](https://arxiv.org/pdf/1805.11394) .[J] arXiv preprint arXiv:1805.11394.
- Xiaoliang Dai, Hongxu Yin, Niraj K. Jha .[Grow and Prune Compact, Fast, and Accurate LSTMs](https://arxiv.org/pdf/1805.11797) .[J] arXiv preprint arXiv:1805.11797.
- Lazar Supic, Rawan Naous, Ranko Sredojevic, Aleksandra Faust, Vladimir Stojanovic .[MPDCompress - Matrix Permutation Decomposition Algorithm for Deep Neural Network Compression](https://arxiv.org/pdf/1805.12085) .[J] arXiv preprint arXiv:1805.12085.
- 【Pruning】Weizhe Hua, Christopher De Sa, Zhiru Zhang, G. Edward Suh .[Channel Gating Neural Networks](https://arxiv.org/abs/1805.12549) .[J] arXiv preprint arXiv:1805.12549
- Jie Zhang, Xiaolong Wang, Dawei Li, Yalin Wang .[Dynamically Hierarchy Revolution: DirNet for Compressing Recurrent Neural Network on Mobile Devices](https://arxiv.org/pdf/1806.01248) .[J] arXiv preprint arXiv:1806.01248.
- Jianzhong Sheng, Chuanbo Chen, Chenchen Fu, Chun Jason Xue .[EasyConvPooling: Random Pooling with Easy Convolution for Accelerating Training and Testing](https://arxiv.org/pdf/1806.01729) .[J] arXiv preprint arXiv:1806.01729.
- Yang H, Zhu Y, Liu J. [Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking](https://arxiv.org/pdf/1806.04321)[J]. arXiv preprint arXiv:1806.04321, 2018.
- 【Distillation】Xu Lan, Xiatian Zhu, Shaogang Gong .[Knowledge Distillation by On-the-Fly Native Ensemble](https://arxiv.org/pdf/1806.04606) .[J] arXiv preprint arXiv:1806.04606.
- Yijun Bian, Yijun Wang, Yaqiang Yao, Huanhuan Chen .[Ensemble Pruning based on Objection Maximization with a General Distributed Framework](https://arxiv.org/pdf/1806.04899) .[J] arXiv preprint arXiv:1806.04899.
- Huiyuan Zhuo, Xuelin Qian, Yanwei Fu, Heng Yang, Xiangyang Xue .[SCSP: Spectral Clustering Filter Pruning with Soft Self-adaption Manners](https://arxiv.org/pdf/1806.05320) .[J] arXiv preprint arXiv:1806.05320.
- Yibo Yang, Nicholas Ruozzi, Vibhav Gogate .[Scalable Neural Network Compression and Pruning Using Hard Clustering and L1 Regularization](https://arxiv.org/pdf/1806.05355) .[J] arXiv preprint arXiv:1806.05355.
- Kohei Yamamoto, Kurato Maeno .[PCAS: Pruning Channels with Attention Statistics](https://arxiv.org/pdf/1806.05382) .[J] arXiv preprint arXiv:1806.05382.
- Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, Tajana Rosing .[RAPIDNN: In-Memory Deep Neural Network Acceleration Framework](https://arxiv.org/pdf/1806.05794) .[J] arXiv preprint arXiv:1806.05794.
- 【Structure】Xingyu Liu, Jeff Pool, Song Han, William J. Dally .[Efficient Sparse-Winograd Convolutional Neural Networks](https://arxiv.org/pdf/1802.06367) .[J] arXiv preprint arXiv:1802.06367
- 【Structure】Sun K, Li M, Liu D, et al. [Igcv3: Interleaved low-rank group convolutions for efficient deep neural networks](https://arxiv.org/abs/1806.00178)[J]. arXiv preprint arXiv:1806.00178, 2018.
【code:[homles11/IGCV3](https://github.com/homles11/IGCV3)】
- Alireza Aghasi, Afshin Abdi, Justin Romberg .[Fast Convex Pruning of Deep Neural Networks](https://arxiv.org/pdf/1806.06457) .[J] arXiv preprint arXiv:1806.06457.
- Maximilian Golub, Guy Lemieux, Mieszko Lis .[DropBack: Continuous Pruning During Training](https://arxiv.org/pdf/1806.06949) .[J] arXiv preprint arXiv:1806.06949.
- Zhu S, Dong X, Su H. [Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?](https://arxiv.org/pdf/1806.07550.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 4923-4932.
【code:[XinDongol/BENN-PyTorch](https://github.com/XinDongol/BENN-PyTorch)】
- 【Quantization】Krishnamoorthi R. [Quantizing deep convolutional networks for efficient inference: A whitepaper](https://arxiv.org/pdf/1806.08342)[J]. arXiv preprint arXiv:1806.08342, 2018.
- 【Quantization】Junru Wu, Yue Wang, Zhenyu Wu, Zhangyang Wang, Ashok Veeraraghavan, Yingyan Lin .[Deep $k$-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions](https://arxiv.org/pdf/1806.09228) .[J] arXiv preprint arXiv:1806.09228.
- Behzad Salami, Osman Unsal, Adrian Cristal .[On the Resilience of RTL NN Accelerators: Fault Characterization and Mitigation](https://arxiv.org/pdf/1806.09679) .[J] arXiv preprint arXiv:1806.09679.
- 【other】Wang K C, Vicol P, Lucas J, et al. [Adversarial distillation of bayesian neural network posteriors](https://arxiv.org/pdf/1806.10317)[J]. arXiv preprint arXiv:1806.10317, 2018.
- 【Quantization】Julian Faraone, Nicholas Fraser, Michaela Blott, Philip H.W. Leong .[SYQ: Learning Symmetric Quantization For Efficient Deep Neural Networks](https://arxiv.org/pdf/1807.00301) .[J] arXiv preprint arXiv:1807.00301.
【code:[julianfaraone/SYQ](https://github.com/julianfaraone/SYQ)】
- Amogh Agrawal, Akhilesh Jaiswal, Bing Han, Gopalakrishnan Srinivasan, Kaushik Roy .[Xcel-RAM: Accelerating Binary Neural Networks in High-Throughput SRAM Compute Arrays](https://arxiv.org/pdf/1807.00343) .[J] arXiv preprint arXiv:1807.00343.
- Jeff Zhang, Siddharth Garg .[FATE: Fast and Accurate Timing Error Prediction Framework for Low Power DNN Accelerator Design](https://arxiv.org/pdf/1807.00480) .[J] arXiv preprint arXiv:1807.00480.
- Ekta Gujral, Ravdeep Pasricha, Tianxiong Yang, Evangelos E. Papalexakis .[OCTen: Online Compression-based Tensor Decomposition](https://arxiv.org/pdf/1807.01350) .[J] arXiv preprint arXiv:1807.01350.
- Hamed Hakkak .[Auto Deep Compression by Reinforcement Learning Based Actor-Critic Structure](https://arxiv.org/pdf/1807.02886) .[J] arXiv preprint arXiv:1807.02886.
- Salaheddin Alakkari, John Dingliana .[An Acceleration Scheme for Memory Limited, Streaming PCA](https://arxiv.org/pdf/1807.06530) .[J] arXiv preprint arXiv:1807.06530.
- Seung Hyun Lee, Dae Ha Kim, Byung Cheol Song .[Self-supervised Knowledge Distillation Using Singular Value Decomposition](https://arxiv.org/pdf/1807.06819) .[J] arXiv preprint arXiv:1807.06819.
- Grant P. Strimel, Kanthashree Mysore Sathyendra, Stanislav Peshterliev .[Statistical Model Compression for Small-Footprint Natural Language Understanding](https://arxiv.org/pdf/1807.07520) .[J] arXiv preprint arXiv:1807.07520.
- 【Binarization】He Z, Gong B, Fan D. [Optimize deep convolutional neural network with ternarized weights and high accuracy](https://arxiv.org/pdf/1807.07948)[C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 913-921.
- Qianru Zhang, Meng Zhang, Tinghuan Chen, Zhifei Sun, Yuzhe Ma, Bei Yu .[Recent Advances in Convolutional Neural Network Acceleration](https://arxiv.org/pdf/1807.08596) .[J] arXiv preprint arXiv:1807.08596.
- Armin Mehrabian, Yousra Al-Kabani, Volker J Sorger, Tarek El-Ghazawi .[PCNNA: A Photonic Convolutional Neural Network Accelerator](https://arxiv.org/pdf/1807.08792) .[J] arXiv preprint arXiv:1807.08792.
- 【Pruning】Abhimanyu Dubey, Moitreya Chatterjee, Narendra Ahuja .[Coreset-Based Neural Network Compression](https://arxiv.org/pdf/1807.09810) .[J] arXiv preprint arXiv:1807.09810.
【code:[metro-smiles/CNN_Compression](https://github.com/metro-smiles/CNN_Compression)】
- 【Quantization】Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, Gang Hua .[LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks](https://arxiv.org/pdf/1807.10029) .[J] arXiv preprint arXiv:1807.10029.
【code:[microsoft/LQ-Nets](https://github.com/microsoft/LQ-Nets)】
- Hongyu Guo, Yongyi Mao, Richong Zhang .[Aggregated Learning: A Vector Quantization Approach to Learning with Neural Networks](https://arxiv.org/pdf/1807.10251) .[J] arXiv preprint arXiv:1807.10251.
- Xavier Suau, Luca Zappella, Vinay Palakkode, Nicholas Apostoloff .[Principal Filter Analysis for Guided Network Compression](https://arxiv.org/pdf/1807.10585) .[J] arXiv preprint arXiv:1807.10585.
- Jin Hee Kim, Brett Grady, Ruolong Lian, John Brothers, Jason H. Anderson .[FPGA-Based CNN Inference Accelerator Synthesized from Multi-Threaded C Software](https://arxiv.org/pdf/1807.10695) .[J] arXiv preprint arXiv:1807.10695.
- Ling Liang, Lei Deng, Yueling Zeng, Xing Hu, Yu Ji, Xin Ma, Guoqi Li, Yuan Xie .[Crossbar-aware neural network pruning](https://arxiv.org/pdf/1807.10816) .[J] arXiv preprint arXiv:1807.10816.
- Tianyun Zhang, Kaiqi Zhang, Shaokai Ye, Jiayu Li, Jian Tang, Wujie Wen, Xue Lin, Makan Fardad, Yanzhi Wang .[ADAM-ADMM: A Unified, Systematic Framework of Structured Weight Pruning for DNNs](https://arxiv.org/pdf/1807.11091) .[J] arXiv preprint arXiv:1807.11091.
- 【Structure】【Shufflenet V2】Ma N, Zhang X, Zheng H T, et al. [Shufflenet v2: Practical guidelines for efficient cnn architecture design](https://arxiv.org/abs/1807.11164)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 116-131.
- 【Structure】Chen Y, Kalantidis Y, Li J, et al. [Multi-fiber networks for video recognition](https://arxiv.org/abs/1807.11195)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 352-367.
【code:[cypw/PyTorch-MFNet](https://github.com/cypw/PyTorch-MFNet)】
- 【Low rank】Bo Peng, Wenming Tan, Zheyang Li, Shun Zhang, Di Xie, Shiliang Pu .[Extreme Network Compression via Filter Group Approximation](https://arxiv.org/pdf/1807.11254) .[J] arXiv preprint arXiv:1807.11254.
- 【Structure】Tan M, Chen B, Pang R, et al. [Mnasnet: Platform-aware neural architecture search for mobile](https://arxiv.org/pdf/1807.11626.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2820-2828.
【code:[tensorflow/tpu](https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet)】
- David M. Chan, Roshan Rao, Forrest Huang, John F. Canny .[t-SNE-CUDA: GPU-Accelerated t-SNE and its Applications to Modern Data](https://arxiv.org/pdf/1807.11824) .[J] arXiv preprint arXiv:1807.11824.
- Ini Oguntola, Subby Olubeko, Christopher Sweeney .[SlimNets: An Exploration of Deep Model Compression and Acceleration](https://arxiv.org/pdf/1808.00496) .[J] arXiv preprint arXiv:1808.00496.
- Zhanxuan Hu, Feiping Nie, Lai Tian, Rong Wang, Xuelong Li .[A Comprehensive Survey for Low Rank Regularization](https://arxiv.org/pdf/1808.04521) .[J] arXiv preprint arXiv:1808.04521.
- Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin .[Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks](https://arxiv.org/pdf/1808.05240) .[J] arXiv preprint arXiv:1808.05240.
- Denis A. Gudovskiy, Alec Hodgkinson, Luca Rigazio .[DNN Feature Map Compression using Learned Representation over GF(2)](https://arxiv.org/pdf/1808.05285) .[J] arXiv preprint arXiv:1808.05285.
- Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Youngjun Kwak, Jae-Joon Han, Changkyu Choi .[Joint Training of Low-Precision Neural Network with Quantization Interval Parameters](https://arxiv.org/pdf/1808.05779) .[J] arXiv preprint arXiv:1808.05779.
- 【Pruning】Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang .[Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks](https://arxiv.org/pdf/1808.06866) .[J] arXiv preprint arXiv:1808.06866. (see the soft-pruning sketch after this year's list)
【code:[he-y/soft-filter-pruning](https://github.com/he-y/soft-filter-pruning)】
- Yang He, Xuanyi Dong, Guoliang Kang, Yanwei Fu, Yi Yang .[Progressive Deep Neural Networks Acceleration via Soft Filter Pruning](https://arxiv.org/pdf/1808.07471) .[J] arXiv preprint arXiv:1808.07471.
- Ali Athar .[An Overview of Datatype Quantization Techniques for Convolutional Neural Networks](https://arxiv.org/pdf/1808.07530) .[J] arXiv preprint arXiv:1808.07530.
- Yichen Zhou, Zhengze Zhou, Giles Hooker .[Approximation Trees: Statistical Stability in Model Distillation](https://arxiv.org/pdf/1808.07573) .[J] arXiv preprint arXiv:1808.07573.
- Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, Tomoaki Nishimura .[Spectral-Pruning: Compressing deep neural network via spectral analysis](https://arxiv.org/pdf/1808.08558) .[J] arXiv preprint arXiv:1808.08558.
- Junran Peng, Lingxi Xie, Zhaoxiang Zhang, Tieniu Tan, Jingdong Wang .[Accelerating Deep Neural Networks with Spatial Bottleneck Modules](https://arxiv.org/pdf/1809.02601) .[J] arXiv preprint arXiv:1809.02601.
- Abdallah Moussawi, Kamal Haddad, Anthony Chahine .[An FPGA-Accelerated Design for Deep Learning Pedestrian Detection in Self-Driving Vehicles](https://arxiv.org/pdf/1809.05879) .[J] arXiv preprint arXiv:1809.05879.
- 【Distillation】[Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection](https://arxiv.org/abs/1809.05884), Yongcheng Liu, Lu Sheng, Jing Shao, Junjie Yan, Shiming Xiang, Chunhong Pan, 2018
- Jiaxi Tang, Ke Wang .[Ranking Distillation: Learning Compact Ranking Models With High Performance for Recommender System](https://arxiv.org/pdf/1809.07428) .[J] arXiv preprint arXiv:1809.07428.
- Matthias Springer .[SoaAlloc: Accelerating Single-Method Multiple-Objects Applications on GPUs](https://arxiv.org/pdf/1809.07444) .[J] arXiv preprint arXiv:1809.07444.
- 【Structure】Huasong Zhong, Xianggen Liu, Yihui He, Yuchun Ma .[Shift-based Primitives for Efficient Convolutional Neural Networks](https://arxiv.org/pdf/1809.08458) .[J] arXiv preprint arXiv:1809.08458.
- Jeffrey L Mckinstry, Davis R. Barch, Deepika Bablani, Michael V. Debole, Steven K. Esser, Jeffrey A. Kusnitz, John V. Arthur, Dharmendra S. Modha .[Low Precision Policy Distillation with Application to Low-Power, Real-time Sensation-Cognition-Action Loop with Neuromorphic Computing](https://arxiv.org/pdf/1809.09260) .[J] arXiv preprint arXiv:1809.09260.
- Raphael Tang, Jimmy Lin .[Adaptive Pruning of Neural Language Models for Mobile Devices](https://arxiv.org/pdf/1809.10282) .[J] arXiv preprint arXiv:1809.10282.
- 【other】Oyallon E, Belilovsky E, Zagoruyko S, et al. [Compressing the Input for CNNs with the First-Order Scattering Transform](https://arxiv.org/abs/1809.10200)[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 301-316.
- Jacob R. Gardner, Geoff Pleiss, David Bindel, Kilian Q. Weinberger, Andrew Gordon Wilson .[GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration](https://arxiv.org/pdf/1809.11165) .[J] arXiv preprint arXiv:1809.11165.
- Chaim Baskin, Natan Liss, Yoav Chai, Evgenii Zheltonozhskii, Eli Schwartz, Raja Giryes, Avi Mendelson, Alexander M. Bronstein .[NICE: Noise Injection and Clamping Estimation for Neural Network Quantization](https://arxiv.org/pdf/1810.00162) .[J] arXiv preprint arXiv:1810.00162.
- Simon Alford, Ryan Robinett, Lauren Milechin, Jeremy Kepner .[Pruned and Structurally Sparse Neural Networks](https://arxiv.org/pdf/1810.00299) .[J] arXiv preprint arXiv:1810.00299.
- Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato .[Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters](https://arxiv.org/pdf/1810.00440) .[J] arXiv preprint arXiv:1810.00440.
- Ting-Wu Chin, Cha Zhang, Diana Marculescu .[Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks](https://arxiv.org/pdf/1810.00518) .[J] arXiv preprint arXiv:1810.00518.
- 【Pruning】Liu L, Deng L, Hu X, et al. [Dynamic sparse graph for efficient deep learning](https://arxiv.org/pdf/1810.00859)[J]. arXiv preprint arXiv:1810.00859, 2018.
【code:[mtcrawshaw/dynamic-sparse-graph](https://github.com/mtcrawshaw/dynamic-sparse-graph)】
- Bai Y, Wang Y X, Liberty E. [Proxquant: Quantized neural networks via proximal operators](https://arxiv.org/pdf/1810.00861)[J]. arXiv preprint arXiv:1810.00861, 2018.
【code:[allenbai01/ProxQuant](https://github.com/allenbai01/ProxQuant)】
- Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, Max Welling .[Relaxed Quantization for Discretized Neural Networks](https://arxiv.org/pdf/1810.01875) .[J] arXiv preprint arXiv:1810.01875.
- Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia .[LIT: Block-wise Intermediate Representation Training for Model Compression](https://arxiv.org/pdf/1810.01937) .[J] arXiv preprint arXiv:1810.01937.
- 【Quantization】Fu C, Zhu S, Su H, et al. [Towards fast and energy-efficient binarized neural network inference on fpga](https://arxiv.org/pdf/1810.02068)[J]. arXiv preprint arXiv:1810.02068, 2018.
- Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré .[Learning Compressed Transforms with Low Displacement Rank](https://arxiv.org/pdf/1810.02309) .[J] arXiv preprint arXiv:1810.02309.
- 【Pruning】Namhoon Lee, Thalaiyasingam Ajanthan, Philip H. S. Torr .[SNIP: Single-shot Network Pruning based on Connection Sensitivity](https://arxiv.org/pdf/1810.02340) .[J] arXiv preprint arXiv:1810.02340. (see the saliency sketch after this year's list)
【code:[namhoonlee/snip-public](https://github.com/namhoonlee/snip-public)】
- Lukas Cavigelli, Luca Benini .[Extended Bit-Plane Compression for Convolutional Neural Network Accelerators](https://arxiv.org/pdf/1810.03979) .[J] arXiv preprint arXiv:1810.03979.
- Kyle D. Julian, Mykel J. Kochenderfer, Michael P. Owen .[Deep Neural Network Compression for Aircraft Collision Avoidance Systems](https://arxiv.org/pdf/1810.04240) .[J] arXiv preprint arXiv:1810.04240.
- Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle .[Pruning neural networks: is it time to nip it in the bud?](https://arxiv.org/pdf/1810.04622) .[J] arXiv preprint arXiv:1810.04622.
- 【Pruning】Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell .[Rethinking the Value of Network Pruning](https://arxiv.org/pdf/1810.05270) .[J] arXiv preprint arXiv:1810.05270.
【code:[Eric-mingjie/rethinking-network-pruning](https://github.com/Eric-mingjie/rethinking-network-pruning)】
- 【Pruning】Xitong Gao, Yiren Zhao, Lukasz Dudziak, Robert Mullins, Cheng-zhong Xu .[Dynamic Channel Pruning: Feature Boosting and Suppression](https://arxiv.org/pdf/1810.05331) .[J] arXiv preprint arXiv:1810.05331.
【code:[deep-fry/mayo](https://github.com/deep-fry/mayo)】
- Ke Song, Chun Yuan, Peng Gao, Yunxu Sun .[FPGA-based Acceleration System for Visual Tracking](https://arxiv.org/pdf/1810.05367) .[J] arXiv preprint arXiv:1810.05367.
- Jun Haeng Lee, Sangwon Ha, Saerom Choi, Won-Jo Lee, Seungwon Lee .[Quantization for Rapid Deployment of Deep Neural Networks](https://arxiv.org/pdf/1810.05488) .[J] arXiv preprint arXiv:1810.05488.
- Ron Banner, Yury Nahshan, Elad Hoffer, Daniel Soudry .[ACIQ: Analytical Clipping for Integer Quantization of neural networks](https://arxiv.org/pdf/1810.05723) .[J] arXiv preprint arXiv:1810.05723. (see the clipping sketch after this year's list)
- Weihao Gao, Chong Wang, Sewoong Oh .[Rate Distortion For Model Compression: From Theory To Practice](https://arxiv.org/pdf/1810.06401) .[J] arXiv preprint arXiv:1810.06401.
- 【Pruning】Zhuwei Qin, Fuxun Yu, Chenchen Liu, Liang Zhao, Xiang Chen .[Interpretable Convolutional Filter Pruning](https://arxiv.org/pdf/1810.07322) .[J] arXiv preprint arXiv:1810.07322.
- 【Pruning】Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang .[Progressive Weight Pruning of Deep Neural Networks using ADMM](https://arxiv.org/pdf/1810.07378) .[J] arXiv preprint arXiv:1810.07378.
- Artur Jordao, Fernando Yamada, William Robson Schwartz .[Pruning Deep Neural Networks using Partial Least Squares](https://arxiv.org/pdf/1810.07610) .[J] arXiv preprint arXiv:1810.07610.
- D. Babin, I. Mazurenko, D. Parkhomenko, A. Voloshko .[CNN inference acceleration using dictionary of centroids](https://arxiv.org/pdf/1810.08612) .[J] arXiv preprint arXiv:1810.08612.
- Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Yansong Feng, Jianbin Fang, Zheng Wang .[To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference](https://arxiv.org/pdf/1810.08899) .[J] arXiv preprint arXiv:1810.08899.
- Joris Roels, Jonas De Vylder, Jan Aelterman, Yvan Saeys, Wilfried Philips .[Convolutional Neural Network Pruning to Accelerate Membrane Segmentation in Electron Microscopy](https://arxiv.org/pdf/1810.09735) .[J] arXiv preprint arXiv:1810.09735.
- Hsin-Pai Cheng, Yuanjun Huang, Xuyang Guo, Yifei Huang, Feng Yan, Hai Li, Yiran Chen .[Differentiable Fine-grained Quantization for Deep Neural Network Compression](https://arxiv.org/pdf/1810.10351) .[J] arXiv preprint arXiv:1810.10351.
- Jack Turner, Elliot J. Crowley, Valentin Radu, José Cano, Amos Storkey, Michael O'Boyle .[HAKD: Hardware Aware Knowledge Distillation](https://arxiv.org/pdf/1810.10460) .[J] arXiv preprint arXiv:1810.10460.
- Nadezhda Chirkova, Ekaterina Lobacheva, Dmitry Vetrov .[Bayesian Compression for Natural Language Processing](https://arxiv.org/pdf/1810.10927) .[J] arXiv preprint arXiv:1810.10927.
- Amichai Painsky, Saharon Rosset .[Lossless (and Lossy) Compression of Random Forests](https://arxiv.org/pdf/1810.11197) .[J] arXiv preprint arXiv:1810.11197.
- 【Pruning】Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, Jinhui Zhu .[Discrimination-aware Channel Pruning for Deep Neural Networks](https://arxiv.org/pdf/1810.11809) .[J] arXiv preprint arXiv:1810.11809.
【code:[SCUT-AILab/DCP](https://github.com/SCUT-AILab/DCP)】
- 【other】Dongsoo Lee, Parichay Kapoor, Byeongwook Kim .[DeepTwist: Learning Model Compression via Occasional Weight Distortion](https://arxiv.org/pdf/1810.12823) .[J] arXiv preprint arXiv:1810.12823.
- 【Distillation】Akhilesh Gotmare, Nitish Shirish Keskar, Caiming Xiong, Richard Socher .[A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation](https://arxiv.org/pdf/1810.13243) .[J] arXiv preprint arXiv:1810.13243.
- Doyun Kim, Han Young Yim, Sanghyuck Ha, Changgwun Lee, Inyup Kang .[Convolutional Neural Network Quantization using Generalized Gamma Distribution](https://arxiv.org/pdf/1810.13329) .[J] arXiv preprint arXiv:1810.13329.
- 【Pruning】Yang He, Ping Liu, Ziwei Wang, Yi Yang .[Pruning Filter via Geometric Median for Deep Convolutional Neural Networks Acceleration](https://arxiv.org/pdf/1811.00250) .[J] arXiv preprint arXiv:1811.00250. (see the geometric-median sketch after this year's list)
【code:[he-y/filter-pruning-geometric-median](https://github.com/he-y/filter-pruning-geometric-median)】
- Xiaofan Xu, Mi Sun Park, Cormac Brick .[Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices](https://arxiv.org/pdf/1811.00482) .[J] arXiv preprint arXiv:1811.00482.
- Anish Acharya, Rahul Goel, Angeliki Metallinou, Inderjit Dhillon .[Online Embedding Compression for Text Classification using Low Rank Matrix Factorization](https://arxiv.org/pdf/1811.00641) .[J] arXiv preprint arXiv:1811.00641. (see the SVD sketch after this year's list)
- Ahmed T. Elthakeb, Prannoy Pilligundla, Amir Yazdanbakhsh, Sean Kinzer, Hadi Esmaeilzadeh .[ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks](https://arxiv.org/pdf/1811.01704) .[J] arXiv preprint arXiv:1811.01704.
- Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Jiaming Xie, Yun Liang, Sijia Liu, Xue Lin, Yanzhi Wang .[A Unified Framework of DNN Weight Pruning and Weight Clustering/Quantization Using ADMM](https://arxiv.org/pdf/1811.01907) .[J] arXiv preprint arXiv:1811.01907.
- Yulhwa Kim, Hyungjun Kim, Jae-Joon Kim .[Neural Network-Hardware Co-design for Scalable RRAM-based BNN Accelerators](https://arxiv.org/pdf/1811.02187) .[J] arXiv preprint arXiv:1811.02187.
- Zhuwei Qin, Fuxun Yu, ChenChen Liu, Xiang Chen .[Demystifying Neural Network Filter Pruning](https://arxiv.org/pdf/1811.02639) .[J] arXiv preprint arXiv:1811.02639.
- 【Distillation】[Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks](https://arxiv.org/abs/1811.02759), Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy, 2018
- 【Distillation】Fuxun Yu, Zhuwei Qin, Xiang Chen .[Distilling Critical Paths in Convolutional Neural Networks](https://arxiv.org/pdf/1811.02643) .[J] arXiv preprint arXiv:1811.02643.
- 【Distillation】[YASENN: Explaining Neural Networks via Partitioning Activation Sequences](https://arxiv.org/abs/1811.02783), Yaroslav Zharov, Denis Korzhenkov, Pavel Shvechikov, Alexander Tuzhilin, 2018
- Byeongho Heo, Minsik Lee, Sangdoo Yun, Jin Young Choi .[Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons](https://arxiv.org/pdf/1811.03233) .[J] arXiv preprint arXiv:1811.03233.
- 【Quantization】Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr .[GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training](https://arxiv.org/pdf/1811.03617) .[J] arXiv preprint arXiv:1811.03617.
- Ching-Yun Ko, Cong Chen, Yuke Zhang, Kim Batselier, Ngai Wong .[Deep Compression of Sum-Product Networks on Tensor Networks](https://arxiv.org/pdf/1811.03963) .[J] arXiv preprint arXiv:1811.03963.
- Raden Mu'az Mun'im, Nakamasa Inoue, Koichi Shinoda .[Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition](https://arxiv.org/pdf/1811.04531) .[J] arXiv preprint arXiv:1811.04531.
- Samyak Parajuli, Aswin Raghavan, Sek Chai .[Generalized Ternary Connect: End-to-End Learning and Compression of Multiplication-Free Deep Neural Networks](https://arxiv.org/pdf/1811.04985) .[J] arXiv preprint arXiv:1811.04985.
- Ji Wang, Weidong Bao, Lichao Sun, Xiaomin Zhu, Bokai Cao, Philip S. Yu .[Private Model Compression via Knowledge Distillation](https://arxiv.org/pdf/1811.05072) .[J] arXiv preprint arXiv:1811.05072.
- Fabien Cardinaux, Stefan Uhlich, Kazuki Yoshiyama, Javier Alonso García, Stephen Tiedemann, Thomas Kemp, Akira Nakamura .[Iteratively Training Look-Up Tables for Network Quantization](https://arxiv.org/pdf/1811.05355) .[J] arXiv preprint arXiv:1811.05355.
- 【Distillation】[Fast Human Pose Estimation](https://arxiv.org/abs/1811.05419), Feng Zhang, Xiatian Zhu, Mao Ye, 2019
- Miguel de Prado, Maurizio Denna, Luca Benini, Nuria Pazos .[QUENN: QUantization Engine for low-power Neural Networks](https://arxiv.org/pdf/1811.05896) .[J] arXiv preprint arXiv:1811.05896.
- Hang Lu, Xin Wei, Ning Lin, Guihai Yan, Xiaowei Li .[Tetris: Re-architecting Convolutional Neural Network Computation for Machine Learning Accelerators](https://arxiv.org/pdf/1811.06841) .[J] arXiv preprint arXiv:1811.06841.
- 【Pruning】Aditya Prakash, James Storer, Dinei Florencio, Cha Zhang .[RePr: Improved Training of Convolutional Filters](https://arxiv.org/pdf/1811.07275) .[J] arXiv preprint arXiv:1811.07275.
- Georgios Tsitsikas, Evangelos E. Papalexakis .[The core consistency of a compressed tensor](https://arxiv.org/pdf/1811.07428) .[J] arXiv preprint arXiv:1811.07428.
- Yu Pan, Jing Xu, Maolin Wang, Jinmian Ye, Fei Wang, Kun Bai, Zenglin Xu .[Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition](https://arxiv.org/pdf/1811.07503) .[J] arXiv preprint arXiv:1811.07503.
- Yuxin Zhang, Huan Wang, Yang Luo, Roland Hu .[Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method](https://arxiv.org/pdf/1811.07555) .[J] arXiv preprint arXiv:1811.07555.
- Pengyuan Ren, Jianmin Li .[Factorized Distillation: Training Holistic Person Re-identification Model by Distilling an Ensemble of Partial ReID Models](https://arxiv.org/pdf/1811.08073) .[J] arXiv preprint arXiv:1811.08073.
- Travis Desell .[Accelerating the Evolution of Convolutional Neural Networks with Node-Level Mutations and Epigenetic Weight Initialization](https://arxiv.org/pdf/1811.08286) .[J] arXiv preprint arXiv:1811.08286.
- Pravendra Singh, Vinay Sameer Raja Kadi, Nikhil Verma, Vinay P. Namboodiri .[Stability Based Filter Pruning for Accelerating Deep CNNs](https://arxiv.org/pdf/1811.08321) .[J] arXiv preprint arXiv:1811.08321.
- Pravendra Singh, Manikandan R, Neeraj Matiyali, Vinay P. Namboodiri .[Multi-layer Pruning Framework for Compressing Single Shot MultiBox Detector](https://arxiv.org/pdf/1811.08342) .[J] arXiv preprint arXiv:1811.08342.
- Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu .[Structured Pruning for Efficient ConvNets via Incremental Regularization](https://arxiv.org/pdf/1811.08390) .[J] arXiv preprint arXiv:1811.08390.
- Mengdi Wang, Qing Zhang, Jun Yang, Xiaoyuan Cui, Wei Lin .[Graph-Adaptive Pruning for Efficient Inference of Convolutional Neural Networks](https://arxiv.org/pdf/1811.08589) .[J] arXiv preprint arXiv:1811.08589.
- Yifan Yang, Qijing Huang, Bichen Wu, Tianjun Zhang, Liang Ma, Giulio Gambardella, Michaela Blott, Luciano Lavagno, Kees Vissers, John Wawrzynek, Kurt Keutzer .[Synetgy: Algorithm-hardware Co-design for ConvNet Accelerators on Embedded FPGAs](https://arxiv.org/pdf/1811.08634) .[J] arXiv preprint arXiv:1811.08634.
- Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, Song Han .[HAQ: Hardware-Aware Automated Quantization](https://arxiv.org/pdf/1811.08886) .[J] arXiv preprint arXiv:1811.08886.
- 【Pruning】Carl Lemaire, Andrew Achkar, Pierre-Marc Jodoin .[Structured Pruning of Neural Networks with Budget-Aware Regularization](https://arxiv.org/pdf/1811.09332) .[J] arXiv preprint arXiv:1811.09332.
- Yukang Chen, Gaofeng Meng, Qian Zhang, Xinbang Zhang, Liangchen Song, Shiming Xiang, Chunhong Pan .[Joint Neural Architecture Search and Quantization](https://arxiv.org/pdf/1811.09426) .[J] arXiv preprint arXiv:1811.09426.
- Maxim Naumov, Utku Diril, Jongsoo Park, Benjamin Ray, Jedrzej Jablonski, Andrew Tulloch .[On Periodic Functions as Regularizers for Quantization of Neural Networks](https://arxiv.org/pdf/1811.09862) .[J] arXiv preprint arXiv:1811.09862.
- Shiming Ge, Shengwei Zhao, Chenyu Li, Jia Li .[Low-resolution Face Recognition in the Wild via Selective Knowledge Distillation](https://arxiv.org/pdf/1811.09998) .[J] arXiv preprint arXiv:1811.09998.
- Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros .[Dataset Distillation](https://arxiv.org/pdf/1811.10959) .[J] arXiv preprint arXiv:1811.10959.
- Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri .[Leveraging Filter Correlations for Deep Model Compression](https://arxiv.org/pdf/1811.10559) .[J] arXiv preprint arXiv:1811.10559.
- 【Structure】Mehta S, Rastegari M, Shapiro L, et al. [Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network](https://arxiv.org/pdf/1811.11431.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 9190-9200.
【code:[sacmehta/ESPNetv2](https://github.com/sacmehta/ESPNetv2)】
- Luna M. Zhang .[Effective, Fast, and Memory-Efficient Compressed Multi-function Convolutional Neural Networks for More Accurate Medical Image Classification](https://arxiv.org/pdf/1811.11996) .[J] arXiv preprint arXiv:1811.11996.
- 【Low rank】Hyeji Kim, Muhammad Umar Karim, Chong-Min Kyung .[A Framework for Fast and Efficient Neural Network Compression](https://arxiv.org/pdf/1811.12781) .[J] arXiv preprint arXiv:1811.12781.
【code:[Hyeji-Kim/ENC](https://github.com/Hyeji-Kim/ENC)】
- Bichen Wu, Yanghan Wang, Peizhao Zhang, Yuandong Tian, Peter Vajda, Kurt Keutzer .[Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search](https://arxiv.org/pdf/1812.00090) .[J] arXiv preprint arXiv:1812.00090.
- Chenglin Yang, Lingxi Xie, Chi Su, Alan L. Yuille .[Snapshot Distillation: Teacher-Student Optimization in One Generation](https://arxiv.org/pdf/1812.00123) .[J] arXiv preprint arXiv:1812.00123.
- 【Structure】Cai H, Zhu L, Han S. [Proxylessnas: Direct neural architecture search on target task and hardware](https://arxiv.org/pdf/1812.00332)[J]. arXiv preprint arXiv:1812.00332, 2018.
- Yuefu Zhou, Ya Zhang, Yanfeng Wang, Qi Tian .[Network Compression via Recursive Bayesian Pruning](https://arxiv.org/pdf/1812.00353) .[J] arXiv preprint arXiv:1812.00353.
- Wei-Chun Chen, Chia-Che Chang, Chien-Yu Lu, Che-Rung Lee .[Knowledge Distillation with Feature Maps for Image Classification](https://arxiv.org/pdf/1812.00660) .[J] arXiv preprint arXiv:1812.00660.
- Christian Pinto, Yiannis Gkoufas, Andrea Reale, Seetharami Seelam, Steven Eliuk .[Hoard: A Distributed Data Caching System to Accelerate Deep Learning Training on the Cloud](https://arxiv.org/pdf/1812.00669) .[J] arXiv preprint arXiv:1812.00669.
- Minghan Li, Tanli Zuo, Ruicheng Li, Martha White, Weishi Zheng .[Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling](https://arxiv.org/pdf/1812.00914) .[J] arXiv preprint arXiv:1812.00914.
- Ahmed Abdelatty, Pracheta Sahoo, Chiradeep Roy .[Structure Learning Using Forced Pruning](https://arxiv.org/pdf/1812.00975) .[J] arXiv preprint arXiv:1812.00975.
- Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg .[Pre-Defined Sparse Neural Networks with Hardware Acceleration](https://arxiv.org/pdf/1812.01164) .[J] arXiv preprint arXiv:1812.01164.
- Sascha Saralajew, Lars Holdijk, Maike Rees, Thomas Villmann .[Prototype-based Neural Network Layers: Incorporating Vector Quantization](https://arxiv.org/pdf/1812.01214) .[J] arXiv preprint arXiv:1812.01214.
- KouZi Xing .[Training for 'Unstable' CNN Accelerator: A Case Study on FPGA](https://arxiv.org/pdf/1812.01689) .[J] arXiv preprint arXiv:1812.01689.
- Haichuan Yang, Yuhao Zhu, Ji Liu .[ECC: Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model](https://arxiv.org/pdf/1812.01803) .[J] arXiv preprint arXiv:1812.01803.
- 【Distillation】Tianhong Li, Jianguo Li, Zhuang Liu, Changshui Zhang .[Knowledge Distillation from Few Samples](https://arxiv.org/pdf/1812.01839) .[J] arXiv preprint arXiv:1812.01839.
- Haipeng Jia, Xueshuang Xiang, Da Fan, Meiyu Huang, Changhao Sun, Qingliang Meng, Yang He, Chen Chen .[DropPruning for Model Compression](https://arxiv.org/pdf/1812.02035) .[J] arXiv preprint arXiv:1812.02035.
- Ruishan Liu, Nicolo Fusi, Lester Mackey .[Model Compression with Generative Adversarial Networks](https://arxiv.org/pdf/1812.02271) .[J] arXiv preprint arXiv:1812.02271.
- Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong .[DNQ: Dynamic Network Quantization](https://arxiv.org/pdf/1812.02375) .[J] arXiv preprint arXiv:1812.02375.
- Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong .[Trained Rank Pruning for Efficient Deep Neural Networks](https://arxiv.org/pdf/1812.02402) .[J] arXiv preprint arXiv:1812.02402.
- 【Distillation】[MEAL: Multi-Model Ensemble via Adversarial Learning](https://arxiv.org/abs/1812.02425), Zhiqiang Shen, Zhankui He, Xiangyang Xue, 2019
- Ravi Teja Mullapudi, Steven Chen, Keyi Zhang, Deva Ramanan, Kayvon Fatahalian .[Online Model Distillation for Efficient Video Inference](https://arxiv.org/pdf/1812.02699) .[J] arXiv preprint arXiv:1812.02699.
- Idoia Ruiz, Bogdan Raducanu, Rakesh Mehta, Jaume Amores .[Optimizing Speed/Accuracy Trade-Off for Person Re-identification via Knowledge Distillation](https://arxiv.org/pdf/1812.02937) .[J] arXiv preprint arXiv:1812.02937.
- Wei Wang, Liqiang Zhu .[Reliable Identification of Redundant Kernels for Convolutional Neural Network Compression](https://arxiv.org/pdf/1812.03608) .[J] arXiv preprint arXiv:1812.03608.
- Somak Aditya, Rudra Saha, Yezhou Yang, Chitta Baral .[Spatial Knowledge Distillation to aid Visual Reasoning](https://arxiv.org/pdf/1812.03631) .[J] arXiv preprint arXiv:1812.03631.
- Salonik Resch, S. Karen Khatamifard, Zamshed Iqbal Chowdhury, Masoud Zabihi, Zhengyang Zhao, Jian-Ping Wang, Sachin S. Sapatnekar, Ulya R. Karpuzcu .[Exploiting Processing in Non-Volatile Memory for Binary Neural Network Accelerators](https://arxiv.org/pdf/1812.03989) .[J] arXiv preprint arXiv:1812.03989.
- Georgios Georgiadis .[Accelerating Convolutional Neural Networks via Activation Map Compression](https://arxiv.org/pdf/1812.04056) .[J] arXiv preprint arXiv:1812.04056.
- Thalaiyasingam Ajanthan, Puneet K. Dokania, Richard Hartley, Philip H. S. Torr .[Proximal Mean-field for Neural Network Quantization](https://arxiv.org/pdf/1812.04353) .[J] arXiv preprint arXiv:1812.04353.
- Yuchao Li, Shaohui Lin, Baochang Zhang, Jianzhuang Liu, David Doermann, Yongjian Wu, Feiyue Huang, Rongrong Ji .[Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression](https://arxiv.org/pdf/1812.04368) .[J] arXiv preprint arXiv:1812.04368.
- Weijie Chen, Yuan Zhang, Di Xie, Shiliang Pu .[A Layer Decomposition-Recomposition Framework for Neuron Pruning towards Accurate Lightweight Networks](https://arxiv.org/pdf/1812.06611) .[J] arXiv preprint arXiv:1812.06611.
- Alexey Kruglov .[Channel-wise pruning of neural networks with tapering resource constraint](https://arxiv.org/pdf/1812.07060) .[J] arXiv preprint arXiv:1812.07060.
- Mohammad Motamedi, Felix Portillo, Daniel Fong, Soheil Ghiasi .[Distill-Net: Application-Specific Distillation of Deep Convolutional Neural Networks for Resource-Constrained IoT Platforms](https://arxiv.org/pdf/1812.07390) .[J] arXiv preprint arXiv:1812.07390.
- Mohammad Hossein Samavatian, Anys Bacha, Li Zhou, Radu Teodorescu .[RNNFast: An Accelerator for Recurrent Neural Networks Using Domain Wall Memory](https://arxiv.org/pdf/1812.07609) .[J] arXiv preprint arXiv:1812.07609.
- Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev .[Fast Adjustable Threshold For Uniform Neural Network Quantization](https://arxiv.org/pdf/1812.07872) .[J] arXiv preprint arXiv:1812.07872.
- Xin Li, Shuai Zhang, Bolan Jiang, Yingyong Qi, Mooi Choo Chuah, Ning Bi .[DAC: Data-free Automatic Acceleration of Convolutional Networks](https://arxiv.org/pdf/1812.08374) .[J] arXiv preprint arXiv:1812.08374.
- 【Structured】【SlimmableNet】Yu J, Yang L, Xu N, et al. [Slimmable neural networks](https://arxiv.org/pdf/1812.08928)[J]. arXiv preprint arXiv:1812.08928, 2018.
【code:[JiahuiYu/slimmable_networks](https://github.com/JiahuiYu/slimmable_networks)】
- Eunhyeok Park, Dongyoung Kim, Sungjoo Yoo, Peter Vajda .[Precision Highway for Ultra Low-Precision Quantization](https://arxiv.org/pdf/1812.09818) .[J] arXiv preprint arXiv:1812.09818.
- Tailin Liang, Lei Wang, Shaobo Shi, John Glossner .[Dynamic Runtime Feature Map Pruning](https://arxiv.org/pdf/1812.09922) .[J] arXiv preprint arXiv:1812.09922.
- Darabi S, Belbahri M, Courbariaux M, et al. [BNN+: Improved binary network training](https://arxiv.org/pdf/1812.11800)[J]. arXiv preprint arXiv:1812.11800, 2018. (see the binarization sketch after this year's list)
- Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran .[Studying the Plasticity in Deep Convolutional Neural Networks using Random Pruning](https://arxiv.org/pdf/1812.10240) .[J] arXiv preprint arXiv:1812.10240.
- Xuan Liu, Xiaoguang Wang, Stan Matwin .[Improving the Interpretability of Deep Neural Networks with Knowledge Distillation](https://arxiv.org/pdf/1812.10924) .[J] arXiv preprint arXiv:1812.10924.
- Ghouthi Boukli Hacene (ELEC), Vincent Gripon, Matthieu Arzel (ELEC), Nicolas Farrugia (ELEC), Yoshua Bengio (DIRO) .[Quantized Guided Pruning for Efficient Hardware Implementations of Convolutional Neural Networks](https://arxiv.org/pdf/1812.11337) .[J] arXiv preprint arXiv:1812.11337.
- Charbel Sakr, Naresh Shanbhag .[Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm](https://arxiv.org/pdf/1812.11732) .[J] arXiv preprint arXiv:1812.11732.
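
A few of the entries above point to short sketches, collected here. **Sketch: soft filter pruning (arXiv:1808.06866).** A minimal PyTorch reconstruction of the soft-pruning step, assuming the usual reading of the paper: after every training epoch, the filters with the smallest L2 norms are zeroed but stay trainable, so they may recover later. Not the authors' code; `prune_rate` is an illustrative parameter.

```python
import torch

def soft_prune_conv(conv: torch.nn.Conv2d, prune_rate: float) -> None:
    """Zero the filters with the smallest L2 norms, but leave them trainable
    so later gradient updates can revive them ("soft" pruning)."""
    with torch.no_grad():
        norms = conv.weight.flatten(1).norm(p=2, dim=1)  # one L2 norm per filter
        n_prune = int(prune_rate * norms.numel())
        if n_prune > 0:
            weakest = norms.argsort()[:n_prune]          # indices of weakest filters
            conv.weight[weakest] = 0.0                   # zeroed, not removed
```

Calling this once per epoch after the optimizer steps reproduces the basic schedule; hard removal of the zeroed filters happens only after training converges.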
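
**Sketch: SNIP connection sensitivity (arXiv:1810.02340).** A sketch of the one-shot saliency criterion, assuming a classification loss: the sensitivity of each connection is |w · ∂L/∂w| on a single minibatch at initialization, and only the global top-k connections are kept. Illustrative only; the returned masks would multiply the weights for the rest of training.

```python
import torch
import torch.nn.functional as F

def snip_masks(model, inputs, targets, keep_ratio=0.1):
    """Per-parameter binary masks from one-shot connection sensitivity."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = F.cross_entropy(model(inputs), targets)
    grads = torch.autograd.grad(loss, params)
    # saliency of a connection = |w * dL/dw|, pooled globally across layers
    saliency = torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
    k = max(1, int(keep_ratio * saliency.numel()))
    threshold = saliency.topk(k).values.min()
    return [((g * p).abs() >= threshold).float() for g, p in zip(grads, params)]
```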
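
**Sketch: clipped post-training quantization (arXiv:1810.05723).** ACIQ derives its clipping threshold analytically from a Laplace/Gaussian fit of the tensor; the NumPy sketch below conveys the same idea numerically, picking the clip that minimizes quantization MSE over a grid. Illustrative only.

```python
import numpy as np

def quantize(x, clip, n_bits=4):
    """Symmetric uniform quantization of x onto the integer grid of [-clip, clip]."""
    scale = clip / (2 ** (n_bits - 1) - 1)
    q = np.clip(np.round(x / scale), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return q * scale

def best_clip(x, n_bits=4, n_grid=100):
    """Grid-search stand-in for ACIQ's analytical clipping value."""
    candidates = np.linspace(0.1, 1.0, n_grid) * np.abs(x).max()
    mse = [np.mean((x - quantize(x, c, n_bits)) ** 2) for c in candidates]
    return candidates[int(np.argmin(mse))]
```

Clipping trades a little distortion on rare outliers for much finer resolution on the bulk of the distribution, which is why it matters most at 4 bits and below.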
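
**Sketch: filter pruning via geometric median (arXiv:1811.00250).** The criterion here is redundancy rather than magnitude: filters closest to the geometric median of all filters in a layer are the most replaceable. A common approximation, used below, scores each filter by its total distance to the others. PyTorch, illustrative only.

```python
import torch

def fpgm_prune_indices(conv: torch.nn.Conv2d, n_prune: int):
    """Indices of the filters nearest the (approximate) geometric median."""
    w = conv.weight.detach().flatten(1)   # (out_channels, in*kh*kw)
    dist = torch.cdist(w, w)              # pairwise Euclidean distances
    score = dist.sum(dim=1)               # small total distance ~ near the median
    return score.argsort()[:n_prune]
```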
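
**Sketch: low-rank factorization of an embedding matrix (arXiv:1811.00641).** The generic form of the idea: replace a large (vocab × dim) matrix with two thin factors obtained by truncated SVD, the optimal rank-r approximation in the least-squares sense. NumPy; the paper's online training procedure is not reproduced here.

```python
import numpy as np

def factorize_embedding(E: np.ndarray, rank: int):
    """Return A (vocab, rank) and B (rank, dim) with A @ B ~= E."""
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank]
    return A, B
```

The parameter count drops from `vocab * dim` to `rank * (vocab + dim)`, a large saving whenever `rank` is much smaller than both dimensions.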
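
**Sketch: weight binarization with a straight-through estimator.** BNN+ improves on baseline binary-network training (modified activation and regularization); the baseline mechanism it starts from is roughly the one below: `sign()` in the forward pass, a clipped identity in the backward pass, and an XNOR-style scale. PyTorch, illustrative only.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()  # pass gradient only inside [-1, 1]

def binary_weights(w):
    """Binarized weights, rescaled by the mean absolute value."""
    return BinarizeSTE.apply(w) * w.abs().mean()
```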

### 2019
- 【Structured】【EfficientNet】Tan M, Le Q V. [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/pdf/1905.11946)[J]. arXiv preprint arXiv:1905.11946, 2019. (see the compound-scaling sketch at the end of this list)
【code:[tensorflow/tpu](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)】
- 【Low rank】Chen T, Lin J, Lin T, et al. [Adaptive mixture of low-rank factorizations for compact neural modeling](https://openreview.net/pdf?id=r1xFE3Rqt7)[J]. 2018.
【code:[zuenko/ALRF](https://github.com/zuenko/ALRF)】
- 【Pruning】Mehta D, Kim K I, Theobalt C. [On implicit filter level sparsity in convolutional neural networks](https://arxiv.org/abs/1811.12495)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 520-528.
【code:[mehtadushy/SelecSLS-Pytorch](https://github.com/mehtadushy/SelecSLS-Pytorch)】
- 【Pruning】Peng H, Wu J, Chen S, et al. [Collaborative Channel Pruning for Deep Networks](http://proceedings.mlr.press/v97/peng19c/peng19c.pdf)[C]//International Conference on Machine Learning. 2019: 5113-5122.
- 【Pruning】Zhao C, Ni B, Zhang J, et al. [Variational Convolutional Neural Network Pruning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhao_Variational_Convolutional_Neural_Network_Pruning_CVPR_2019_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2780-2789.
- 【Pruning】Li J, Qi Q, Wang J, et al. [OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks](https://arxiv.org/abs/1905.11664)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 7046-7055.
- Alizadeh M, Fernández-Marqués J, Lane N D, et al. [An Empirical study of Binary Neural Networks' Optimisation](https://openreview.net/pdf?id=rJfUCoR5KX)[J]. 2018.
- Wang Z, Lu J, Tao C, et al. [Learning Channel-Wise Interactions for Binary Convolutional Neural Networks](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Learning_Channel-Wise_Interactions_for_Binary_Convolutional_Neural_Networks_CVPR_2019_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 568-577.
- Xu Y, Dong X, Li Y, et al. [A Main/Subsidiary Network Framework for Simplifying Binary Neural Networks](http://openaccess.thecvf.com/content_CVPR_2019/papers/Xu_A_MainSubsidiary_Network_Framework_for_Simplifying_Binary_Neural_Networks_CVPR_2019_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 7154-7162.
- Ding R, Chin T W, Liu Z, et al. [Regularizing activation distribution for training binarized deep networks](http://openaccess.thecvf.com/content_CVPR_2019/papers/Ding_Regularizing_Activation_Distribution_for_Training_Binarized_Deep_Networks_CVPR_2019_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 11408-11417.
【code:[ruizhoud/DistributionLoss](https://github.com/ruizhoud/DistributionLoss)】
- Yang J, et al. [Quantization Networks](http://openaccess.thecvf.com/content_CVPR_2019/html/Yang_Quantization_Networks_CVPR_2019_paper.html)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
- Liu C, Ding W, Xia X, et al. [Circulant Binary Convolutional Networks: Enhancing the Performance of 1-bit DCNNs with Circulant Back Propagation](http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Circulant_Binary_Convolutional_Networks_Enhancing_the_Performance_of_1-Bit_DCNNs_CVPR_2019_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 2691-2699.
- Zhuang B, Shen C, Tan M, et al. [Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation](http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhuang_Structured_Binary_Neural_Networks_for_Accurate_Image_Classification_and_Semantic_CVPR_2019_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 413-422.
- [Accurate and Efficient 2-bit Quantized Neural Networks](https://www.sysml.cc/doc/2019/168.pdf), SysML 2019.
- Jung S, Son C, Lee S, et al. [Learning to quantize deep networks by optimizing quantization intervals with task loss](http://openaccess.thecvf.com/content_CVPR_2019/papers/Jung_Learning_to_Quantize_Deep_Networks_by_Optimizing_Quantization_Intervals_With_CVPR_2019_paper.pdf)[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 4350-4359.
- Ahmad Shawahna, Sadiq M. Sait, Aiman El-Maleh .[FPGA-based Accelerators of Deep Learning Networks for Learning and Classification: A Review](https://arxiv.org/pdf/1901.00121) .[J] arXiv preprint arXiv:1901.00121.
- Shitao Tang, Litong Feng, Wenqi Shao, Zhanghui Kuang, Wei Zhang, Yimin Chen .[Learning Efficient Detector with Semi-supervised Adaptive Distillation](https://arxiv.org/pdf/1901.00366) .[J] arXiv preprint arXiv:1901.00366.
- Tong Geng, Tianqi Wang, Ang Li, Xi Jin, Martin Herbordt .[A Scalable Framework for Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters with Weight and Workload Balancing](https://arxiv.org/pdf/1901.01007) .[J] arXiv preprint arXiv:1901.01007.
- Zehua Cheng, Zhenghua Xu .[Bandwidth Reduction using Importance Weighted Pruning on Ring AllReduce](https://arxiv.org/pdf/1901.01544) .[J] arXiv preprint arXiv:1901.01544.
- Suraj Mishra, Peixian Liang, Adam Czajka, Danny Z. Chen, X. Sharon Hu .[CC-Net: Image Complexity Guided Network Compression for Biomedical Image Segmentation](https://arxiv.org/pdf/1901.01578) .[J] arXiv preprint arXiv:1901.01578.
- Xue Geng, Jie Fu, Bin Zhao, Jie Lin, Mohamed M. Sabry Aly, Christopher Pal, Vijay Chandrasekhar .[Dataflow-based Joint Quantization of Weights and Activations for Deep Neural Networks](https://arxiv.org/pdf/1901.02064) .[J] arXiv preprint arXiv:1901.02064.
- Jiecao Yu, Jongsoo Park, Maxim Naumov .[Spatial-Winograd Pruning Enabling Sparse Winograd Convolution](https://arxiv.org/pdf/1901.02132) .[J] arXiv preprint arXiv:1901.02132.
- Hyun-Joo Jung, Jaedeok Kim, Yoonsuck Choe .[How Compact?: Assessing Compactness of Representations through Layer-Wise Pruning](https://arxiv.org/pdf/1901.02757) .[J] arXiv preprint arXiv:1901.02757.
- Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar .[CodeX: Bit-Flexible Encoding for Streaming-based FPGA Acceleration of DNNs](https://arxiv.org/pdf/1901.05582) .[J] arXiv preprint arXiv:1901.05582.
【code:[MohammadSamragh/CodeX](https://github.com/MohammadSamragh/CodeX)】
- Jiemin Fang, Yukang Chen, Xinbang Zhang, Qian Zhang, Chang Huang, Gaofeng Meng, Wenyu Liu, Xinggang Wang .[EAT-NAS: Elastic Architecture Transfer for Accelerating Large-scale Neural Architecture Search](https://arxiv.org/pdf/1901.05884) .[J] arXiv preprint arXiv:1901.05884.
【code:[JaminFong/EAT-NAS](https://github.com/JaminFong/EAT-NAS)】
- Saeed Karimi-Bidhendi, Jun Guo, Hamid Jafarkhani .[Using Quantization to Deploy Heterogeneous Nodes in Two-Tier Wireless Sensor Networks](https://arxiv.org/pdf/1901.06742) .[J] arXiv preprint arXiv:1901.06742.
- Jinrong Guo, Wantao Liu, Wang Wang, Qu Lu, Songlin Hu, Jizhong Han, Ruixuan Li .[AccUDNN: A GPU Memory Efficient Accelerator for Training Ultra-deep Deep Neural Networks](https://arxiv.org/pdf/1901.06773) .[J] arXiv preprint arXiv:1901.06773.
- Zhiwen Zuo, Lei Zhao, Liwen Zuo, Feng Jiang, Wei Xing, Dongming Lu .[On Compression of Unsupervised Neural Nets by Pruning Weak Connections](https://arxiv.org/pdf/1901.07066) .[J] arXiv preprint arXiv:1901.07066.
- Shaohui Lin, Rongrong Ji, Yuchao Li, Cheng Deng, Xuelong Li .[Towards Compact ConvNets via Structure-Sparsity Regularized Filter Pruning](https://arxiv.org/pdf/1901.07827) .[J] arXiv preprint arXiv:1901.07827.
【code:[ShaohuiLin/SSR](https://github.com/ShaohuiLin/SSR)】
- Sam Green, Craig M. Vineyard, Çetin Kaya Koç .[Distillation Strategies for Proximal Policy Optimization](https://arxiv.org/pdf/1901.08128) .[J] arXiv preprint arXiv:1901.08128.
- Li Yue, Zhao Weibin, Shang Lin .[Really should we pruning after model be totally trained? Pruning based on a small amount of training](https://arxiv.org/pdf/1901.08455) .[J] arXiv preprint arXiv:1901.08455.
- Sian Jin, Sheng Di, Xin Liang, Jiannan Tian, Dingwen Tao, Franck Cappello .[DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression](https://arxiv.org/pdf/1901.09124) .[J] arXiv preprint arXiv:1901.09124.
- Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Mattan Erez, Sujay Sanghavi .[PruneTrain: Gradual Structured Pruning from Scratch for Faster Neural Network Training](https://arxiv.org/pdf/1901.09290) .[J] arXiv preprint arXiv:1901.09290.
- Yuheng Bu, Weihao Gao, Shaofeng Zou, Venugopal V. Veeravalli .[Information-Theoretic Understanding of Population Risk Improvement with Model Compression](https://arxiv.org/pdf/1901.09421) .[J] arXiv preprint arXiv:1901.09421.
【code:[aaron-xichen/pytorch-playground](https://github.com/aaron-xichen/pytorch-playground)】
- Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang .[Improving Neural Network Quantization without Retraining using Outlier Channel Splitting](https://arxiv.org/pdf/1901.09504) .[J] arXiv preprint arXiv:1901.09504. (see the channel-splitting sketch at the end of this list)
【code:[cornell-zhang/dnn-quant-ocs](https://github.com/cornell-zhang/dnn-quant-ocs)】
- Valentin Khrulkov, Oleksii Hrinchuk, Leyla Mirvakhabova, Ivan Oseledets .[Tensorized Embedding Layers for Efficient Model Compression](https://arxiv.org/pdf/1901.10787) .[J] arXiv preprint arXiv:1901.10787.
- Sina Shahhosseini, Ahmad Albaqsami, Masoomeh Jasemi, Shaahin Hessabi, Nader Bagherzadeh .[Partition Pruning: Parallelization-Aware Pruning for Deep Neural Networks](https://arxiv.org/pdf/1901.11391) .[J] arXiv preprint arXiv:1901.11391.
- Bin Liu, Yue Cao, Mingsheng Long, Jianmin Wang, Jingdong Wang .[Deep Triplet Quantization](https://arxiv.org/pdf/1902.00153) .[J] arXiv preprint arXiv:1902.00153.
- Angeline Aguinaldo, Ping-Yeh Chiang, Alex Gain, Ameya Patil, Kolten Pearson, Soheil Feizi .[Compressing GANs using Knowledge Distillation](https://arxiv.org/pdf/1902.00159) .[J] arXiv preprint arXiv:1902.00159.
- Shengcao Cao, Xiaofang Wang, Kris M. Kitani .[Learnable Embedding Space for Efficient Neural Architecture Compression](https://arxiv.org/pdf/1902.00383) .[J] arXiv preprint arXiv:1902.00383.
- Jie Zhang, Xiaolong Wang, Dawei Li, Shalini Ghosh, Abhishek Kolagunda, Yalin Wang .[MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression](https://arxiv.org/pdf/1902.00918) .[J] arXiv preprint arXiv:1902.00918.
- Alberto Marchisio, Muhammad Shafique .[CapStore: Energy-Efficient Design and Management of the On-Chip Memory for CapsuleNet Inference Accelerators](https://arxiv.org/pdf/1902.01151) .[J] arXiv preprint arXiv:1902.01151.
- Eldad Meller, Alexander Finkelstein, Uri Almog, Mark Grobman .[Same, Same But Different - Recovering Neural Network Quantization Error Through Weight Factorization](https://arxiv.org/pdf/1902.01917) .[J] arXiv preprint arXiv:1902.01917.
- Wojciech Marian Czarnecki, Razvan Pascanu, Simon Osindero, Siddhant M. Jayakumar, Grzegorz Swirszcz, Max Jaderberg .[Distilling Policy Distillation](https://arxiv.org/pdf/1902.02186) .[J] arXiv preprint arXiv:1902.02186.
- Artem M. Grachev, Dmitry I. Ignatov, Andrey V. Savchenko .[Compression of Recurrent Neural Networks for Efficient Language Modeling](https://arxiv.org/pdf/1902.02380) .[J] arXiv preprint arXiv:1902.02380.
- Panagiotis G. Mousouliotis, Loukas P. Petrou .[Software-Defined FPGA Accelerator Design for Mobile Deep Learning Applications](https://arxiv.org/pdf/1902.03192) .[J] arXiv preprint arXiv:1902.03192.
- Yingzhen Yang, Nebojsa Jojic, Jun Huan .[FSNet: Compression of Deep Convolutional Neural Networks by Filter Summary](https://arxiv.org/pdf/1902.03264) .[J] arXiv preprint arXiv:1902.03264.
- Anubhav Ashok .[Architecture Compression](https://arxiv.org/pdf/1902.03326) .[J] arXiv preprint arXiv:1902.03326.
- 【Distillation】Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Hassan Ghasemzadeh .[Improved Knowledge Distillation via Teacher Assistant: Bridging the Gap Between Student and Teacher](https://arxiv.org/pdf/1902.03393) .[J] arXiv preprint arXiv:1902.03393. (see the distillation-loss sketch at the end of this list)
【code:[imirzadeh/Teacher-Assistant-Knowledge-Distillation](https://github.com/imirzadeh/Teacher-Assistant-Knowledge-Distillation)】
- Shupeng Gui (1), Haotao Wang (2), Chen Yu (1), Haichuan Yang (1), Zhangyang Wang (2), Ji Liu (1) ((1) University of Rochester, (2) Texas A&M University) .[Adversarially Trained Model Compression: When Robustness Meets Efficiency](https://arxiv.org/pdf/1902.03538) .[J] arXiv preprint arXiv:1902.03538.
- Dae-Woong Jeong, Jaehun Kim, Youngseok Kim, Tae-Ho Kim, Myungsu Chae .[Effective Network Compression Using Simulation-Guided Iterative Pruning](https://arxiv.org/pdf/1902.04224) .[J] arXiv preprint arXiv:1902.04224.
- Sijia Chen, Bin Song, Xiaojiang Du, Nadra Guizani .[Structured Bayesian Compression for Deep models in mobile enabled devices for connected healthcare](https://arxiv.org/pdf/1902.05429) .[J] arXiv preprint arXiv:1902.05429.
- Qian Lou, Lantao Liu, Minje Kim, Lei Jiang .[AutoQB: AutoML for Network Quantization and Binarization on Mobile Devices](https://arxiv.org/pdf/1902.05690) .[J] arXiv preprint arXiv:1902.05690.
- Michael M. Saint-Antoine, Abhyudai Singh .[Evaluating Pruning Methods in Gene Network Inference](https://arxiv.org/pdf/1902.06028) .[J] arXiv preprint arXiv:1902.06028.
- Chengcheng Li, Zi Wang, Xiangyang Wang, Hairong Qi .[Single-shot Channel Pruning Based on Alternating Direction Method of Multipliers](https://arxiv.org/pdf/1902.06382) .[J] arXiv preprint arXiv:1902.06382.
- Zi Wang, Chengcheng Li, Dali Wang, Xiangyang Wang, Hairong Qi .[Speeding up convolutional networks pruning with coarse ranking](https://arxiv.org/pdf/1902.06385) .[J] arXiv preprint arXiv:1902.06385.
- Yoni Choukroun, Eli Kravchik, Pavel Kisilev .[Low-bit Quantization of Neural Networks for Efficient Inference](https://arxiv.org/pdf/1902.06822) .[J] arXiv preprint arXiv:1902.06822.
- Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha .[Learned Step Size Quantization](https://arxiv.org/pdf/1902.08153) .[J] arXiv preprint arXiv:1902.08153. (see the step-size sketch at the end of this list)
- Hojjat Salehinejad, Shahrokh Valaee .[Ising-Dropout: A Regularization Method for Training and Compression of Deep Neural Networks](https://arxiv.org/pdf/1902.08673) .[J] arXiv preprint arXiv:1902.08673.
- Ivan Chelombiev, Conor Houghton, Cian O'Donnell .[Adaptive Estimators Show Information Compression in Deep Neural Networks](https://arxiv.org/pdf/1902.09037) .[J] arXiv preprint arXiv:1902.09037.
- Yiming Hu, Siyang Sun, Jianquan Li, Jiagang Zhu, Xingang Wang, Qingyi Gu .[Multi-loss-aware Channel Pruning of Deep Networks](https://arxiv.org/pdf/1902.10364) .[J] arXiv preprint arXiv:1902.10364.
- Yiming Hu, Jianquan Li, Xianlei Long, Shenhua Hu, Jiagang Zhu, Xingang Wang, Qingyi Gu .[Cluster Regularized Quantization for Deep Networks Compression](https://arxiv.org/pdf/1902.10370) .[J] arXiv preprint arXiv:1902.10370.
- Mohammad Farhadi, Yezhou Yang .[TKD: Temporal Knowledge Distillation for Active Perception](https://arxiv.org/pdf/1903.01522) .[J] arXiv preprint arXiv:1903.01522.
- Xiaowei Xu .[On the Quantization of Cellular Neural Networks for Cyber-Physical Systems](https://arxiv.org/pdf/1903.02048) .[J] arXiv preprint arXiv:1903.02048.
- Jiasong Wu, Hongshan Ren, Youyong Kong, Chunfeng Yang, Lotfi Senhadji, Huazhong Shu .[Compressing complex convolutional neural network based on an improved deep compression algorithm](https://arxiv.org/pdf/1903.02358) .[J] arXiv preprint arXiv:1903.02358.
- Yiren Zhao, Xitong Gao, Daniel Bates, Robert Mullins, Cheng-Zhong Xu .[Efficient and Effective Quantization for Sparse DNNs](https://arxiv.org/pdf/1903.03046) .[J] arXiv preprint arXiv:1903.03046.
- Weiran Wang .[Everything old is new again: A multi-view learning approach to learning using privileged information and distillation](https://arxiv.org/pdf/1903.03694) .[J] arXiv preprint arXiv:1903.03694.
- 【Pruning】Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng .[Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search](https://arxiv.org/pdf/1903.03777) .[J] arXiv preprint arXiv:1903.03777.
【code:[lixincn2015/Partial-Order-Pruning](https://github.com/lixincn2015/Partial-Order-Pruning)】
- 【Distillation】Yifan Liu, Ke Chen, Chris Liu, Zengchang Qin, Zhenbo Luo, Jingdong Wang .[Structured Knowledge Distillation for Semantic Segmentation](https://arxiv.org/pdf/1903.04197) .[J] arXiv preprint arXiv:1903.04197.
【code:[irfanICMLL/structure_knowledge_distillation](https://github.com/irfanICMLL/structure_knowledge_distillation)】
- Siavash Golkar, Michael Kagan, Kyunghyun Cho .[Continual Learning via Neural Pruning](https://arxiv.org/pdf/1903.04476) .[J] arXiv preprint arXiv:1903.04476.
- 【Distillation】[Knowledge Adaptation for Efficient Semantic Segmentation](https://arxiv.org/abs/1903.04688), Tong He, Chunhua Shen, Zhi Tian, Dong Gong, Changming Sun, Youliang Yan, 2019
- Breton Minnehan, Andreas Savakis .[Cascaded Projection: End-to-End Network Compression and Acceleration](https://arxiv.org/pdf/1903.04988) .[J] arXiv preprint arXiv:1903.04988.
- 【Structured】【FE-Net】Chen W, Xie D, Zhang Y, et al. [All You Need is a Few Shifts: Designing Efficient Convolutional Neural Networks for Image Classification](https://arxiv.org/pdf/1903.05285)[J]. arXiv preprint arXiv:1903.05285, 2019.
- Chen Feng, Tao Sheng, Zhiyu Liang, Shaojie Zhuo, Xiaopeng Zhang, Liang Shen, Matthew Ardi, Alexander C. Berg, Yiran Chen, Bo Chen, Kent Gauen, Yung-Hsiang Lu .[Low Power Inference for On-Device Visual Recognition with a Quantization-Friendly Solution](https://arxiv.org/pdf/1903.06791) .[J] arXiv preprint arXiv:1903.06791.
- 【Pruning】Shaohui Lin, Rongrong Ji, Chenqian Yan, Baochang Zhang, Liujuan Cao, Qixiang Ye, Feiyue Huang, David Doermann .[Towards Optimal Structured CNN Pruning via Generative Adversarial Learning](https://arxiv.org/pdf/1903.09291) .[J] arXiv preprint arXiv:1903.09291.
【code:[ShaohuiLin/GAL](https://github.com/ShaohuiLin/GAL)】
- Shaokai Ye, Xiaoyu Feng, Tianyun Zhang, Xiaolong Ma, Sheng Lin, Zhengang Li, Kaidi Xu, Wujie Wen, Sijia Liu, Jian Tang, Makan Fardad, Xue Lin, Yongpan Liu, Yanzhi Wang .[Progressive DNN Compression: A Key to Achieve Ultra-High Weight Pruning and Quantization Rates using ADMM](https://arxiv.org/pdf/1903.09769) .[J] arXiv preprint arXiv:1903.09769.
- 【Low rank】Julia Gusak, Maksym Kholyavchenko, Evgeny Ponomarev, Larisa Markeeva, Ivan Oseledets, Andrzej Cichocki .[One time is not enough: iterative tensor decomposition for neural network compression](https://arxiv.org/pdf/1903.09973) .[J] arXiv preprint arXiv:1903.09973.
【code:[juliagusak/musco](https://github.com/juliagusak/musco)】
- Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Tim Kwang-Ting Cheng, Jian Sun .[MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning](https://arxiv.org/pdf/1903.10258) .[J] arXiv preprint arXiv:1903.10258.
【code:[liuzechun/MetaPruning](https://github.com/liuzechun/MetaPruning)】
- Abhishek Murthy, Himel Das, Md Ariful Islam .[Robustness of Neural Networks to Parameter Quantization](https://arxiv.org/pdf/1903.10672) .[J] arXiv preprint arXiv:1903.10672.
- Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin .[Distilling Task-Specific Knowledge from BERT into Simple Neural Networks](https://arxiv.org/pdf/1903.12136) .[J] arXiv preprint arXiv:1903.12136.
【code:[goo.gl/Frmwqe](https://goo.gl/Frmwqe)】
- Shaokai Ye, Kaidi Xu, Sijia Liu, Hao Cheng, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin .[Second Rethinking of Network Pruning in the Adversarial Setting](https://arxiv.org/pdf/1903.12561) .[J] arXiv preprint arXiv:1903.12561.
- Xijun Wang, Meina Kan, Shiguang Shan, Xilin Chen .[Fully Learnable Group Convolution for Acceleration of Deep Neural Networks](https://arxiv.org/pdf/1904.00346) .[J] arXiv preprint arXiv:1904.00346.
- Peng Zhou, Long Mai, Jianming Zhang, Ning Xu, Zuxuan Wu, Larry S. Davis .[M2KD: Multi-model and Multi-level Knowledge Distillation for Incremental Learning](https://arxiv.org/pdf/1904.01769) .[J] arXiv preprint arXiv:1904.01769.
- Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yichao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang .[Correlation Congruence for Knowledge Distillation](https://arxiv.org/pdf/1904.01802) .[J] arXiv preprint arXiv:1904.01802.
- 【Distillation】Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, Jin Young Choi .[A Comprehensive Overhaul of Feature Distillation](https://arxiv.org/pdf/1904.01866) .[J] arXiv preprint arXiv:1904.01866.
【code:[byeongho-heo/overhaul](https://sites.google.com/view/byeongho-heo/overhaul)】
- David Hartmann, Michael Wand .[Progressive Stochastic Binarization of Deep Networks](https://arxiv.org/pdf/1904.02205) .[J] arXiv preprint arXiv:1904.02205.
【code:[qubvel/classification_models](https://github.com/qubvel/classification_models)】
- Yotam Gil, Yoav Chai, Or Gorodissky, Jonathan Berant .[White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks](https://arxiv.org/pdf/1904.02405) .[J] arXiv preprint arXiv:1904.02405.
- Chaohui Yu, Jindong Wang, Yiqiang Chen, Zijing Wu .[Accelerating Deep Unsupervised Domain Adaptation with Transfer Channel Pruning](https://arxiv.org/pdf/1904.02654) .[J] arXiv preprint arXiv:1904.02654.
- Miao Liu, Xin Chen, Yun Zhang, Yin Li, James M. Rehg .[Paying More Attention to Motion: Attention Distillation for Learning Video Representations](https://arxiv.org/pdf/1904.03249) .[J] arXiv preprint arXiv:1904.03249.
- Chih-Yao Chiu, Hwann-Tzong Chen, Tyng-Luh Liu .[C2S2: Cost-aware Channel Sparse Selection for Progressive Network Pruning](https://arxiv.org/pdf/1904.03508) .[J] arXiv preprint arXiv:1904.03508.
- Hiroki Tomoe, Tanaka Kanji .[Long-Term Vehicle Localization by Recursive Knowledge Distillation](https://arxiv.org/pdf/1904.03551) .[J] arXiv preprint arXiv:1904.03551.
- 【Pruning】Xiaohan Ding, Guiguang Ding, Yuchen Guo, Jungong Han .[Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure](https://arxiv.org/pdf/1904.03837) .[J] arXiv preprint arXiv:1904.03837.
【code:[ShawnDing1994/Centripetal-SGD](https://github.com/ShawnDing1994/Centripetal-SGD)】
- Yang He, Ping Liu, Linchao Zhu, Yi Yang .[Meta Filter Pruning to Accelerate Deep Convolutional Neural Networks](https://arxiv.org/pdf/1904.03961) .[J] arXiv preprint arXiv:1904.03961.
- Asaf Noy, Niv Nayman, Tal Ridnik, Nadav Zamir, Sivan Doveh, Itamar Friedman, Raja Giryes, Lihi Zelnik-Manor .[ASAP: Architecture Search, Anneal and Prune](https://arxiv.org/pdf/1904.04123) .[J] arXiv preprint arXiv:1904.04123.
- Yangyang Shi, Mei-Yuh Hwang, Xin Lei, Haoyu Sheng .[Knowledge Distillation For Recurrent Neural Network Language Modeling With Trust Regularization](https://arxiv.org/pdf/1904.04163) .[J] arXiv preprint arXiv:1904.04163.
- Rod Burns, John Lawson, Duncan McBain, Daniel Soutar .[Accelerated Neural Networks on OpenCL Devices Using SYCL-DNN](https://arxiv.org/pdf/1904.04174) .[J] arXiv preprint arXiv:1904.04174.
- Kui Fu, Jia Li, Yafei Song, Yu Zhang, Shiming Ge, Yonghong Tian .[Ultrafast Video Attention Prediction with Coupled Knowledge Distillation](https://arxiv.org/pdf/1904.04449) .[J] arXiv preprint arXiv:1904.04449.
- Vinh Tran, Yang Wang, Minh Hoai .[Back to the Future: Knowledge Distillation for Human Action Anticipation](https://arxiv.org/pdf/1904.04868) .[J] arXiv preprint arXiv:1904.04868.
- Jia Li, Kui Fu, Shengwei Zhao, Shiming Ge .[Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency](https://arxiv.org/pdf/1904.04992) .[J] arXiv preprint arXiv:1904.04992.
- 【Distillation】Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho .[Relational Knowledge Distillation](https://arxiv.org/pdf/1904.05068) .[J] arXiv preprint arXiv:1904.05068. (see the relational-loss sketch at the end of this list)
- Shu Changyong, Li Peng, Xie Yuan, Qu Yanyun, Dai Longquan, Ma Lizhuang .[Knowledge Squeezed Adversarial Network Compression](https://arxiv.org/pdf/1904.05100) .[J] arXiv preprint arXiv:1904.05100.
- Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D. Lawrence, Zhenwen Dai .[Variational Information Distillation for Knowledge Transfer](https://arxiv.org/pdf/1904.05835) .[J] arXiv preprint arXiv:1904.05835.
- Jon Hoffman .[Cramnet: Layer-wise Deep Neural Network Compression with Knowledge Transfer from a Teacher Network](https://arxiv.org/pdf/1904.05982) .[J] arXiv preprint arXiv:1904.05982.
- Bulat A, Kossaifi J, Tzimiropoulos G, et al. [Matrix and tensor decompositions for training binary neural networks](https://arxiv.org/pdf/1904.07852)[J]. arXiv preprint arXiv:1904.07852, 2019.
- Arman Roohi, Shaahin Angizi, Deliang Fan, Ronald F DeMara .[Processing-In-Memory Acceleration of Convolutional Neural Networks for Energy-Efficiency, and Power-Intermittency Resilience](https://arxiv.org/pdf/1904.07864) .[J] arXiv preprint arXiv:1904.07864.
- Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, Chengqing Zong .[End-to-End Speech Translation with Knowledge Distillation](https://arxiv.org/pdf/1904.08075) .[J] arXiv preprint arXiv:1904.08075.
- Ji Lin, Chuang Gan, Song Han .[Defensive Quantization: When Efficiency Meets Robustness](https://arxiv.org/pdf/1904.08444) .[J] arXiv preprint arXiv:1904.08444.
- Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak .[Feature Fusion for Online Mutual Knowledge Distillation](https://arxiv.org/pdf/1904.09058) .[J] arXiv preprint arXiv:1904.09058.
- Xiao Jin, Baoyun Peng, Yichao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Junjie Yan, Xiaolin Hu .[Knowledge Distillation via Route Constrained Optimization](https://arxiv.org/pdf/1904.09149) .[J] arXiv preprint arXiv:1904.09149.
- Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao .[Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding](https://arxiv.org/pdf/1904.09482) .[J] arXiv preprint arXiv:1904.09482.
- Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, Daxin Jiang .[Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System](https://arxiv.org/pdf/1904.09636) .[J] arXiv preprint arXiv:1904.09636.
- Yochai Zur, Chaim Baskin, Evgenii Zheltonozhskii, Brian Chmiel, Itay Evron, Alex M. Bronstein, Avi Mendelson .[Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks](https://arxiv.org/pdf/1904.09872) .[J] arXiv preprint arXiv:1904.09872.
- Jaedeok Kim, Chiyoun Park, Hyun-Joo Jung, Yoonsuck Choe .[Differentiable Pruning Method for Neural Networks](https://arxiv.org/pdf/1904.10921) .[J] arXiv preprint arXiv:1904.10921.
- Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik .[Learning to Prune: Speeding up Repeated Computations](https://arxiv.org/pdf/1904.11875) .[J] arXiv preprint arXiv:1904.11875.
- Ting-Wu Chin, Ruizhou Ding, Cha Zhang, Diana Marculescu .[LeGR: Filter Pruning via Learned Global Ranking](https://arxiv.org/pdf/1904.12368) .[J] arXiv preprint arXiv:1904.12368.
- Nathan Wycoff, Prasanna Balaprakash, Fangfang Xia .[Neuromorphic Acceleration for Approximate Bayesian Inference on Neural Networks via Permanent Dropout](https://arxiv.org/pdf/1904.12904) .[J] arXiv preprint arXiv:1904.12904.
- Andrey Malinin, Bruno Mlodozeniec, Mark Gales .[Ensemble Distribution Distillation](https://arxiv.org/pdf/1905.00076) .[J] arXiv preprint arXiv:1905.00076.
- Xiaolong Ma, Geng Yuan, Sheng Lin, Zhengang Li, Hao Sun, Yanzhi Wang .[ResNet Can Be Pruned 60x: Introducing Network Purification and Unused Path Removal (P-RM) after Weight Pruning](https://arxiv.org/pdf/1905.00136) .[J] arXiv preprint arXiv:1905.00136.
- Bradley McDanel, Sai Qian Zhang, H. T. Kung, Xin Dong .[Full-stack Optimization for Accelerating CNNs with FPGA Validation](https://arxiv.org/pdf/1905.00462) .[J] arXiv preprint arXiv:1905.00462.
- Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang .[Compression of Acoustic Event Detection Models with Low-rank Matrix Factorization and Quantization Training](https://arxiv.org/pdf/1905.00855) .[J] arXiv preprint arXiv:1905.00855.
- Yiwu Yao, Weiqiang Yang, Haoqi Zhu .[Creating Lightweight Object Detectors with Model Compression for Deployment on Edge Devices](https://arxiv.org/pdf/1905.01787) .[J] arXiv preprint arXiv:1905.01787.
- 【Structured】【MobileNetV3】Howard A, Sandler M, Chu G, et al. [Searching for MobileNetV3](https://arxiv.org/pdf/1905.02244)[J]. arXiv preprint arXiv:1905.02244, 2019.
- Bin Yang, Lin Yang, Xiaochun Li, Wenhan Zhang, Hua Zhou, Yequn Zhang, Yongxiong Ren, Yinbo Shi .[2-bit Model Compression of Deep Convolutional Neural Network on ASIC Engine for Image Retrieval](https://arxiv.org/pdf/1905.03362) .[J] arXiv preprint arXiv:1905.03362.
- Jiong Zhang, Hsiang-fu Yu, Inderjit S. Dhillon .[AutoAssist: A Framework to Accelerate Training of Deep Neural Networks](https://arxiv.org/pdf/1905.03381) .[J] arXiv preprint arXiv:1905.03381.
- Gael Kamdem De Teyou .[Deep Learning Acceleration Techniques for Real Time Mobile Vision Applications](https://arxiv.org/pdf/1905.03418) .[J] arXiv preprint arXiv:1905.03418.
- Zhen Dong, Zhewei Yao, Amir Gholami, Michael Mahoney, Kurt Keutzer .[HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision](https://arxiv.org/pdf/1905.03696) .[J] arXiv preprint arXiv:1905.03696.
- Pravendra Singh, Vinay Kumar Verma, Piyush Rai, Vinay P. Namboodiri .[Play and Prune: Adaptive Filter Pruning for Deep Model Compression](https://arxiv.org/pdf/1905.04446) .[J] arXiv preprint arXiv:1905.04446.
- Yushu Feng, Huan Wang, Daniel T. Yi, Roland Hu .[Triplet Distillation for Deep Face Recognition](https://arxiv.org/pdf/1905.04457) .[J] arXiv preprint arXiv:1905.04457.
- 【Pruning】Xiaohan Ding, Guiguang Ding, Yuchen Guo, Jungong Han, Chenggang Yan .[Approximated Oracle Filter Pruning for Destructive CNN Width Optimization](https://arxiv.org/pdf/1905.04748) .[J] arXiv preprint arXiv:1905.04748.
- Sara Elkerdawy, Hong Zhang, Nilanjan Ray .[Lightweight Monocular Depth Estimation Model by Joint End-to-End Filter pruning](https://arxiv.org/pdf/1905.05212) .[J] arXiv preprint arXiv:1905.05212.
- Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Parichay Kapoor, Gu-Yeon Wei .[Network Pruning for Low-Rank Binary Indexing](https://arxiv.org/pdf/1905.05686) .[J] arXiv preprint arXiv:1905.05686.
- Youhei Akimoto, Nikolaus Hansen .[Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies](https://arxiv.org/pdf/1905.05885) .[J] arXiv preprint arXiv:1905.05885.
- 【Pruning】Chaoqi Wang, Roger Grosse, Sanja Fidler, Guodong Zhang .[EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis](https://arxiv.org/pdf/1905.05934) .[J] arXiv preprint arXiv:1905.05934.
【code:[alecwangcq/EigenDamage-Pytorch](https://github.com/alecwangcq/EigenDamage-Pytorch)】
- Corey Lammie, Wei Xiang, Mostafa Rahimi Azghadi .[Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL](https://arxiv.org/pdf/1905.06105) .[J] arXiv preprint arXiv:1905.06105.
- Chengcheng Li, Zi Wang, Dali Wang, Xiangyang Wang, Hairong Qi .[Investigating Channel Pruning through Structural Redundancy Reduction - A Statistical Study](https://arxiv.org/pdf/1905.06498) .[J] arXiv preprint arXiv:1905.06498.
- Kartikeya Bhardwaj, Naveen Suda, Radu Marculescu .[Dream Distillation: A Data-Independent Model Compression Framework](https://arxiv.org/pdf/1905.07072) .[J] arXiv preprint arXiv:1905.07072.
- Francesco Sovrano .[Combining Experience Replay with Exploration by Random Network Distillation](https://arxiv.org/pdf/1905.07579) .[J] arXiv preprint arXiv:1905.07579.
- Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma .[Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation](https://arxiv.org/pdf/1905.08094) .[J] arXiv preprint arXiv:1905.08094.
- Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty .[Zero-Shot Knowledge Distillation in Deep Networks](https://arxiv.org/pdf/1905.08114) .[J] arXiv preprint arXiv:1905.08114.
- Shashank Singh, Ashish Khetan, Zohar Karnin .[DARC: Differentiable ARchitecture Compression](https://arxiv.org/pdf/1905.08170) .[J] arXiv preprint arXiv:1905.08170.
- Simon Wiedemann, Heiner Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek .[DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression](https://arxiv.org/pdf/1905.08318) .[J] arXiv preprint arXiv:1905.08318.
- Kameron Decker Harris, Aleksandr Aravkin, Rajesh Rao, Bingni Wen Brunton .[Time-varying Autoregression with Low Rank Tensors](https://arxiv.org/pdf/1905.08389) .[J] arXiv preprint arXiv:1905.08389.
- Konstantinos Pitas, Mike Davies, Pierre Vandergheynst .[Revisiting hard thresholding for DNN pruning](https://arxiv.org/pdf/1905.08793) .[J] arXiv preprint arXiv:1905.08793.
- Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov .[Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned](https://arxiv.org/pdf/1905.09418) .[J] arXiv preprint arXiv:1905.09418.
- Xiaoxi He, Dawei Gao, Zimu Zhou, Yongxin Tong, Lothar Thiele .[Disentangling Redundancy for Multi-Task Pruning](https://arxiv.org/pdf/1905.09676) .[J] arXiv preprint arXiv:1905.09676.
- Xuanyi Dong, Yi Yang .[Network Pruning via Transformable Architecture Search](https://arxiv.org/pdf/1905.09717) .[J] arXiv preprint arXiv:1905.09717.
- Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein .[Adversarially Robust Distillation](https://arxiv.org/pdf/1905.09747) .[J] arXiv preprint arXiv:1905.09747.
- Se Jung Kwon, Dongsoo Lee, Byeongwook Kim, Parichay Kapoor, Baeseong Park, Gu-Yeon Wei .[Structured Compression by Unstructured Pruning for Sparse Quantized Neural Networks](https://arxiv.org/pdf/1905.10138) .[J] arXiv preprint arXiv:1905.10138.
- Alberto Marchisio, Beatrice Bussolino, Alessio Colucci, Muhammad Abdullah Hanif, Maurizio Martina, Guido Masera, Muhammad Shafique .[X-TrainCaps: Accelerated Training of Capsule Nets through Lightweight Software Optimizations](https://arxiv.org/pdf/1905.10142) .[J] arXiv preprint arXiv:1905.10142.
- Yash Akhauri .[HadaNets: Flexible Quantization Strategies for Neural Networks](https://arxiv.org/pdf/1905.10759) .[J] arXiv preprint arXiv:1905.10759.
- Hanyang Kong, Jian Zhao, Xiaoguang Tu, Junliang Xing, Shengmei Shen, Jiashi Feng .[Cross-Resolution Face Recognition via Prior-Aided Face Hallucination and Residual Knowledge Distillation](https://arxiv.org/pdf/1905.10777) .[J] arXiv preprint arXiv:1905.10777.
- Xiaoliang Dai, Hongxu Yin, Niraj K. Jha .[Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks](https://arxiv.org/pdf/1905.10952) .[J] arXiv preprint arXiv:1905.10952.
- Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtarik .[Natural Compression for Distributed Deep Learning](https://arxiv.org/pdf/1905.10988) .[J] arXiv preprint arXiv:1905.10988.
- Hanwei Wu, Ather Gattami, Markus Flierl .[Quantization-Based Regularization for Autoencoders](https://arxiv.org/pdf/1905.11062) .[J] arXiv preprint arXiv:1905.11062.
- Stefan Uhlich, Lukas Mauch, Kazuki Yoshiyama, Fabien Cardinaux, Javier Alonso Garcia, Stephen Tiedemann, Thomas Kemp, Akira Nakamura .[Differentiable Quantization of Deep Neural Networks](https://arxiv.org/pdf/1905.11452) .[J] arXiv preprint arXiv:1905.11452.
- Annie Cherkaev, Waiming Tai, Jeff Phillips, Vivek Srikumar .[Learning In Practice: Reasoning About Quantization](https://arxiv.org/pdf/1905.11478) .[J] arXiv preprint arXiv:1905.11478.
- Xiaocong Du, Zheng Li, Yu Cao .[CGaP: Continuous Growth and Pruning for Efficient Deep Learning](https://arxiv.org/pdf/1905.11533) .[J] arXiv preprint arXiv:1905.11533.
- Ankit Jalan, Purushottam Kar .[Accelerating Extreme Classification via Adaptive Feature Agglomeration](https://arxiv.org/pdf/1905.11769) .[J] arXiv preprint arXiv:1905.11769.
- Zhengguang Zhou, Wengang Zhou, Richang Hong, Houqiang Li .[Online Filter Clustering and Pruning for Efficient Convnets](https://arxiv.org/pdf/1905.11787) .[J] arXiv preprint arXiv:1905.11787.
- Gonçalo Mordido, Matthijs Van Keirsbilck, Alexander Keller .[Instant Quantization of Neural Networks using Monte Carlo Methods](https://arxiv.org/pdf/1905.12253) .[J] arXiv preprint arXiv:1905.12253.
- Ghouthi Boukli Hacene, Carlos Lassance, Vincent Gripon, Matthieu Courbariaux, Yoshua Bengio .[Attention Based Pruning for Shift Networks](https://arxiv.org/pdf/1905.12300) .[J] arXiv preprint arXiv:1905.12300.
- Manuele Rusci, Alessandro Capotondi, Luca Benini .[Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers](https://arxiv.org/pdf/1905.13082) .[J] arXiv preprint arXiv:1905.13082.
- Xiawu Zheng, Rongrong Ji, Lang Tang, Yan Wan, Baochang Zhang, Yongjian Wu, Yunsheng Wu, Ling Shao .[Dynamic Distribution Pruning for Efficient Network Architecture Search](https://arxiv.org/pdf/1905.13543) .[J] arXiv preprint arXiv:1905.13543.
- Kunping Li .[Quantization Loss Re-Learning Method](https://arxiv.org/pdf/1905.13568) .[J] arXiv preprint arXiv:1905.13568.
- S. Asim Ahmed .[L0 Regularization Based Neural Network Design and Compression](https://arxiv.org/pdf/1905.13652) .[J] arXiv preprint arXiv:1905.13652.
- Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi .[PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization](https://arxiv.org/pdf/1905.13727) .[J] arXiv preprint arXiv:1905.13727.
- Bonggun Shin, Hao Yang, Jinho D. Choi .[The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning](https://arxiv.org/pdf/1906.00095) .[J] arXiv preprint arXiv:1906.00095.
- Chuanguang Yang, Zhulin An, Chao Li, Boyu Diao, Yongjun Xu .[Multi-objective Pruning for CNNs using Genetic Algorithm](https://arxiv.org/pdf/1906.00399) .[J] arXiv preprint arXiv:1906.00399.
- Stefano Recanatesi, Matthew Farrell, Madhu Advani, Timothy Moore, Guillaume Lajoie, Eric Shea-Brown .[Dimensionality compression and expansion in Deep Neural Networks](https://arxiv.org/pdf/1906.00443) .[J] arXiv preprint arXiv:1906.00443.
- Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek Menon, Sun Choi, Kushal Datta, Vikram Saletore .[Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model](https://arxiv.org/pdf/1906.00532) .[J] arXiv preprint arXiv:1906.00532.
- 【Distillation】Jayashree Karlekar, Jiashi Feng, Zi Sian Wong, Sugiri Pranata .[Deep Face Recognition Model Compression via Knowledge Transfer and Distillation](https://arxiv.org/pdf/1906.00619) .[J] arXiv preprint arXiv:1906.00619.
- Yaniv Blumenfeld, Dar Gilboa, Daniel Soudry .[A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off](https://arxiv.org/pdf/1906.00771) .[J] arXiv preprint arXiv:1906.00771.
- Jyun-Yi Wu, Cheng Yu, Szu-Wei Fu, Chih-Ting Liu, Shao-Yi Chien, Yu Tsao .[Increasing Compactness Of Deep Learning Based Speech Enhancement Models With Parameter Pruning And Quantization Techniques](https://arxiv.org/pdf/1906.01078) .[J] arXiv preprint arXiv:1906.01078.
- Marc Riera, Jose-Maria Arnau, Antonio Gonzalez .[(Pen-) Ultimate DNN Pruning](https://arxiv.org/pdf/1906.02535) .[J] arXiv preprint arXiv:1906.02535.
- Alexander Finkelstein, Uri Almog, Mark Grobman .[Fighting Quantization Bias With Bias](https://arxiv.org/pdf/1906.03193) .[J] arXiv preprint arXiv:1906.03193.
- Waldyn Martinez .[Ensemble Pruning via Margin Maximization](https://arxiv.org/pdf/1906.03247) .[J] arXiv preprint arXiv:1906.03247.
- Tao Wang, Li Yuan, Xiaopeng Zhang, Jiashi Feng .[Distilling Object Detectors with Fine-grained Feature Imitation](https://arxiv.org/pdf/1906.03609) .[J] arXiv preprint arXiv:1906.03609.
- Brian R. Bartoldson, Ari S. Morcos, Adrian Barbu, Gordon Erlebacher .[The Generalization-Stability Tradeoff in Neural Network Pruning](https://arxiv.org/pdf/1906.03728) .[J] arXiv preprint arXiv:1906.03728.
- Yasutoshi Ida, Yasuhiro Fujiwara .[Network Implosion: Effective Model Compression for ResNets via Static Layer Pruning and Retraining](https://arxiv.org/pdf/1906.03826) .[J] arXiv preprint arXiv:1906.03826.
- Jack Turner, Elliot J. Crowley, Gavin Gray, Amos Storkey, Michael O'Boyle .[BlockSwap: Fisher-guided Block Substitution for Network Compression](https://arxiv.org/pdf/1906.04113) .[J] arXiv preprint arXiv:1906.04113.
- Kaveena Persand, Andrew Anderson, David Gregg .[A Taxonomy of Channel Pruning Signals in CNNs](https://arxiv.org/pdf/1906.04675) .[J] arXiv preprint arXiv:1906.04675.
- Markus Nagel, Mart van Baalen, Tijmen Blankevoort, Max Welling .[Data-Free Quantization through Weight Equalization and Bias Correction](https://arxiv.org/pdf/1906.04721) .[J] arXiv preprint arXiv:1906.04721.
- Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina .[Run-Time Efficient RNN Compression for Inference on Edge Devices](https://arxiv.org/pdf/1906.04886) .[J] arXiv preprint arXiv:1906.04886.
- Sam Shleifer, Eric Prokop .[Using Small Proxy Datasets to Accelerate Hyperparameter Search](https://arxiv.org/pdf/1906.04887) .[J] arXiv preprint arXiv:1906.04887.
- Guenther Schindler, Wolfgang Roth, Franz Pernkopf, Holger Froening .[Parameterized Structured Pruning for Deep Neural Networks](https://arxiv.org/pdf/1906.05180) .[J] arXiv preprint arXiv:1906.05180.
- Jian-Feng Cai, Lizhang Miao, Yang Wang, Yin Xian .[Optimal low rank tensor recovery](https://arxiv.org/pdf/1906.05346) .[J] arXiv preprint arXiv:1906.05346.
- Erik Englesson, Hossein Azizpour .[Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation](https://arxiv.org/pdf/1906.05419) .[J] arXiv preprint arXiv:1906.05419.
- Arip Asadulaev, Igor Kuznetsov, Andrey Filchenkov .[Linear Distillation Learning](https://arxiv.org/pdf/1906.05431) .[J] arXiv preprint arXiv:1906.05431.
- Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, Philip H. S. Torr .[A Signal Propagation Perspective for Pruning Neural Networks at Initialization](https://arxiv.org/pdf/1906.06307) .[J] arXiv preprint arXiv:1906.06307.
- Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, Phil Blunsom .[Scalable Syntax-Aware Language Models Using Knowledge Distillation](https://arxiv.org/pdf/1906.06438) .[J] arXiv preprint arXiv:1906.06438.
- Deniz Oktay, Johannes Ballé, Saurabh Singh, Abhinav Shrivastava .[Model Compression by Entropy Penalized Reparameterization](https://arxiv.org/pdf/1906.06624) .[J] arXiv preprint arXiv:1906.06624.
- Jingkuan Song, Xiaosu Zhu, Lianli Gao, Xin-Shun Xu, Wu Liu, Heng Tao Shen .[Deep Recurrent Quantization for Generating Sequential Binary Codes](https://arxiv.org/pdf/1906.06699) .[J] arXiv preprint arXiv:1906.06699.
- Liangjiang Wen, Xueyang Zhang, Haoli Bai, Zenglin Xu .[Structured Pruning of Recurrent Neural Networks through Neuron Selection](https://arxiv.org/pdf/1906.06847) .[J] arXiv preprint arXiv:1906.06847.
- Dong Wang, Lei Zhou, Xiao Bai, Jun Zhou .[A One-step Pruning-recovery Framework for Acceleration of Convolutional Neural Networks](https://arxiv.org/pdf/1906.07488) .[J] arXiv preprint arXiv:1906.07488.
- Kevin Alexander Laube, Andreas Zell .[Prune and Replace NAS](https://arxiv.org/pdf/1906.07528) .[J] arXiv preprint arXiv:1906.07528.
- Qing Yang, Wei Wen, Zuoguan Wang, Hai Li .[Joint Pruning on Activations and Weights for Efficient Neural Networks](https://arxiv.org/pdf/1906.07875) .[J] arXiv preprint arXiv:1906.07875.
- Zhuo Chen, Jiyuan Zhang, Ruizhou Ding, Diana Marculescu .[ViP: Virtual Pooling for Accelerating CNN-based Image Classification and Object Detection](https://arxiv.org/pdf/1906.07912) .[J] arXiv preprint arXiv:1906.07912.
- Maryam Parsa, Aayush Ankit, Amirkoushyar Ziabari, Kaushik Roy .[PABO: Pseudo Agent-Based Multi-Objective Bayesian Hyperparameter Optimization for Efficient Neural Accelerator Design](https://arxiv.org/pdf/1906.08167) .[J] arXiv preprint arXiv:1906.08167.
- Wei Hong, Jinke Yu, Fan Zong .[GAN-Knowledge Distillation for one-stage Object Detection](https://arxiv.org/pdf/1906.08467) .[J] arXiv preprint arXiv:1906.08467.
- Bethge J, Yang H, Bornstein M, et al. [Back to Simplicity: How to Train Accurate BNNs from Scratch?](https://arxiv.org/pdf/1906.08637)[J]. arXiv preprint arXiv:1906.08637, 2019.
- Le Thanh Nguyen-Meidine, Eric Granger, Madhu Kiran, Louis-Antoine Blais-Morin .[An Improved Trade-off Between Accuracy and Complexity with Progressive Gradient Pruning](https://arxiv.org/pdf/1906.08746) .[J] arXiv preprint arXiv:1906.08746.
- Wenxiao Wang, Cong Fu, Jishun Guo, Deng Cai, Xiaofei He .[COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning](https://arxiv.org/pdf/1906.10337) .[J] arXiv preprint arXiv:1906.10337.
- 【Pruning】Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, Jan Kautz .[Importance Estimation for Neural Network Pruning](https://arxiv.org/pdf/1906.10771) .[J] arXiv preprint arXiv:1906.10771.
【code:[NVlabs/Taylor_pruning](https://github.com/NVlabs/Taylor_pruning)】
- Zhenchuan Yang, Chun Zhang, Weibin Zhang, Jianxiu Jin, Dongpeng Chen .[Essence Knowledge Distillation for Speech Recognition](https://arxiv.org/pdf/1906.10834) .[J] arXiv preprint arXiv:1906.10834.
- Linguang Zhang, Maciej Halber, Szymon Rusinkiewicz .[Accelerating Large-Kernel Convolution Using Summed-Area Tables](https://arxiv.org/pdf/1906.11367) .[J] arXiv preprint arXiv:1906.11367.
- Jonathan Frankle, David Bau .[Dissecting Pruned Neural Networks](https://arxiv.org/pdf/1907.00262) .[J] arXiv preprint arXiv:1907.00262.
- Wen-Pu Cai, Wu-Jun Li .[Weight Normalization based Quantization for Deep Neural Network Compression](https://arxiv.org/pdf/1907.00593) .[J] arXiv preprint arXiv:1907.00593.
- Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang .[Compression of Acoustic Event Detection Models With Quantized Distillation](https://arxiv.org/pdf/1907.00873) .[J] arXiv preprint arXiv:1907.00873.
- Kaijie Tu .[Accelerating Deconvolution on Unmodified CNN Accelerators for Generative Adversarial Networks -- A Software Approach](https://arxiv.org/pdf/1907.01773) .[J] arXiv preprint arXiv:1907.01773.
- Yanzhi Wang, Shaokai Ye, Zhezhi He, Xiaolong Ma, Linfeng Zhang, Sheng Lin, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma .[Non-structured DNN Weight Pruning Considered Harmful](https://arxiv.org/pdf/1907.02124) .[J] arXiv preprint arXiv:1907.02124.
- 【Distillation】Seunghyun Lee, Byung Cheol Song .[Graph-based Knowledge Distillation by Multi-head Attention Network](https://arxiv.org/pdf/1907.02226) .[J] arXiv preprint arXiv:1907.02226.
- Hugo Masson, Amran Bhuiyan, Le Thanh Nguyen-Meidine, Mehrsan Javan, Parthipan Siva, Ismail Ben Ayed, Eric Granger .[A Survey of Pruning Methods for Efficient Person Re-identification Across Domains](https://arxiv.org/pdf/1907.02547) .[J] arXiv preprint arXiv:1907.02547.
- Xiaopeng Sun, Wen Lu, Rui Wang, Furui Bai .[Distilling with Residual Network for Single Image Super Resolution](https://arxiv.org/pdf/1907.02843) .[J] arXiv preprint arXiv:1907.02843.
- Ning Liu, Xiaolong Ma, Zhiyuan Xu, Yanzhi Wang, Jian Tang, Jieping Ye .[AutoSlim: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates](https://arxiv.org/pdf/1907.03141) .[J] arXiv preprint arXiv:1907.03141.
- Łukasz Dudziak, Mohamed S. Abdelfattah, Ravichander Vipperla, Stefanos Laskaridis, Nicholas D. Lane .[ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning](https://arxiv.org/pdf/1907.03540) .[J] arXiv preprint arXiv:1907.03540.
- Ben Mussay, Samson Zhou, Vladimir Braverman, Dan Feldman .[On Activation Function Coresets for Network Pruning](https://arxiv.org/pdf/1907.04018) .[J] arXiv preprint arXiv:1907.04018.
- Biao Qian, Yang Wang .[A Targeted Acceleration and Compression Framework for Low bit Neural Networks](https://arxiv.org/pdf/1907.05271) .[J] arXiv preprint arXiv:1907.05271.
- Daquan Zhou, Xiaojie Jin, Kaixin Wang, Jianchao Yang, Jiashi Feng .[Deep Model Compression via Filter Auto-sampling](https://arxiv.org/pdf/1907.05642) .[J] arXiv preprint arXiv:1907.05642.
- 【Quantization】Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou .[And the Bit Goes Down: Revisiting the Quantization of Neural Networks](https://arxiv.org/pdf/1907.05686) .[J] arXiv preprint arXiv:1907.05686.
【code:[facebookresearch/kill-the-bits](https://github.com/facebookresearch/kill-the-bits)】
- Kang-Ho Lee, JoonHyun Jeong, Sung-Ho Bae .[An Inter-Layer Weight Prediction and Quantization for Deep Neural Networks based on a Smoothly Varying Weight Hypothesis](https://arxiv.org/pdf/1907.06835) .[J] arXiv preprint arXiv:1907.06835.
- Zhenhui Xu, Guolin Ke, Jia Zhang, Jiang Bian, Tie-Yan Liu .[Light Multi-segment Activation for Model Compression](https://arxiv.org/pdf/1907.06870) .[J] arXiv preprint arXiv:1907.06870.
- Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina, Muhammad Abdullah Hanif, Muhammad Shafique .[ALWANN: Automatic Layer-Wise Approximation of Deep Neural Network Accelerators without Retraining](https://arxiv.org/pdf/1907.07229) .[J] arXiv preprint arXiv:1907.07229.
- Besher Alhalabi, Mohamed Medhat Gaber, Shadi Basurra .[EnSyth: A Pruning Approach to Synthesis of Deep Learning Ensembles](https://arxiv.org/pdf/1907.09286) .[J] arXiv preprint arXiv:1907.09286.
- Haoran Zhao, Xin Sun, Junyu Dong, Changrui Chen, Zihe Dong .[Highlight Every Step: Knowledge Distillation via Collaborative Teaching](https://arxiv.org/pdf/1907.09643) .[J] arXiv preprint arXiv:1907.09643.
- Frederick Tung, Greg Mori .[Similarity-Preserving Knowledge Distillation](https://arxiv.org/pdf/1907.09682) .[J] arXiv preprint arXiv:1907.09682.
- Shivangi Srivastava, Maxim Berman, Matthew B. Blaschko, Devis Tuia .[Adaptive Compression-based Lifelong Learning](https://arxiv.org/pdf/1907.09695) .[J] arXiv preprint arXiv:1907.09695.
- Ning Wang, Wengang Zhou, Yibing Song, Chao Ma, Houqiang Li .[Real-Time Correlation Tracking via Joint Model Compression and Transfer](https://arxiv.org/pdf/1907.09831) .[J] arXiv preprint arXiv:1907.09831.
- Yuanpei Liu, Xingping Dong, Wenguan Wang, Jianbing Shen .[Teacher-Students Knowledge Distillation for Siamese Trackers](https://arxiv.org/pdf/1907.10586) .[J] arXiv preprint arXiv:1907.10586.
- Kartikeya Bhardwaj, Chingyi Lin, Anderson Sartor, Radu Marculescu .[Memory- and Communication-Aware Model Compression for Distributed Deep Learning Inference on IoT](https://arxiv.org/pdf/1907.11804) .[J] arXiv preprint arXiv:1907.11804.
- Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu .[Learning Instance-wise Sparsity for Accelerating Deep Models](https://arxiv.org/pdf/1907.11840) .[J] arXiv preprint arXiv:1907.11840.
- Simon Wiedemann, Heiner Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Tung Nguyen, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek .[DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks](https://arxiv.org/pdf/1907.11900) .[J] arXiv preprint arXiv:1907.11900.
- Jiajia Guo, Jinghe Wang, Chao-Kai Wen, Shi Jin, Geoffrey Ye Li .[Compression and Acceleration of Neural Networks for Communications](https://arxiv.org/pdf/1907.13269) .[J] arXiv preprint arXiv:1907.13269.
- Xucheng Ye, Jianlei Yang, Pengcheng Dai, Yiran Chen, Weisheng Zhao .[Accelerating CNN Training by Sparsifying Activation Gradients](https://arxiv.org/pdf/1908.00173) .[J] arXiv preprint arXiv:1908.00173.
- 【Distillation】Yuenan Hou, Zheng Ma, Chunxiao Liu, Chen Change Loy .[Learning Lightweight Lane Detection CNNs by Self Attention Distillation](https://arxiv.org/pdf/1908.00821) .[J] arXiv preprint arXiv:1908.00821.
- Muhamad Risqi U. Saputra, Pedro P. B. de Gusmao, Yasin Almalioglu, Andrew Markham, Niki Trigoni .[Distilling Knowledge From a Deep Pose Regressor Network](https://arxiv.org/pdf/1908.00858) .[J] arXiv preprint arXiv:1908.00858.
- MohammadHossein AskariHemmat, Sina Honari, Lucas Rouhier, Christian S. Perone, Julien Cohen-Adad, Yvon Savaria, Jean-Pierre David .[U-Net Fixed-Point Quantization for Medical Image Segmentation](https://arxiv.org/pdf/1908.01073) .[J] arXiv preprint arXiv:1908.01073.
- Haibao Yu, Tuopu Wen, Guangliang Cheng, Jiankai Sun, Qi Han, Jianping Shi .[GDRQ: Group-based Distribution Reshaping for Quantization](https://arxiv.org/pdf/1908.01477) .[J] arXiv preprint arXiv:1908.01477.
- Wei-Ting Wang, Han-Lin Li, Wei-Shiang Lin, Cheng-Ming Chiang, Yi-Min Tsai .[Architecture-aware Network Pruning for Vision Quality Applications](https://arxiv.org/pdf/1908.02125) .[J] arXiv preprint arXiv:1908.02125.
- Yunxiang Zhang, Chenglong Zhao, Bingbing Ni, Jian Zhang, Haoran Deng .[Exploiting Channel Similarity for Accelerating Deep Convolutional Neural Networks](https://arxiv.org/pdf/1908.02620) .[J] arXiv preprint arXiv:1908.02620.
- Boyu Zhang, Azadeh Davoodi, Yu Hen Hu .[Efficient Inference of CNNs via Channel Pruning](https://arxiv.org/pdf/1908.03266) .[J] arXiv preprint arXiv:1908.03266.
- Pierre Humbert, Julien Audiffren, Laurent Oudre, Nicolas Vayatis .[Multivariate Convolutional Sparse Coding with Low Rank Tensor](https://arxiv.org/pdf/1908.03367) .[J] arXiv preprint arXiv:1908.03367.
- Chaithanya Kumar Mummadi, Tim Genewein, Dan Zhang, Thomas Brox, Volker Fischer .[Group Pruning using a Bounded-Lp norm for Group Gating and Regularization](https://arxiv.org/pdf/1908.03463) .[J] arXiv preprint arXiv:1908.03463.
- Stanislav Morozov, Artem Babenko .[Unsupervised Neural Quantization for Compressed-Domain Similarity Search](https://arxiv.org/pdf/1908.03883) .[J] arXiv preprint arXiv:1908.03883.
- Jogendra Nath Kundu, Nishank Lakkakula, R. Venkatesh Babu .[UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation](https://arxiv.org/pdf/1908.03884) .[J] arXiv preprint arXiv:1908.03884.
- Divyam Madaan, Sung Ju Hwang .[Adversarial Neural Pruning](https://arxiv.org/pdf/1908.04355) .[J] arXiv preprint arXiv:1908.04355.
- Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, Junjie Yan .[Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks](https://arxiv.org/pdf/1908.05033) .[J] arXiv preprint arXiv:1908.05033.
- Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein .[Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding](https://arxiv.org/pdf/1908.05161) .[J] arXiv preprint arXiv:1908.05161.
- Ziheng Wang, Sree Harsha Nelaturu .[Accelerated CNN Training Through Gradient Approximation](https://arxiv.org/pdf/1908.05460) .[J] arXiv preprint arXiv:1908.05460.
- Chaoyang Wang, Chen Kong, Simon Lucey .[Distill Knowledge from NRSfM for Weakly Supervised 3D Pose Learning](https://arxiv.org/pdf/1908.06377) .[J] arXiv preprint arXiv:1908.06377.
- Shreyas Kolala Venkataramanaiah, Yufei Ma, Shihui Yin, Eriko Nurvitadhi, Aravind Dasu, Yu Cao, Jae-sun Seo .[Automatic Compiler Based FPGA Accelerator for CNN Training](https://arxiv.org/pdf/1908.06724) .[J] arXiv preprint arXiv:1908.06724.
- Yasuo Yamane, Kenichi Kobayashi .[A New Fast Weighted All-pairs Shortest Path Search Algorithm Based on Pruning by Shortest Path Trees](https://arxiv.org/pdf/1908.06798) .[J] arXiv preprint arXiv:1908.06798.
- Yasuo Yamane, Kenichi Kobayashi .[A New Fast Unweighted All-pairs Shortest Path Search Algorithm Based on Pruning by Shortest Path Trees](https://arxiv.org/pdf/1908.06806) .[J] arXiv preprint arXiv:1908.06806.
- Mauricio Orbes-Arteaga, Jorge Cardoso, Lauge Sørensen, Christian Igel, Sebastien Ourselin, Marc Modat, Mads Nielsen, Akshay Pai .[Knowledge distillation for semi-supervised domain adaptation](https://arxiv.org/pdf/1908.07355) .[J] arXiv preprint arXiv:1908.07355.
- Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides .[Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement](https://arxiv.org/pdf/1908.08520) .[J] arXiv preprint arXiv:1908.08520.
- Sunwoo Kim, Mrinmoy Maity, Minje Kim .[Incremental Binarization On Recurrent Neural Networks For Single-Channel Source Separation](https://arxiv.org/pdf/1908.08898) .[J] arXiv preprint arXiv:1908.08898.
- Yawei Li, Shuhang Gu, Luc Van Gool, Radu Timofte .[Learning Filter Basis for Convolutional Neural Network Compression](https://arxiv.org/pdf/1908.08932) .[J] arXiv preprint arXiv:1908.08932.
- Udit Gupta, Brandon Reagen, Lillian Pentecost, Marco Donato, Thierry Tambe, Alexander M. Rush, Gu-Yeon Wei, David Brooks .[MASR: A Modular Accelerator for Sparse RNNs](https://arxiv.org/pdf/1908.08976) .[J] arXiv preprint arXiv:1908.08976.
- Xuecheng Nie, Yuncheng Li, Linjie Luo, Ning Zhang, Jiashi Feng .[Dynamic Kernel Distillation for Efficient Pose Estimation in Videos](https://arxiv.org/pdf/1908.09216) .[J] arXiv preprint arXiv:1908.09216.
- Jiajun Deng, Yingwei Pan, Ting Yao, Wengang Zhou, Houqiang Li, Tao Mei .[Relation Distillation Networks for Video Object Detection](https://arxiv.org/pdf/1908.09511) .[J] arXiv preprint arXiv:1908.09511.
- Ting Chen, Yizhou Sun .[Differentiable Product Quantization for End-to-End Embedding Compression](https://arxiv.org/pdf/1908.09756) .[J] arXiv preprint arXiv:1908.09756.
- Xiaolong Ma, Geng Yuan, Sheng Lin, Caiwen Ding, Fuxun Yu, Tao Liu, Wujie Wen, Xiang Chen, Yanzhi Wang .[Tiny but Accurate: A Pruned, Quantized and Optimized Memristor Crossbar Framework for Ultra Efficient DNN Implementation](https://arxiv.org/pdf/1908.10017) .[J] arXiv preprint arXiv:1908.10017.
- Saurabh Kumar, Biplab Banerjee, Subhasis Chaudhuri .[Online Sensor Hallucination via Knowledge Distillation for Multimodal Image Classification](https://arxiv.org/pdf/1908.10559) .[J] arXiv preprint arXiv:1908.10559.
- Tong Geng, Ang Li, Tianqi Wang, Chunshu Wu, Yanfei Li, Antonino Tumeo, Martin Herbordt .[UWB-GCN: Hardware Acceleration of Graph-Convolution-Network through Runtime Workload Rebalancing](https://arxiv.org/pdf/1908.10834) .[J] arXiv preprint arXiv:1908.10834.
- Angelo Garofalo, Manuele Rusci, Francesco Conti, Davide Rossi, Luca Benini .[PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors](https://arxiv.org/pdf/1908.11263) .[J] arXiv preprint arXiv:1908.11263.
- Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner .[Survey and Benchmarking of Machine Learning Accelerators](https://arxiv.org/pdf/1908.11348) .[J] arXiv preprint arXiv:1908.11348.
- Lukas Cavigelli, Georg Rutishauser, Luca Benini .[EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators](https://arxiv.org/pdf/1908.11645) .[J] arXiv preprint arXiv:1908.11645.
- Geng Yuan, Xiaolong Ma, Caiwen Ding, Sheng Lin, Tianyun Zhang, Zeinab S. Jalali, Yilong Zhao, Li Jiang, Sucheta Soundarajan, Yanzhi Wang .[An Ultra-Efficient Memristor-Based DNN Framework with Structured Weight Pruning and Quantization Using ADMM](https://arxiv.org/pdf/1908.11691) .[J] arXiv preprint arXiv:1908.11691.
- Yuke Wang, Boyuan Feng, Gushu Li, Lei Deng, Yuan Xie, Yufei Ding .[AccD: A Compiler-based Framework for Accelerating Distance-related Algorithms on CPU-FPGA Platforms](https://arxiv.org/pdf/1908.11781) .[J] arXiv preprint arXiv:1908.11781.
- Sudarshan Srinivasan, Pradeep Janedula, Saurabh Dhoble, Sasikanth Avancha, Dipankar Das, Naveen Mellempudi, Bharat Daga, Martin Langhammer, Gregg Baeckler, Bharat Kaul .[High Performance Scalable FPGA Accelerator for Deep Neural Networks](https://arxiv.org/pdf/1908.11809) .[J] arXiv preprint arXiv:1908.11809.
- Amey Agrawal, Rohit Karlupia .[Learning Digital Circuits: A Journey Through Weight Invariant Self-Pruning Neural Networks](https://arxiv.org/pdf/1909.00052) .[J] arXiv preprint arXiv:1909.00052.
- Lei He .[EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks](https://arxiv.org/pdf/1909.00155) .[J] arXiv preprint arXiv:1909.00155.
- Ye Yu, Niraj K. Jha .[SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference](https://arxiv.org/pdf/1909.00557) .[J] arXiv preprint arXiv:1909.00557.
- Bharti Munjal, Fabio Galasso, Sikandar Amin .[Knowledge Distillation for End-to-End Person Search](https://arxiv.org/pdf/1909.01058) .[J] arXiv preprint arXiv:1909.01058.
- Yang Li, Thomas Strohmer .[What Happens on the Edge, Stays on the Edge: Toward Compressive Deep Learning](https://arxiv.org/pdf/1909.01539) .[J] arXiv preprint arXiv:1909.01539.
- Sungho Shin, Yoonho Boo, Wonyong Sung .[Empirical Analysis of Knowledge Distillation Technique for Optimization of Quantized Deep Neural Networks](https://arxiv.org/pdf/1909.01688) .[J] arXiv preprint arXiv:1909.01688.
- Chengyi Wang, Shuangzhi Wu, Shujie Liu .[Accelerating Transformer Decoding via a Hybrid of Self-attention and Recurrent Neural Network](https://arxiv.org/pdf/1909.02279) .[J] arXiv preprint arXiv:1909.02279.
- Yew Ken Chia, Sam Witteveen, Martin Andrews .[Transformer to CNN: Label-scarce distillation for efficient text classification](https://arxiv.org/pdf/1909.03508) .[J] arXiv preprint arXiv:1909.03508.
- Wenming Yang, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue, Qingmin Liao .[LCSCNet: Linear Compressing Based Skip-Connecting Network for Image Super-Resolution](https://arxiv.org/pdf/1909.03573) .[J] arXiv preprint arXiv:1909.03573.
【code:[XuechenZhang123/LCSC](https://github.com/XuechenZhang123/LCSC)】
- Shuang Gao, Xin Liu, Lung-Sheng Chien, William Zhang, Jose M. Alvarez .[VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks](https://arxiv.org/pdf/1909.04485) .[J] arXiv preprint arXiv:1909.04485.
- Ramchalam Kinattinkara Ramakrishnan, Eyyüb Sari, Vahid Partovi Nia .[Differentiable Mask Pruning for Neural Networks](https://arxiv.org/pdf/1909.04567) .[J] arXiv preprint arXiv:1909.04567.
- Zhaoyang Zeng, Bei Liu, Jianlong Fu, Hongyang Chao, Lei Zhang .[WSOD^2: Learning Bottom-up and Top-down Objectness Distillation for Weakly-supervised Object Detection](https://arxiv.org/pdf/1909.04972) .[J] arXiv preprint arXiv:1909.04972.
- Xiaolong Ma, Fu-Ming Guo, Wei Niu, Xue Lin, Jian Tang, Kaisheng Ma, Bin Ren, Yanzhi Wang .[PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices](https://arxiv.org/pdf/1909.05073) .[J] arXiv preprint arXiv:1909.05073.
- Jiancheng Lyu, Spencer Sheen .[A Channel-Pruned and Weight-Binarized Convolutional Neural Network for Keyword Spotting](https://arxiv.org/pdf/1909.05623) .[J] arXiv preprint arXiv:1909.05623.
- Mostafa Elhoushi, Ye Henry Tian, Zihao Chen, Farhan Shafiq, Joey Yiwei Li .[Accelerating Training using Tensor Decomposition](https://arxiv.org/pdf/1909.05675) .[J] arXiv preprint arXiv:1909.05675.
- Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer .[Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT](https://arxiv.org/pdf/1909.05840) .[J] arXiv preprint arXiv:1909.05840.
- Shubham Jain, Sumeet Kumar Gupta, Anand Raghunathan .[TiM-DNN: Ternary in-Memory accelerator for Deep Neural Networks](https://arxiv.org/pdf/1909.06892) .[J] arXiv preprint arXiv:1909.06892.
- Hyoukjun Kwon, Liangzhen Lai, Tushar Krishna, Vikas Chandra .[HERALD: Optimizing Heterogeneous DNN Accelerators for Edge Devices](https://arxiv.org/pdf/1909.07437) .[J] arXiv preprint arXiv:1909.07437.
- Xiaoyu Yu, Yuwei Wang, Jie Miao, Ephrem Wu, Heng Zhang, Yu Meng, Bo Zhang, Biao Min, Dewei Chen, Jianlin Gao .[A Data-Center FPGA Acceleration Platform for Convolutional Neural Networks](https://arxiv.org/pdf/1909.07973) .[J] arXiv preprint arXiv:1909.07973.
- Umar Asif, Jianbin Tang, Stefan Harrer .[Ensemble Knowledge Distillation for Learning Improved and Efficient Networks](https://arxiv.org/pdf/1909.08097) .[J] arXiv preprint arXiv:1909.08097.
- Zhonghui You, Kun Yan, Jinmian Ye, Meng Ma, Ping Wang .[Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks](https://arxiv.org/pdf/1909.08174) .[J] arXiv preprint arXiv:1909.08174.
【code:[youzhonghui/gate-decorator-pruning](https://github.com/youzhonghui/gate-decorator-pruning)】
- Rui Chen, Haizhou Ai, Chong Shang, Long Chen, Zijie Zhuang .[Learning Lightweight Pedestrian Detector with Hierarchical Knowledge Distillation](https://arxiv.org/pdf/1909.09325) .[J] arXiv preprint arXiv:1909.09325.
- Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu .[TinyBERT: Distilling BERT for Natural Language Understanding](https://arxiv.org/pdf/1909.10351) .[J] arXiv preprint arXiv:1909.10351.
- Rahim Entezari, Olga Saukh .[Class-dependent Compression of Deep Neural Networks](https://arxiv.org/pdf/1909.10364) .[J] arXiv preprint arXiv:1909.10364.
- SeongUk Park, Nojun Kwak .[FEED: Feature-level Ensemble for Knowledge Distillation](https://arxiv.org/pdf/1909.10754) .[J] arXiv preprint arXiv:1909.10754.
- Taiji Suzuki .[Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network](https://arxiv.org/pdf/1909.11274) .[J] arXiv preprint arXiv:1909.11274.
- Chun Quan, Jun-Gi Jang, Hyun Dong Lee, U Kang .[FALCON: Fast and Lightweight Convolution for Compressing and Accelerating CNN](https://arxiv.org/pdf/1909.11321) .[J] arXiv preprint arXiv:1909.11321.
- Zhe Xu, Ray C. C. Cheung .[Accurate and Compact Convolutional Neural Networks with Trained Binarization](https://arxiv.org/pdf/1909.11366) .[J] arXiv preprint arXiv:1909.11366.
- Li Yuan, Francis E.H. Tay, Guilin Li, Tao Wang, Jiashi Feng .[Revisit Knowledge Distillation: a Teacher-free Framework](https://arxiv.org/pdf/1909.11723) .[J] arXiv preprint arXiv:1909.11723.
【code:[yuanli2333/Teacher-free-Knowledge-Distillation](https://github.com/yuanli2333/Teacher-free-Knowledge-Distillation)】
- Zheng Hui, Xinbo Gao, Yunchu Yang, Xiumei Wang .[Lightweight Image Super-Resolution with Information Multi-distillation Network](https://arxiv.org/pdf/1909.11856) .[J] arXiv preprint arXiv:1909.11856.
- Grégoire Morin, Ryan Razani, Vahid Partovi Nia, Eyyüb Sari .[Smart Ternary Quantization](https://arxiv.org/pdf/1909.12205) .[J] arXiv preprint arXiv:1909.12205.
- Yuang Jiang, Shiqiang Wang, Bong Jun Ko, Wei-Han Lee, Leandros Tassiulas .[Model Pruning Enables Efficient Federated Learning on Edge Devices](https://arxiv.org/pdf/1909.12326) .[J] arXiv preprint arXiv:1909.12326.
- Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu .[Pruning from Scratch](https://arxiv.org/pdf/1909.12579) .[J] arXiv preprint arXiv:1909.12579.
- Xiaohan Ding, Guiguang Ding, Xiangxin Zhou, Yuchen Guo, Ji Liu, Jungong Han .[Global Sparse Momentum SGD for Pruning Very Deep Neural Networks](https://arxiv.org/pdf/1909.12778) .[J] arXiv preprint arXiv:1909.12778.
【code:[DingXiaoH/GSM-SGD](https://github.com/DingXiaoH/GSM-SGD)】
- Jiao Xie, Shaohui Lin, Yichen Zhang, Linkai Luo .[Training convolutional neural networks with cheap convolutions and online distillation](https://arxiv.org/pdf/1909.13063) .[J] arXiv preprint arXiv:1909.13063.
【code:[EthanZhangYC/OD-cheap-convolution](https://github.com/EthanZhangYC/OD-cheap-convolution)】
- Yuhang Li, Xin Dong, Wei Wang .[Additive Powers-of-Two Quantization: A Non-uniform Discretization for Neural Networks](https://arxiv.org/pdf/1909.13144) .[J] arXiv preprint arXiv:1909.13144.
- Caiwen Ding, Shuo Wang, Ning Liu, Kaidi Xu, Yanzhi Wang, Yun Liang .[REQ-YOLO: A Resource-Aware, Efficient Quantization Framework for Object Detection on FPGAs](https://arxiv.org/pdf/1909.13396) .[J] arXiv preprint arXiv:1909.13396.
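
A large share of the 2019 entries above are knowledge-distillation papers that build on the softened-logits objective of Hinton et al. For orientation, here is a minimal PyTorch-style sketch of that classic loss; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters, not values taken from any paper above:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic KD loss: soft teacher targets plus hard-label cross-entropy."""
    # Soften both distributions with temperature T; the KL term is scaled by T^2
    # so its gradient magnitude stays comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```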

### 2020

### 2021

### 2022

### 2023

---
## BOOKS

---
## BLOGS & ARTICLES
- [All The Ways You Can Compress Transformers](https://www.kaggle.com/code/rhtsingh/all-the-ways-you-can-compress-transformers)
- [Hardware-Efficient Automatic Tensor Decomposition for Transformer Compression](https://research.nvidia.com/publication/2022-12_heat-hardware-efficient-automatic-tensor-decomposition-transformer-compression)

---
## LIBRARIES
- [OpenBMB/BMCook](https://github.com/OpenBMB/BMCook):  Model Compression for Big Models;
- [NVIDIA TensorRT](https://developer.nvidia.com/tensorrt):  Programmable Inference Accelerator;  
- [Tencent/PocketFlow](https://github.com/Tencent/PocketFlow):  An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications;
- [dmlc/tvm](https://github.com/dmlc/tvm):  Open deep learning compiler stack for cpu, gpu and specialized accelerators;
- [Tencent/ncnn](https://github.com/Tencent/ncnn):  ncnn is a high-performance neural network inference framework optimized for the mobile platform;
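
Several of these libraries automate the magnitude-pruning loop studied by many of the pruning papers above. As a point of reference, a minimal sketch using PyTorch's built-in pruning utilities (the toy model and the 30% sparsity level are arbitrary examples, not recommendations):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 30% smallest-magnitude weights in every Linear layer
# (unstructured L1 pruning; fine-tuning would normally follow).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor
```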

---
## PROJECTS

- [pytorch/glow](https://github.com/pytorch/glow):  Compiler for Neural Network hardware accelerators;
- [NervanaSystems/neon](https://github.com/NervanaSystems/neon):  Intel® Nervana™ reference deep learning framework committed to best performance on all hardware;
- [NervanaSystems/distiller](https://github.com/NervanaSystems/distiller):  Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research;
- [MUSCO](https://github.com/juliagusak/musco) - framework for model compression using tensor decompositions (PyTorch)
- [OAID/Tengine](https://github.com/OAID/Tengine):  Tengine is a lightweight, high-performance, modular inference engine for embedded devices;
- [fpeder/espresso](https://github.com/fpeder/espresso):  Efficient forward propagation for BCNNs;
- [Tensorflow lite](https://tensorflow.google.cn/lite):  TensorFlow Lite is an open source deep learning framework for on-device inference;
- [Core ML](https://developer.apple.com/documentation/coreml/reducing_the_size_of_your_core_ml_app):  Reduce the storage used by the Core ML model inside your app bundle;
- [pytorch-tensor-decompositions](https://github.com/jacobgil/pytorch-tensor-decompositions):  PyTorch implementation of [1412.6553] and [1511.06530] tensor decomposition methods for convolutional layers;
- [tensorflow/quantize](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize#quantized-accuracy-results):  contrib tooling for quantization-aware training in TensorFlow, with reported quantized accuracy results;
- [mxnet/quantization](https://github.com/apache/incubator-mxnet/tree/master/example/quantization):  examples of quantizing an FP32 model with Intel® MKL-DNN or cuDNN;
- [TensoRT4-Example](https://github.com/YunYang1994/TensoRT4-Example):  examples of accelerating inference with TensorRT 4;
- [NAF-tensorflow](https://github.com/carpedm20/NAF-tensorflow):  "Continuous Deep Q-Learning with Model-based Acceleration" in TensorFlow;
- [Mayo](https://github.com/deep-fry/mayo) - deep learning framework with fine- and coarse-grained pruning, network slimming, and quantization methods
- [Keras compressor](https://github.com/DwangoMediaVillage/keras_compressor) - compression using low-rank approximations, SVD for matrices, Tucker for tensors.
- [Caffe compressor](https://github.com/yuanyuanli85/CaffeModelCompression) - K-means-based quantization
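
Among the deployment paths above, TensorFlow Lite's post-training quantization is one of the most accessible. A minimal sketch, assuming a trained SavedModel on disk (`saved_model_dir` and the output filename are placeholders):

```python
import tensorflow as tf

# Convert a SavedModel to TFLite with default post-training quantization;
# weights are stored in 8-bit, shrinking the model roughly 4x.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```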

---
## OTHERS
- [bhavanajain/research-paper-summaries](https://github.com/bhavanajain/research-paper-summaries)

---
## REFERENCE
Some of the papers and links were collected from the resources below; they are all awesome:
- [x][papers][sun254/awesome-model-compression-and-acceleration](https://github.com/sun254/awesome-model-compression-and-acceleration)
- [x][papers&projects][dkozlov/awesome-knowledge-distillation](https://github.com/dkozlov/awesome-knowledge-distillation)
- [x][papers&reading list&blogs][memoiry/Awesome-model-compression-and-acceleration](https://github.com/memoiry/Awesome-model-compression-and-acceleration)
- [x][papers][chester256/Model-Compression-Papers](https://github.com/chester256/Model-Compression-Papers)
- [x][papers&blogs][cedrickchee/awesome-ml-model-compression](https://github.com/cedrickchee/awesome-ml-model-compression)
- [x][papers][jnjaby/Model-Compression-Acceleration](https://github.com/jnjaby/Model-Compression-Acceleration)
- [x][papers&codes][htqin/model-quantization](https://github.com/htqin/model-quantization)
- [x][papers][mrgloom/Network-Speed-and-Compression](https://github.com/mrgloom/Network-Speed-and-Compression)
- [others&papers&codes&projects&blogs][guan-yuan/awesome-AutoML-and-Lightweight-Models](https://github.com/guan-yuan/awesome-AutoML-and-Lightweight-Models)
- [papers&codes&projects&blogs][handong1587/cnn-compression-acceleration](https://handong1587.github.io/deep_learning/2015/10/09/cnn-compression-acceleration.html)
- [papers&hardware][ZhishengWang/Embedded-Neural-Network](https://github.com/ZhishengWang/Embedded-Neural-Network)
- [papers&hardware][fengbintu/Neural-Networks-on-Silicon](https://github.com/fengbintu/Neural-Networks-on-Silicon)
- [others&papers][ljk628/ML-Systems](https://github.com/ljk628/ML-Systems)
- [x][papers&codes][juliagusak/model-compression-and-acceleration-progress](https://github.com/juliagusak/model-compression-and-acceleration-progress)
- [x][papers&codes][Hyungjun-K1m/Neural-Network-Compression](https://github.com/Hyungjun-K1m/Neural-Network-Compression)
- [x][papers&codes][he-y/Awesome-Pruning](https://github.com/he-y/Awesome-Pruning)
- [x][papers][lhyfst/knowledge-distillation-papers](https://github.com/lhyfst/knowledge-distillation-papers)
- [x][papers&codes][AojunZhou/Efficient-Deep-Learning](https://github.com/AojunZhou/Efficient-Deep-Learning)
- [intro&papers&projects][Tianyu-Hua/ModelCompression](https://github.com/Tianyu-Hua/ModelCompression)
- [intro&papers&codes&blogs][Ewenwan/MVision/CNN/Deep_Compression](https://github.com/Ewenwan/MVision/tree/master/CNN/Deep_Compression)
- [intro&zhihu&papers][jyhengcoder/Model-Compression](https://github.com/jyhengcoder/Model-Compression)
- [/][intro&papers][mapleam/model-compression-and-acceleration-4-DNN](https://github.com/mapleam/model-compression-and-acceleration-4-DNN)
- [x][little papers][tejalal/awesome-deep-model-compression](https://github.com/tejalal/awesome-deep-model-compression)
- [x][papers&2years ago][Xreki/ModelCompression](https://github.com/Xreki/ModelCompression/tree/master/papers)
- [x][Ref][clhne/model-compression-and-acceleration](https://github.com/clhne/model-compression-and-acceleration)
- [][github topics][topics/model-compression](https://github.com/topics/model-compression)
- [][github topics][topics/pruning](https://github.com/topics/pruning)
- [][github topics][topics/channel-pruning](https://github.com/topics/channel-pruning)
- [][github topics][topics/quantization](https://github.com/topics/quantization)
- [][github topics][topics/knowledge-distillation](https://github.com/topics/knowledge-distillation)
- [][github topics][topics/distillation](https://github.com/topics/distillation)
- [][github topics][topics/efficient-model](https://github.com/topics/efficient-model)
- [][google search][transformer compression](https://www.google.com/search?q=transformer+compression)

Keyword stems used for the grep filter below (deliberately truncated so they match word variants): compress; prun; accelera; distill; binarization; "low rank"; quantization; "efficient comput";
Compression and acceleration techniques for large NLP models: distillation; quantization; low-rank decomposition; pruning; weight/layer sharing (parameter sharing); operator fusion; retrieval augmentation; external memory offloading; MoE; adaptive-depth decoding;
```
# Filter the yearly arXiv listing files (arxiv-20*) using the keyword stems above.
grep -i -f key.txt arxiv-20* > compress20.txt
# Keep only the hits that also match a whitelist of topic keywords.
grep -i -f whitekey_limit.txt compress20.txt > compress_good.txt
# Drop hits that match a blacklist of known false-positive keywords.
grep -i -v -f blackkey.txt compress_good.txt > compress_good_bad.txt
```