Awesome Knowledge Distillation
https://github.com/eric-erki/awesome-knowledge-distillation
- Deep Face Recognition Model Compression via Knowledge Transfer and Distillation
- Relational Knowledge Distillation
- Graph-based Knowledge Distillation by Multi-head Attention Network
- Knowledge Adaptation for Efficient Semantic Segmentation
- Structured Knowledge Distillation for Semantic Segmentation
- Fast Human Pose Estimation
- MEAL: Multi-Model Ensemble via Adversarial Learning
- Learning Lightweight Lane Detection CNNs by Self Attention Distillation
- Improved Knowledge Distillation via Teacher Assistant: Bridging the Gap Between Student and Teacher - Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Hassan Ghasemzadeh, 2019
- A Comprehensive Overhaul of Feature Distillation
- Contrastive Representation Distillation
- Learning Transferable Architectures for Scalable Image Recognition
- Revisiting knowledge transfer for training object class detectors
- Knowledge Distillation via Route Constrained Optimization
- Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net
- Data Distillation: Towards Omni-Supervised Learning
- Learning from Noisy Labels with Distillation - Jia Li, 2017
- Interpreting Deep Classifiers by Visual Distillation of Dark Knowledge
- Similarity-Preserving Knowledge Distillation
- Distilling Object Detectors with Fine-grained Feature Imitation
- Knowledge Squeezed Adversarial Network Compression
- Stagewise Knowledge Distillation
- Knowledge Distillation from Internal Representations
- Knowledge Flow: Improve Upon Your Teachers - Iou-Jen Liu, Jian Peng, Alexander G. Schwing, 2019
- Graph Representation Learning via Multi-task Knowledge Distillation
- Deep geometric knowledge distillation with graphs
- Correlation Congruence for Knowledge Distillation
- Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
- Transparent Model Distillation
- Multimodal Recurrent Neural Networks with Information Transfer Layers for Indoor Scene Labeling - Lap-Pui Chau, Gang Wang, 2018
- Born Again Neural Networks
- YASENN: Explaining Neural Networks via Partitioning Activation Sequences
- Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
- Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
- Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection
- Learning to Steer by Mimicking Features from Heterogeneous Auxiliary Networks
- A Generalized Meta-loss function for regression and classification using privileged information
- BAM! Born-Again Multi-Task Networks for Natural Language Understanding - Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le, 2019
- Self-Knowledge Distillation in Natural Language Processing
- Rethinking Data Augmentation: Self-Supervision and Self-Distillation
- MSD: Multi-Self-Distillation Learning via Multi-classifiers within Deep Neural Networks
- Efficient Video Classification Using Fewer Frames
- Retaining Privileged Information for Multi-Task Learning - Li-wei Lehman
- Data-Free Learning of Student Networks
- Positive-Unlabeled Compression on the Cloud
- Dark knowledge
- Model Compression
- Large scale distributed neural network training through online distillation
- Learning Metrics from Teachers: Compact Networks for Image Embedding
- On the Efficacy of Knowledge Distillation
- Revisit Knowledge Distillation: a Teacher-free Framework
- Ensemble Distribution Distillation
- Improving Generalization and Robustness with Noisy Collaboration in Knowledge Distillation
- Self-training with Noisy Student improves ImageNet classification - Thang Luong, Quoc V. Le, 2019
- Variational Student: Learning Compact and Sparser Networks in Knowledge Distillation Framework
- Preparing Lessons: Improve Knowledge Distillation with Better Supervision
- Variational Information Distillation for Knowledge Transfer
- Neural Network Ensembles
- Combining labeled and unlabeled data with co-training
- Learning with Pseudo-Ensembles
- Heterogeneous Knowledge Transfer in Video Emotion Recognition, Attribution and Summarization - Yu-Gang Jiang, Boyang Li, Leonid Sigal, 2015
- Distilling Model Knowledge
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
- Do deep convolutional nets really need to be deep and convolutional?
- MobileID: Face Model Compression by Distilling Knowledge from Neurons
- Recurrent Neural Network Training with Dark Knowledge Transfer
- Adapting Models to Signal Degradation using Distillation - Jong-Chyi Su, Subhransu Maji, 2016
- Data-Free Knowledge Distillation For Deep Neural Networks
- Local Affine Approximators for Improving Knowledge Transfer
- Cross Modal Distillation for Supervision Transfer
- Deep Model Compression: Distilling Knowledge from Noisy Teachers
- DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer
- Unifying distillation and privileged information - David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik, 2015
- Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
- FitNets: Hints for Thin Deep Nets
- Knowledge Distillation for Small-footprint Highway Networks
- Sequence-Level Knowledge Distillation - [papernotes](https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/seq-knowledge-distillation.md), Yoon Kim, Alexander M. Rush, 2016
- Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
- Like What You Like: Knowledge Distill via Neuron Selectivity Transfer
- Learning Loss for Knowledge Distillation with Conditional Adversarial Networks - Yen-Chang Hsu, Jiawei Huang, 2017
- Knowledge Projection for Deep Neural Networks
- Moonshine: Distilling with Cheap Convolutions
- Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification
- Efficient Neural Architecture Search via Parameters Sharing
- Defensive Collaborative Multi-task Training - Defending against Adversarial Attack towards Deep Neural Networks
- Deep Co-Training for Semi-Supervised Image Recognition
- Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
- Neural Network Ensembles, Cross Validation, and Active Learning
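
Many of the papers above build on the recipe popularized under names like Dark knowledge and Model Compression: a compact student is trained to match the temperature-softened output distribution of a larger teacher, alongside the ground-truth labels. The snippet below is a minimal, illustrative PyTorch sketch of that soft-target loss; the temperature `T` and mixing weight `alpha` are assumed example hyperparameters, not values taken from any listed paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Weighted sum of a soft-target KL term and the usual hard-label cross-entropy."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)      # softened teacher probabilities
    log_student = F.log_softmax(student_logits / T, dim=1)   # softened student log-probabilities
    # KL between temperature-softened distributions; the T^2 factor keeps its
    # gradient scale comparable to the hard-label term.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```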
- MXNet
- PyTorch
- Theano
- Tensorflow
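
Some of the listed papers, notably FitNets: Hints for Thin Deep Nets, instead supervise intermediate feature maps rather than logits. The sketch below, written in PyTorch (one of the frameworks listed above), shows one way such a hint loss can look; the 1x1 regressor and the channel sizes in the usage lines are assumptions made for illustration, not details of any particular implementation.

```python
import torch
import torch.nn as nn

class HintLoss(nn.Module):
    """FitNets-style hint: regress a student feature map onto a teacher feature map."""

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        # 1x1 conv maps student features into the teacher's channel dimension.
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)
        self.mse = nn.MSELoss()

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        # Teacher features act as fixed targets, so gradients stop at the teacher.
        return self.mse(self.regressor(student_feat), teacher_feat.detach())

# Example usage with made-up shapes (batch of 8, 32x32 feature maps).
hint = HintLoss(student_channels=64, teacher_channels=256)
loss = hint(torch.randn(8, 64, 32, 32), torch.randn(8, 256, 32, 32))
```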