A directory with some interesting research paper summaries in the field of Deep Learning.

https://github.com/bhavanajain/research-paper-summaries

As part of the CS6480: Topics in Vision and Learning course taught by [Dr Vineeth Balasubramanian](http://www.iith.ac.in/~vineethnb/index.html), I will upload a research paper summary every week. The summaries are mostly related to my research area of knowledge distillation for model compression; a brief sketch of the basic distillation objective appears after the table of contents.

# Table of contents
* Week 1: StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks [[paper](https://arxiv.org/abs/1612.03242)] [[summary](StackGAN_Summary.pdf)]
* Week 2: Distilling the Knowledge in a Neural Network [[paper](https://arxiv.org/abs/1503.02531)] [[summary](Distilling_Knowledge_Neural_Network_Summary.pdf)]
* Week 3: A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning [[paper](http://openaccess.thecvf.com/content_cvpr_2017/papers/Yim_A_Gift_From_CVPR_2017_paper.pdf)] [[summary](knowledge_distillation_summary.pdf)]
* Week 4: Learning Efficient Object Detection Models with Knowledge Distillation [[paper](https://papers.nips.cc/paper/6676-learning-efficient-object-detection-models-with-knowledge-distillation.pdf)] [[summary](Knowledge_Distillation_for_Object_Detection.pdf)]
* Week 5: Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization [[paper](https://arxiv.org/abs/1610.02391)] [[summary](GRAD-CAM.pdf)]
* Week 6: Paying More Attention to Attention: Improving the Performance of (Student) CNNs via Attention Transfer [[paper](https://arxiv.org/abs/1612.03928)] [[summary](attention_transfer.pdf)]
* Week 7: FitNets: Hints for Thin Deep Nets [[paper](https://arxiv.org/abs/1412.6550)] [[summary](FitNets.pdf)]
* Week 8: Deep Model Compression: Distilling Knowledge from Noisy Teachers [[paper](https://arxiv.org/abs/1610.09650)] [[summary](Distilling_knowledge_noisy_teachers.pdf)]
* Week 9: Data-free Knowledge Distillation for Deep Neural Networks [[paper](https://arxiv.org/abs/1710.07535)] [[summary](Data-free_knowlegde_distillation.pdf)]
* Week 10: Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification [[paper](https://arxiv.org/pdf/1709.02929.pdf)] [[summary](knowledge_transfer_face_classification_alignment_verification.pdf)]
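
Since many of the summaries above build on the basic distillation objective introduced in the Week 2 paper, here is a minimal sketch of that loss, assuming PyTorch. The function name `distillation_loss` and the default temperature `T` and mixing weight `alpha` are illustrative placeholders, not settings taken from any of the papers.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Sketch of the soft-target distillation loss from Hinton et al. (Week 2)."""
    # Teacher's softened class distribution at temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    # KL divergence between the softened student and teacher distributions.
    # The T*T factor rescales the soft-target gradients so they stay
    # comparable to the hard-label term as T changes, as noted in the paper.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        soft_targets,
        reduction="batchmean",
    ) * (T * T)
    # Ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example usage with random tensors (batch of 8, 10 classes):
# s, t = torch.randn(8, 10), torch.randn(8, 10)
# y = torch.randint(0, 10, (8,))
# loss = distillation_loss(s, t, y)
```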

Note: The course is over.