Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/sseung0703/zero-shot_knowledge_distillation
Zero-Shot Knowledge Distillation in Deep Networks in ICML2019
- Host: GitHub
- URL: https://github.com/sseung0703/zero-shot_knowledge_distillation
- Owner: sseung0703
- Created: 2019-06-18T06:10:38.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2019-06-20T15:17:46.000Z (over 5 years ago)
- Last Synced: 2024-10-03T12:16:16.343Z (about 2 months ago)
- Topics: knowledge-distillation, tensorflow, zero-shot-learning
- Language: Python
- Homepage:
- Size: 1.5 MB
- Stars: 49
- Watchers: 3
- Forks: 9
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Zero-shot Knowledge Distillation
This is the code for [Zero-shot Knowledge Distillation](https://arxiv.org/abs/1905.08114), which was accepted as an oral paper at ICML 2019.
Note that it is not exactly the same as the authors' algorithm, because the paper lacks some implementation details,
so I had to estimate some of the hyper-parameters and implementation details myself.
I got better results than the paper's in the no-augmentation case, but I failed to improve performance with augmentation.

## Abstract
ZSKD is a knowledge distillation algorithm that does not require real data. Instead, it generates "Data Impression" samples that contain the teacher network's knowledge and uses them to train the student network, so we do not have to worry about privacy or safety of the original data (a sketch of how the sample targets can be generated follows the list below). The pros and cons, as I see them, are as follows.

- Pros
  - Simple but powerful algorithm.
  - Data-free training algorithm.
- Cons
  - Lack of experimental results on large datasets such as ImageNet.
  - Focused only on the classification task.
  - The experimental setup is not fair: augmentation is used for ZSKD only, so I think the performance of all the other methods could be increased as well.
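As a rough illustration of the abstract above: in ZSKD the soft targets for the Data Impressions are sampled from Dirichlet distributions whose concentration parameters come from the class-similarity ("concentration") matrix of the teacher's final-layer weights. The sketch below is only my reading of that step; the array shapes, the min-max normalization, and the `beta` scaling are assumptions, not this repository's exact code.

```python
import numpy as np

def class_similarity(W):
    """Class-similarity ("concentration") matrix from the teacher's
    final-layer weights. W has shape (feature_dim, num_classes); column k
    is the weight vector of class k. Shapes and normalization are
    illustrative assumptions."""
    Wn = W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-8)  # unit class vectors
    C = Wn.T @ Wn                                               # cosine similarity, (K, K)
    # Min-max normalize each row so it can serve as a Dirichlet concentration.
    C = (C - C.min(axis=1, keepdims=True)) / \
        (C.max(axis=1, keepdims=True) - C.min(axis=1, keepdims=True) + 1e-8)
    return C

def sample_soft_targets(C, target_class, beta=1.0, n_samples=100, rng=None):
    """Sample softmax targets for one class from Dir(beta * similarity row)."""
    rng = rng or np.random.default_rng()
    alpha = beta * C[target_class] + 1e-8        # Dirichlet needs alpha > 0
    return rng.dirichlet(alpha, size=n_samples)  # shape (n_samples, num_classes)
```

Each sampled row then serves as the target distribution for one generated sample; the generation step itself is sketched in the "How to run" section below.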
## Requirements

This code requires:
- Tensorflow 1.13
- Python 3.5
- Opencv 3.2.0
- Scipy 1.1.0
- Numpy 1.15.0
- Matplotlib 3.0.0

The exact versions are not important, except for Tensorflow, I think.
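If you want to double-check your environment, a quick sanity check like the following is enough (this is just a convenience snippet, not part of the repository):

```python
# The code is written in TF1 style (graphs/sessions), so only the
# TensorFlow version really matters; the other packages are flexible.
import tensorflow as tf

print(tf.__version__)                   # expected: 1.13.x
assert tf.__version__.startswith("1.13"), "This code targets TensorFlow 1.13"
```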
## How to run
The zero-shot knowledge distillation procedure consists of:
- Training the teacher network → train_w_distill.py
- Generating data impression samples → Data_Impressions.py (sketched below)
- Training the student network on those samples → train_w_distill.py

If you just want to follow the authors' configuration, run "autotrain.py".
Once you have trained the teacher network and generated some samples, you can visualize the "Concentration matrix" and the generated samples with "visualization.py".
Below are images and tables with my experiment results.

## Experiment results
I tested everything by running the autotrain script, and every numerical value is the mean of 5 runs.

| Rate | Teacher | Student | Soft-logits | ZSKD |
| --:| :-: | :-: | :-: | :-: |
| 100| 99.06 | 98.52 | - | - |
| 40| - | - | 89.78 |97.92|
| 25| - | - | 91.64 |97.35|
| 10| - | - | 93.08 |96.69|
| 5| - | - | 88.43 |91.31|
| 1| - | - | 70.02 |92.15|
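For reference, the Soft-logits column is the usual soft-label knowledge-distillation baseline: the student is trained on the teacher's temperature-softened outputs using the available fraction of real data. A minimal sketch of that loss (the temperature value here is an assumption, not the repository's setting):

```python
import tensorflow as tf

def soft_logits_loss(student_logits, teacher_logits, temperature=3.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions (Hinton-style distillation), scaled by T^2 to keep the
    gradient magnitude comparable across temperatures."""
    soft_teacher = tf.stop_gradient(tf.nn.softmax(teacher_logits / temperature))
    return temperature ** 2 * tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(
            labels=soft_teacher, logits=student_logits / temperature))
```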
Concentration matrix of the teacher network

Examples of the generated samples