Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jacobgil/pytorch-pruning
PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
- Host: GitHub
- URL: https://github.com/jacobgil/pytorch-pruning
- Owner: jacobgil
- Created: 2017-06-23T11:43:52.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2019-07-12T20:39:41.000Z (over 5 years ago)
- Last Synced: 2024-11-20T03:38:05.747Z (23 days ago)
- Topics: deep-learning, pruning, pytorch
- Language: Python
- Homepage:
- Size: 11.7 KB
- Stars: 875
- Watchers: 22
- Forks: 202
- Open Issues: 30
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-pytorch - pytorch-pruning
- awesome-AutoML-and-Lightweight-Models - jacobgil/pytorch-pruning
README
## PyTorch implementation of [1611.06440 Pruning Convolutional Neural Networks for Resource Efficient Inference](https://arxiv.org/abs/1611.06440) ##
This demonstrates pruning a VGG16-based classifier trained on a small dog/cat dataset.
Pruning reduced the CPU runtime by 3x and the model size by 4x.
For more details you can read the [blog post](https://jacobgil.github.io/deeplearning/pruning-deep-learning).
At each pruning step 512 filters are removed from the network.
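The filters to remove are ranked with the paper's first-order Taylor criterion: the cost of removing a filter is estimated from the product of its activation and the gradient of the loss with respect to that activation. Here is a minimal sketch of that ranking, assuming activations and gradients have been captured with hooks; the function name and tensor shapes are illustrative, not this repository's API:

```python
import torch

def taylor_filter_scores(activation: torch.Tensor, gradient: torch.Tensor) -> torch.Tensor:
    """Rank conv filters by the Taylor criterion of arXiv:1611.06440.

    activation, gradient: (batch, channels, height, width) tensors for
    one conv layer's output, recorded during a backward pass.
    Returns one score per filter; the lowest-scoring filters are pruned first.
    """
    # |mean over batch and spatial positions of (activation * gradient)|
    scores = (activation * gradient).mean(dim=(0, 2, 3)).abs()
    # L2-normalize per layer so scores are comparable across layers
    return scores / (scores.norm() + 1e-8)
```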
Usage
-----

This repository uses the PyTorch `ImageFolder` loader, so it assumes that the images are in a different directory for each category:

```
Train
  dogs
  cats
Test
  dogs
  cats
```
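For reference, a minimal sketch of loading such a layout with torchvision's `ImageFolder`; the paths, image size, and batch size here are assumptions, not values fixed by the repository:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# ImageFolder infers the class labels ("dogs", "cats") from the
# subdirectory names under each split's root directory.
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG16's expected input size
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("Train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
```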
The images were taken from [here](https://www.kaggle.com/c/dogs-vs-cats), but you should try training this on your own data and see if it works!
Training:

`python finetune.py --train`

Pruning:

`python finetune.py --prune`

TBD
----

- Change the pruning to be done in one pass. Currently each of the 512 filters is pruned sequentially:
```python
for layer_index, filter_index in prune_targets:
    model = prune_vgg16_conv_layer(model, layer_index, filter_index)
```

This is inefficient, since allocating new layers, especially fully connected layers with lots of parameters, is slow.
In principle this can be done in a single pass; see the sketch after this list.
- Change `prune_vgg16_conv_layer` to support additional architectures.
The most immediate one would be VGG with batch norm.
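A possible shape for the one-pass variant is to group the planned targets per layer, so each layer is reallocated once rather than once per filter; `prune_vgg16_conv_layers` below is hypothetical, not a function in this repository:

```python
from collections import defaultdict

# Group the planned (layer, filter) targets by layer so that each conv
# layer is rebuilt once, instead of once per pruned filter.
targets_per_layer = defaultdict(list)
for layer_index, filter_index in prune_targets:
    targets_per_layer[layer_index].append(filter_index)

for layer_index, filter_indices in targets_per_layer.items():
    # Hypothetical helper: drops all listed filters from the layer
    # (and the matching input channels of the following layer) in one go.
    model = prune_vgg16_conv_layers(model, layer_index, filter_indices)
```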