Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
https://github.com/ethanhe42/channel-pruning
acceleration channel-pruning deep-neural-networks image-classification image-recognition model-compression object-detection
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
- Host: GitHub
- URL: https://github.com/ethanhe42/channel-pruning
- Owner: ethanhe42
- License: MIT
- Created: 2017-08-21T09:19:53.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2024-05-02T06:37:43.000Z (7 months ago)
- Last Synced: 2024-10-29T23:23:08.214Z (about 1 month ago)
- Topics: acceleration, channel-pruning, deep-neural-networks, image-classification, image-recognition, model-compression, object-detection
- Language: Python
- Homepage: https://arxiv.org/abs/1707.06168
- Size: 548 KB
- Stars: 1,078
- Watchers: 47
- Forks: 310
- Open Issues: 21
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
- awesome-AutoML-and-Lightweight-Models - yihui-he/channel-pruning
README
# Channel Pruning for Accelerating Very Deep Neural Networks
![image](https://github.com/ethanhe42/channel-pruning/assets/10027339/8a7793a8-927f-49bc-84cf-0d33f76d3ae4)
[GitHub - yihui-he/channel-pruning: Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)](https://github.com/yihui-he/channel-pruning)
[Channel Pruning for Accelerating Very Deep Neural Networks](https://arxiv.org/abs/1707.06168)
**ICCV 2017**, by [Yihui He](http://yihui-he.github.io/), [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ&hl=en&oi=ao) and [Jian Sun](http://jiansun.org/)
Please have a look at our new works on compressing deep models:
- [AMC: AutoML for Model Compression and Acceleration on Mobile Devices](http://openaccess.thecvf.com/content_ECCV_2018/html/Yihui_He_AMC_Automated_Model_ECCV_2018_paper.html) **ECCV'18**, which combines channel pruning and reinforcement learning to further accelerate CNNs. [Code](https://github.com/mit-han-lab/amc-release) and [models](https://github.com/mit-han-lab/amc-compressed-models) are available!
- [AddressNet: Shift-Based Primitives for Efficient Convolutional Neural Networks](https://arxiv.org/abs/1809.08458) **WACV'19**, which proposes a family of efficient networks based on the shift operation.
- [MoBiNet: A Mobile Binary Network for Image Classification](https://arxiv.org/abs/1907.12629) **WACV'20**, binarized MobileNets.

In this repository, we released code for the following models:
| Model | Speed-up | Accuracy |
| --- | --- | --- |
| [channel_pruning_5x](https://github.com/yihui-he/channel-pruning/releases/tag/channel_pruning_5x) | 5x | 88.1 (Top-5), 67.8 (Top-1) |
| [VGG-16_3C4x](https://github.com/yihui-he/channel-pruning/releases/tag/VGG-16_3C4x) | 4x | 89.9 (Top-5), 70.6 (Top-1) |
| [ResNet-50-2X](https://github.com/yihui-he/channel-pruning/releases/tag/ResNet-50-2X) | 2x | 90.8 (Top-5), 72.3 (Top-1) |
| [faster-RCNN-2X4X](https://github.com/yihui-he/channel-pruning/releases/tag/faster-RCNN-2X4X) | 2x | 36.7 (mAP@.5:.05:.95) |
| [faster-RCNN-2X4X](https://github.com/yihui-he/channel-pruning/releases/tag/faster-RCNN-2X4X) | 4x | 35.1 (mAP@.5:.05:.95) |

The 3C method combines spatial decomposition ([Speeding up Convolutional Neural Networks with Low Rank Expansions](https://arxiv.org/abs/1405.3866)) and channel decomposition ([Accelerating Very Deep Convolutional Networks for Classification and Detection](https://arxiv.org/abs/1505.06798)) (mentioned in Section 4.1.2 of the paper).
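For background, the paper's per-layer pruning alternates two steps: LASSO regression selects which input channels to keep, and least squares reconstructs the layer outputs from the surviving channels. Roughly (see Section 3.1 of the paper for the exact formulation), with $N$ samples, $c$ input channels, per-channel inputs $X_i$, weights $W_i$, and original outputs $Y$:

$$
\min_{\beta,\,W}\ \frac{1}{2N}\left\lVert Y-\sum_{i=1}^{c}\beta_i X_i W_i^\top\right\rVert_F^2+\lambda\lVert\beta\rVert_1\quad\text{s.t.}\ \lVert\beta\rVert_0\le c'
$$

Channels whose coefficient $\beta_i$ is driven to zero are pruned.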
### Citation
If you find the code useful in your research, please consider citing:
```
@InProceedings{He_2017_ICCV,
author = {He, Yihui and Zhang, Xiangyu and Sun, Jian},
title = {Channel Pruning for Accelerating Very Deep Neural Networks},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}
```

### Requirements
1. Python 3 packages you might not have: `scipy`, `sklearn`, `easydict`; install them with `sudo pip3 install scipy sklearn easydict`.
2. For finetuning with a batch size of 128, you need 4 GPUs (~11 GB of memory).

### Installation (sufficient for the demo)
1. Clone the repository
```bash
# Make sure to clone with --recursive
git clone --recursive https://github.com/yihui-he/channel-pruning.git
```
2. Build [my Caffe](https://github.com/yihui-he/caffe-pro) fork (which supports bicubic interpolation and resizing the image's shorter side to 256, then cropping to 224x224)
```bash
cd caffe
# If you're experienced with Caffe and have all of the requirements installed, then simply do:
make all -j8 && make pycaffe
# Or follow the Caffe installation instructions here:
# http://caffe.berkeleyvision.org/installation.html
# you might need to add pycaffe to PYTHONPATH if another Caffe is already installed
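# e.g. (the path is an assumption; adjust it to your checkout):
# export PYTHONPATH=/path/to/channel-pruning/caffe/python:$PYTHONPATH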
```
3. Download the ImageNet classification dataset: http://www.image-net.org/download-images
4. Specify the ImageNet `source` path in `temp/vgg.prototxt` (lines 12 and 36).

### Channel Pruning
*For fast testing, you can directly download a pruned model; see the [next section](#pruned-models-for-download).*

First, download the original VGG-16 model from http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel and move it to `temp/vgg.caffemodel` (or create a softlink instead; see the sketch below).
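A minimal sketch, assuming the weights were downloaded into the repo root:
```bash
# move the downloaded weights into place
mv VGG_ILSVRC_16_layers.caffemodel temp/vgg.caffemodel
# or keep the original file where it is and symlink it instead:
# ln -s /absolute/path/to/VGG_ILSVRC_16_layers.caffemodel temp/vgg.caffemodel
```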
1. Start Channel Pruning
```bash
python3 train.py -action c3 -caffe [GPU0]
# or log it with ./run.sh python3 train.py -action c3 -caffe [GPU0]
# replace [GPU0] with actual GPU device like 0,1 or 2
```
2. Combine some factorized layers for further compression, and calculate the acceleration ratio. Replace the ImageData layer of `temp/cb_3c_3C4x_mem_bn_vgg.prototxt` with [`temp/vgg.prototxt`'s](https://github.com/yihui-he/channel-pruning/blob/master/temp/vgg.prototxt#L1-L49), then run:
```bash
./combine.sh | xargs ./calflop.sh
```
3. Finetuning
```bash
caffe train -solver temp/solver.prototxt -weights temp/cb_3c_vgg.caffemodel -gpu [GPU0,GPU1,GPU2,GPU3]
# replace [GPU0,GPU1,GPU2,GPU3] with actual GPU device like 0,1,2,3
```
4. Testing
Though testing is done while finetuning, you can test anytime with:
```bash
caffe test -model path/to/prototxt -weights path/to/caffemodel -iterations 5000 -gpu [GPU0]
# replace [GPU0] with actual GPU device like 0,1 or 2
```
### Pruned models (for download)
For fast testing, you can directly download the pruned models from the [releases](https://github.com/yihui-he/channel-pruning/releases): [VGG-16 3C 4X](https://github.com/yihui-he/channel-pruning/releases/tag/VGG-16_3C4x), [VGG-16 5X](https://github.com/yihui-he/channel-pruning/releases/tag/channel_pruning_5x), [ResNet-50 2X](https://github.com/yihui-he/channel-pruning/releases/tag/ResNet-50-2X). Alternatively, use the Baidu Yun [download link](https://pan.baidu.com/s/1c2evwTa).
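A hypothetical direct download of the VGG-16 3C 4X release (the asset names are assumed to match the filenames used in the test command below; check the release page for the actual ones):
```bash
wget https://github.com/yihui-he/channel-pruning/releases/download/VGG-16_3C4x/channel_pruning_VGG-16_3C4x.prototxt
wget https://github.com/yihui-he/channel-pruning/releases/download/VGG-16_3C4x/channel_pruning_VGG-16_3C4x.caffemodel
```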
Test with:
```bash
caffe test -model channel_pruning_VGG-16_3C4x.prototxt -weights channel_pruning_VGG-16_3C4x.caffemodel -iterations 5000 -gpu [GPU0]
# replace [GPU0] with actual GPU device like 0,1 or 2
```

### Pruning faster RCNN
For fast testing, you can directly download the pruned models from the [release](https://github.com/yihui-he/channel-pruning/releases/tag/faster-RCNN-2X4X). Or you can:
1. Clone my py-faster-rcnn repo: https://github.com/yihui-he/py-faster-rcnn
2. Use the [pruned models](https://github.com/yihui-he/channel-pruning/releases/tag/faster-RCNN-2X4X) from this repo to train faster RCNN 2X and 4X; solver prototxts are in https://github.com/yihui-he/py-faster-rcnn/tree/master/models/pascal_voc

### FAQ
You can find answers to some commonly asked questions in our [GitHub wiki](https://github.com/yihui-he/channel-pruning/wiki), or just create a [new issue](https://github.com/yihui-he/channel-pruning/issues/new).