Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/daniil-osokin/fully-learnable-group-convolution.pytorch
Implementation of "Fully Learnable Group Convolution for Acceleration of Deep Neural Networks", CVPR'19
- Host: GitHub
- URL: https://github.com/daniil-osokin/fully-learnable-group-convolution.pytorch
- Owner: Daniil-Osokin
- Created: 2019-06-25T20:11:22.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2020-03-23T15:12:05.000Z (over 4 years ago)
- Last Synced: 2024-11-02T12:28:06.613Z (11 days ago)
- Topics: acceleration, cifar-10, convolutional-neural-networks, cvpr, deep-learning, group-convolution, mobilenetv2
- Language: Python
- Size: 383 KB
- Stars: 33
- Watchers: 5
- Forks: 10
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Code for the paper "Fully Learnable Group Convolution for Acceleration of Deep Neural Networks", CVPR 2019
## Description
Implementation of the paper ["Fully Learnable Group Convolution for Acceleration of Deep Neural Networks"](https://arxiv.org/pdf/1904.00346.pdf): given a group number, the method learns which input channels form each group and which filters work with each group. This is achieved with the proposed fully learnable group convolution (FLGC) layer.
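For intuition, below is a minimal training-time sketch of such a layer. It is a hypothetical simplification, not the repository's exact implementation: the class and parameter names (`FLGCSketch`, `in_group_logits`, `out_group_logits`) are invented here, and the hard inference-time regrouping is only described in a comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FLGCSketch(nn.Module):
    """Soft fully learnable group convolution (illustrative sketch).

    Every input channel and every filter holds learnable logits over the
    G groups; softmax turns them into soft group assignments that mask a
    dense convolution during training."""

    def __init__(self, in_channels, out_channels, kernel_size, groups,
                 stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        # Learnable group-assignment logits for channels and filters.
        self.in_group_logits = nn.Parameter(torch.randn(in_channels, groups))
        self.out_group_logits = nn.Parameter(torch.randn(out_channels, groups))

    def forward(self, x):
        s_in = F.softmax(self.in_group_logits, dim=1)    # (C_in, G)
        s_out = F.softmax(self.out_group_logits, dim=1)  # (C_out, G)
        # mask[o, i]: how strongly filter o and channel i share a group.
        mask = s_out @ s_in.t()                          # (C_out, C_in)
        w = self.conv.weight * mask[:, :, None, None]
        # For deployment, the assignments would be binarized (argmax per
        # row) and channels permuted so a real grouped conv can be used.
        return F.conv2d(x, w, stride=self.conv.stride,
                        padding=self.conv.padding)
```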
## Results on CIFAR-10
| Method | Number of groups | MFLOPS | Accuracy, % | Model |
|:-------------------------|:-----------------|:-------|:------------|:------------|
| MobileNet V2 | N/A | 94.9 |94.43 | N/A |
| MobileNet V2-FLGC (paper)| 8 | 76 |93.09 | N/A |
| MobileNet V2-FLGC (ours) | 8 | 62.6 | 93.7 | [Google Drive](https://drive.google.com/file/d/1RXFS9VQmcXvW7698UI4lWmDWRlTEAqjt/view?usp=sharing) |

## Implementation Notes
An important note (and a major drawback for practical usage): there is no built-in layer that supports grouping with a custom (non-uniform) split of input channels and filters. So, to actually get faster inference, one has to implement such a layer; a sketch of one possible fallback is shown below.
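For illustration, here is one hypothetical way to run a convolution with unequal group sizes in plain PyTorch, by gathering each group's channels, convolving them densely, and scattering the outputs back. The function name and arguments are invented for this sketch; note that the Python-level loop launches one kernel per group, so it does not deliver the fused-kernel speedup referred to above.

```python
import torch
import torch.nn.functional as F

def nonuniform_group_conv(x, weights, in_idx, out_idx, out_channels):
    """Grouped convolution with unequal group sizes (illustrative sketch).

    x        : (N, C_in, H, W) input
    weights  : list of per-group weights, each (C_out_g, C_in_g, k, k)
    in_idx   : list of LongTensors with each group's input-channel ids
    out_idx  : list of LongTensors with each group's filter ids
    Assumes stride 1 and odd kernel sizes ('same' padding).
    """
    n, _, h, w = x.shape
    y = x.new_zeros(n, out_channels, h, w)
    for w_g, ii, oi in zip(weights, in_idx, out_idx):
        x_g = x.index_select(1, ii)                       # gather the group
        y[:, oi] = F.conv2d(x_g, w_g, padding=w_g.shape[-1] // 2)
    return y

# Example: 8 input channels split 5/3, 6 filters split 4/2.
x = torch.randn(2, 8, 16, 16)
in_idx = [torch.tensor([0, 2, 4, 6, 7]), torch.tensor([1, 3, 5])]
out_idx = [torch.tensor([0, 1, 2, 3]), torch.tensor([4, 5])]
weights = [torch.randn(4, 5, 3, 3), torch.randn(2, 3, 3, 3)]
y = nonuniform_group_conv(x, weights, in_idx, out_idx, out_channels=6)
```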
The follow-up paper ["Differentiable Learning-to-Group Channels via Groupable Convolutional Neural Networks"](https://arxiv.org/pdf/1908.05867.pdf) extends this work by making the number of groups learnable as well. Despite the improvement in theoretical model complexity, the lack of such an inference-optimized layer makes the approach hard to apply in practice.
### Training on CIFAR-10 with standard and fully learnable group convolutions
We obtained similar accuracy with standard and fully learnable group convolutions for MobileNet V2. Accuracy varies by up to 0.5% from run to run, which hides any benefit of fully learnable grouping in these experiments. Experiments at a larger scale (e.g., on ImageNet) might show the full potential of this approach.
### Model Architecture
Since there is no official MobileNet V2 for CIFAR-10 (and the authors provide no pre-trained model), there may be differences between network architectures. We use the one from [pytorch-cifar](https://github.com/kuangliu/pytorch-cifar).
### FLOPS measurement
FLOPS were measured with [ptflops](https://github.com/sovrasov/flops-counter.pytorch).
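For reference, a minimal usage sketch of ptflops; torchvision's stock MobileNet V2 stands in here for the repository's CIFAR-10 variant, which is an assumption of this example:

```python
import torchvision.models as models
from ptflops import get_model_complexity_info

# Stand-in model; the repository uses its own MobileNet V2 variant.
model = models.mobilenet_v2()

# (3, 32, 32) matches the CIFAR-10 input resolution.
macs, params = get_model_complexity_info(
    model, (3, 32, 32), as_strings=True, print_per_layer_stat=False)
print(f'MACs: {macs}  Params: {params}')
```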