Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jaxony/ShuffleNet
ShuffleNet in PyTorch. Based on https://arxiv.org/abs/1707.01083
artificial-intelligence convolution deep-learning neural-network pytorch
- Host: GitHub
- URL: https://github.com/jaxony/ShuffleNet
- Owner: jaxony
- License: mit
- Created: 2017-07-15T06:58:13.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2017-12-20T12:50:24.000Z (almost 7 years ago)
- Last Synced: 2024-08-01T22:50:06.003Z (4 months ago)
- Topics: artificial-intelligence, convolution, deep-learning, neural-network, pytorch
- Language: Python
- Homepage:
- Size: 13.4 MB
- Stars: 290
- Watchers: 10
- Forks: 90
- Open Issues: 6
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-image-classification - unofficial-pytorch : https://github.com/jaxony/ShuffleNet
README
# ShuffleNet in PyTorch
An implementation of `ShuffleNet` in PyTorch. `ShuffleNet` is an efficient convolutional neural network architecture for mobile devices. According to the paper, it outperforms Google's MobileNet by a small margin.

## What is ShuffleNet?
In one sentence, `ShuffleNet` is a ResNet-like model that uses residual blocks (called `ShuffleUnits`), with the main innovation being the use of pointwise (1x1) *group* convolutions in place of ordinary pointwise convolutions, as sketched below.
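For a rough sense of the two ingredients, here is a minimal sketch (not the repo's `model.py`, which has its own implementation; the channel counts are arbitrary) of a pointwise group convolution followed by the channel shuffle the paper uses to mix information across groups:

```python
import torch
import torch.nn as nn

# A pointwise *group* convolution: 1x1 kernel, channels split into 3 groups,
# so each output channel only sees a third of the input channels.
group_conv = nn.Conv2d(12, 24, kernel_size=1, groups=3)

def channel_shuffle(x, groups):
    # The shuffle from the paper: reshape (N, C, H, W) to
    # (N, groups, C // groups, H, W), swap the two channel axes,
    # and flatten back, interleaving channels across groups.
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

x = torch.randn(1, 12, 8, 8)
y = channel_shuffle(group_conv(x), groups=3)
print(y.shape)  # torch.Size([1, 24, 8, 8])
```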
## Usage

Clone the repo:
```bash
git clone https://github.com/jaxony/ShuffleNet.git
```

Use the model defined in `model.py`:
```python
from model import ShuffleNet

# running on MNIST
net = ShuffleNet(num_classes=10, in_channels=1)
```
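As a quick sanity check (an illustrative sketch, not part of the repo), the instantiated network can be run on a dummy batch. The 224x224 input size is an assumption carried over from ImageNet; the network downsamples heavily, so raw 28x28 MNIST images may be too small without upscaling:

```python
import torch

from model import ShuffleNet

net = ShuffleNet(num_classes=10, in_channels=1)
net.eval()

# Dummy grayscale batch at an assumed 224x224 resolution.
x = torch.randn(4, 1, 224, 224)
with torch.no_grad():
    logits = net(x)

print(logits.shape)  # expected: torch.Size([4, 10])
```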
## Performance

Trained on ImageNet (using the [PyTorch ImageNet example][imagenet]) with `groups=3` and no channel multiplier. On the test set, the model reached 62.2% top-1 and 84.2% top-5 accuracy. Unfortunately, this isn't directly comparable to Table 5 of the paper, because the authors don't run a network with these settings, but it falls somewhere between the network with `groups=3` and half the number of channels (42.8% top-1) and the network with the same number of channels but `groups=8` (32.4% top-1). The pretrained state dictionary can be found [here][tar], in the [following format](https://github.com/pytorch/examples/blob/master/imagenet/main.py#L165-L171):

```
{
'epoch': epoch + 1,
'arch': args.arch,
'state_dict': model.state_dict(),
'best_prec1': best_prec1,
'optimizer' : optimizer.state_dict()
}
```
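A checkpoint in that format could be restored along these lines; this is a hedged sketch, where the filename `shufflenet.pth.tar` is a placeholder and the `groups=3` constructor argument is assumed to match the training settings described above. The ImageNet example wraps the model in `DataParallel`, so saved keys may carry a `module.` prefix that needs stripping before loading into a bare model:

```python
import torch

from model import ShuffleNet

# Placeholder filename: point this at the downloaded checkpoint.
checkpoint = torch.load('shufflenet.pth.tar', map_location='cpu')

# Strip the 'module.' prefix left by DataParallel, if present.
state_dict = {k.replace('module.', '', 1): v
              for k, v in checkpoint['state_dict'].items()}

net = ShuffleNet(groups=3, num_classes=1000)  # assumed constructor args
net.load_state_dict(state_dict)
net.eval()

print(checkpoint['epoch'], checkpoint['best_prec1'])
```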
Note: the model was trained with the default ImageNet settings, which are actually different from the training regime described in the paper. A rerun with those settings (and `groups=8`) is pending.

[tar]: https://drive.google.com/file/d/12oGJsyDgp51LhQ7FOzKxF9nBsutLkE6V/view?usp=sharing
[imagenet]: https://github.com/pytorch/examples/tree/master/imagenet