{"id":20936147,"url":"https://github.com/he-y/soft-filter-pruning","last_synced_at":"2025-04-06T22:07:46.908Z","repository":{"id":49387364,"uuid":"130997001","full_name":"he-y/soft-filter-pruning","owner":"he-y","description":"Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks","archived":false,"fork":false,"pushed_at":"2019-10-02T11:06:58.000Z","size":61,"stargazers_count":379,"open_issues_count":15,"forks_count":73,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-03-30T20:11:55.017Z","etag":null,"topics":["model-compression","pruning","pytorch"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/1808.06866","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/he-y.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-04-25T11:35:28.000Z","updated_at":"2025-02-27T10:07:02.000Z","dependencies_parsed_at":"2022-08-27T08:00:48.164Z","dependency_job_id":null,"html_url":"https://github.com/he-y/soft-filter-pruning","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/he-y%2Fsoft-filter-pruning","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/he-y%2Fsoft-filter-pruning/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/he-y%2Fsoft-filter-pruning/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/he-y%2Fsoft-filter-pruning/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/he-y","download_url":"https://codeload.github.com/he-y/soft-filter-p
runing/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247557767,"owners_count":20958047,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["model-compression","pruning","pytorch"],"created_at":"2024-11-18T22:18:06.784Z","updated_at":"2025-04-06T22:07:46.880Z","avatar_url":"https://github.com/he-y.png","language":"Python","readme":"# Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks\nThe PyTorch implementation for [our IJCAI 2018 paper](https://www.ijcai.org/proceedings/2018/0309.pdf).\nThis implementation is based on [ResNeXt-DenseNet](https://github.com/D-X-Y/ResNeXt-DenseNet).\n\n## Updates:\nThe journal version of this work [Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks](https://ieeexplore.ieee.org/document/8816678) is available now, code coming soon.\n\n\n## Table of Contents\n\n- [Requirements](#requirements)\n- [Models and log files](#models-and-log-files)\n- [Training ImageNet](#training-imagenet)\n  - [Usage of Pruning Training](#usage-of-pruning-training)\n  - [Usage of Initial with Pruned Model](#usage-of-initial-with-pruned-model)\n  - [Usage of Normal Training](#usage-of-normal-training)\n  - [Inference the pruned model with zeros](#inference-the-pruned-model-with-zeros)\n  - [Inference the pruned model without zeros](#inference-the-pruned-model-without-zeros)\n  - [Get small model](#get-small-model)\n  - [Scripts to reproduce the results in our paper](#scripts-to-reproduce-the-results-in-our-paper)\n- [Training Cifar-10](#training-cifar-10)\n- 
[Notes](#notes)\n  - [Torchvision Version](#torchvision-version)\n  - [Why use 100 epochs for training](#why-use-100-epochs-for-training)\n  - [Process of ImageNet dataset](#process-of-imagenet-dataset)\n  - [FLOPs Calculation](#flops-calculation)\n- [Citation](#citation)\n\n\n## Requirements\n- Python 3.6\n- PyTorch 0.3.1\n- TorchVision 0.2.0\n\n## Models and log files\nThe trained models with log files can be found in [Google Drive](https://drive.google.com/drive/folders/1lPhInbd7v3HjK9uOPW_VNjGWWm7kyS8e?usp=sharing).\n\nThe pruned model without zeros: [Release page](https://github.com/he-y/soft-filter-pruning/releases/tag/ResNet50_pruned).\n\n## Training ImageNet\n\n#### Usage of Pruning Training\nWe train each model from scratch by default. If you wish to start from a pre-trained model, please use the options `--use_pretrain --lr 0.01`.\n\nRun pruning training for ResNet (depths 152, 101, 50, 34, 18) on ImageNet\n(`layer_begin` and `layer_end` are the indices of the first and last conv layers; `layer_inter` selects conv layers rather than BN layers):\n```bash\npython pruning_train.py -a resnet152 --save_dir ./snapshots/resnet152-rate-0.7 --rate 0.7 --layer_begin 0 --layer_end 462 --layer_inter 3  /path/to/Imagenet2012\n\npython pruning_train.py -a resnet101 --save_dir ./snapshots/resnet101-rate-0.7 --rate 0.7 --layer_begin 0 --layer_end 309 --layer_inter 3  /path/to/Imagenet2012\n\npython pruning_train.py -a resnet50  --save_dir ./snapshots/resnet50-rate-0.7 --rate 0.7 --layer_begin 0 --layer_end 156 --layer_inter 3  /path/to/Imagenet2012\n\npython pruning_train.py -a resnet34  --save_dir ./snapshots/resnet34-rate-0.7 --rate 0.7 --layer_begin 0 --layer_end 105 --layer_inter 3  /path/to/Imagenet2012\n\npython pruning_train.py -a resnet18  --save_dir ./snapshots/resnet18-rate-0.7 --rate 0.7 --layer_begin 0 --layer_end 57 --layer_inter 3  /path/to/Imagenet2012\n```\n\n#### Usage of Initial with Pruned Model\nWe use the unpruned model as the initial model by default. 
If you wish to initialize with a pruned model, please use the options `--use_sparse --sparse path_to_pruned_model`.\n\n#### Usage of Normal Training\nRun ResNet-50 training (100 epochs):\n```bash\npython original_train.py -a resnet50 --save_dir ./snapshots/resnet50-baseline  /path/to/Imagenet2012 --workers 36\n```\n\n#### Inference the pruned model with zeros\n```bash\nsh scripts/inference_resnet.sh\n```\n\n#### Inference the pruned model without zeros\n```bash\nsh scripts/infer_pruned.sh\n```\nThe pruned model without zeros can be downloaded from the [Release page](https://github.com/he-y/soft-filter-pruning/releases/tag/ResNet50_pruned).\n\n#### Get small model\nGet the model without zeros.\nIn the script below, change the resume path to the pruned model with zeros; both the big model (with zeros) and the small model (without zeros) will then be saved. This script supports ResNet of depths 18, 34, 50, and 101.\n```bash\nsh scripts/get_small.sh\n```\n\n\n#### Scripts to reproduce the results in our paper\nTo train the ImageNet model with or without pruning, see the directory `scripts` (we use 8 GPUs for training).\n\n## Training Cifar-10\n```bash\nsh scripts/cifar10_resnet.sh\n```\nPlease take care with the hyper-parameter [`layer_end`](https://github.com/he-y/soft-filter-pruning/blob/master/scripts/cifar10_resnet.sh#L4-L9), which differs for ResNets of different depths.\n\n## Notes\n\n#### Torchvision Version\nWe use torch 0.3.1 with torchvision 0.2.0. 
If your torch version is 0.2.0, `transforms.RandomResizedCrop` should be replaced with `transforms.RandomSizedCrop` and `transforms.Resize` with `transforms.Scale`.\n\n#### Why use 100 epochs for training\nTraining for 100 epochs improves the accuracy slightly.\n\n#### Process of ImageNet dataset\nWe follow the [Facebook procedure for processing ImageNet](https://github.com/facebook/fb.resnet.torch/blob/master/INSTALL.md#download-the-imagenet-dataset).\nTwo subfolders (\"train\" and \"val\") are included in \"/path/to/ImageNet2012\".\nThe corresponding code is [here](https://github.com/he-y/soft-filter-pruning/blob/master/pruning_train.py#L129-L130).\n\n#### FLOPs Calculation\nRefer to [this file](https://github.com/he-y/soft-filter-pruning/blob/master/utils/cifar_resnet_flop.py).\n\n## Citation\n```\n@inproceedings{he2018soft,\n  title     = {Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks},\n  author    = {He, Yang and Kang, Guoliang and Dong, Xuanyi and Fu, Yanwei and Yang, Yi},\n  booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},\n  pages     = {2234--2240},\n  year      = {2018}\n}\n\n@article{he2019asymptotic,\n  title={Asymptotic Soft Filter Pruning for Deep Convolutional Neural Networks},\n  author={He, Yang and Dong, Xuanyi and Kang, Guoliang and Fu, Yanwei and Yan, Chenggang and Yang, Yi},\n  journal={IEEE Transactions on Cybernetics},\n  year={2019},\n  volume={},\n  number={},\n  pages={1-11},\n  doi={10.1109/TCYB.2019.2933477},\n  ISSN={2168-2267},\n}\n```\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhe-y%2Fsoft-filter-pruning","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhe-y%2Fsoft-filter-pruning","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhe-y%2Fsoft-filter-pruning/lists"}