{"id":26988801,"url":"https://github.com/jack-willturner/batchnorm-pruning","last_synced_at":"2025-09-02T11:36:13.560Z","repository":{"id":49799630,"uuid":"121409473","full_name":"jack-willturner/batchnorm-pruning","owner":"jack-willturner","description":"Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124","archived":false,"fork":false,"pushed_at":"2018-11-14T11:41:10.000Z","size":43612,"stargazers_count":71,"open_issues_count":9,"forks_count":16,"subscribers_count":7,"default_branch":"master","last_synced_at":"2025-04-03T20:34:58.837Z","etag":null,"topics":["batchnorm","deep-learning","lasso","pruning","sgd"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jack-willturner.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-02-13T16:56:47.000Z","updated_at":"2024-06-24T07:19:15.000Z","dependencies_parsed_at":"2022-09-14T11:51:56.238Z","dependency_job_id":null,"html_url":"https://github.com/jack-willturner/batchnorm-pruning","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/jack-willturner/batchnorm-pruning","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fbatchnorm-pruning","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fbatchnorm-pruning/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fbatchnorm-pruning/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fbatchnorm-pruning/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jack-willturner","download_url":"https://codeload.github.com/jack-willturner/batchnorm-pruning/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fbatchnorm-pruning/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":273278877,"owners_count":25077306,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-02T02:00:09.530Z","response_time":77,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["batchnorm","deep-learning","lasso","pruning","sgd"],"created_at":"2025-04-03T20:28:17.464Z","updated_at":"2025-09-02T11:36:13.505Z","avatar_url":"https://github.com/jack-willturner.png","language":"Python","readme":"# [Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers](https://arxiv.org/abs/1802.00124)\n\nTensorflow implementation from original author [here](https://github.com/bobye/batchnorm_prune).\n\nA PyTorch implementation of [this paper](https://arxiv.org/abs/1802.00124). \n\nTo do list:\n- [ ] Extend to MobileNet and VGG\n- [ ] Fix MAC op calculation for strided convolution\n- [ ] Include training scheme from paper\n\n## Usage\nI haven't included any code for transfer learning/ using pretrained models, so everything here must be done from scratch.\nYou will have to rewrite your models to use my extended version of batch normalization, so any occurences of `nn.BatchNorm2d`\nshould be replaced with `bn.BatchNorm2dEx`. I have included a few examples in the `models` folder. Note that in the forward pass\nyou need to provide the `weight` from the last convolution to the batchnorm (e.g. `out = self.bn1(self.conv1(x), self.conv1.weight)`.  \n\nI will add command line support for hyperparameters soon, but for now they will have to be altered in the `main` script itself. Currently the default is set to train ResNet-18; this can easily be swapped out for another model.\n\n```bash\npython main.py\n```\n\n## Results on CIFAR-10\n| Model                | Size  | MAC ops | Inf. time | Accuracy |\n|----------------------|-------|---------|-----------|----------|\n| ResNet-18            |       |         |           |          |\n| ResNet-18-Compressed |       |         |           |          |\n| VGG-16               |       |         |           |          |\n| VGG-16-Compressed    |       |         |           |          |\n| MobileNet            |       |         |           |          |\n| MobileNet-Compressed |       |         |           |          |\n\n## Citing\nNow accepted to ICLR 2018, will update bibtex soon:\n```\n@article{ye2018rethinking,\n  title={Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers},\n  author={Ye, Jianbo and Lu, Xin and Lin, Zhe and Wang, James Z},\n  journal={arXiv preprint arXiv:1802.00124},\n  year={2018}\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjack-willturner%2Fbatchnorm-pruning","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjack-willturner%2Fbatchnorm-pruning","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjack-willturner%2Fbatchnorm-pruning/lists"}