{"id":15036366,"url":"https://github.com/apple/ml-upscale","last_synced_at":"2025-10-19T22:32:13.390Z","repository":{"id":180822051,"uuid":"665387606","full_name":"apple/ml-upscale","owner":"apple","description":"Export utility for unconstrained channel pruned models","archived":false,"fork":false,"pushed_at":"2023-07-14T22:14:11.000Z","size":3440,"stargazers_count":71,"open_issues_count":0,"forks_count":11,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-01-30T07:33:04.710Z","etag":null,"topics":["deep-learning","export","machine-learning","pruning"],"latest_commit_sha":null,"homepage":"https://upscale.aaalv.in","language":"Jupyter Notebook","has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/apple.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-07-12T05:15:50.000Z","updated_at":"2025-01-15T11:22:08.000Z","dependencies_parsed_at":"2024-09-24T20:30:59.700Z","dependency_job_id":null,"html_url":"https://github.com/apple/ml-upscale","commit_stats":null,"previous_names":["apple/ml-upscale"],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/apple%2Fml-upscale","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/apple%2Fml-upscale/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/apple%2Fml-upscale/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/apple%2Fml-upscale/manifests","owner_url":"https://repos.ecosyste.ms/ap
i/v1/hosts/GitHub/owners/apple","download_url":"https://codeload.github.com/apple/ml-upscale/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":237224835,"owners_count":19275093,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","export","machine-learning","pruning"],"created_at":"2024-09-24T20:30:56.227Z","updated_at":"2025-10-19T22:32:12.797Z","avatar_url":"https://github.com/apple.png","language":"Jupyter Notebook","readme":"# Unconstrained Channel Pruning · [Paper](https://openreview.net/forum?id=25fe54GXLo)\n\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1vTZBWB3O2oj-g8oH5sj7j4CJ_xe6mv1p?usp=sharing)\n\n\n**UPSCALE: Unconstrained Channel Pruning** @ [ICML 2023](https://openreview.net/forum?id=25fe54GXLo)\u003cbr/\u003e\n[Alvin Wan](https://alvinwan.com), [Hanxiang Hao](https://scholar.google.com/citations?user=IMn1m2sAAAAJ\u0026hl=en\u0026oi=ao), [Kaushik Patnaik](https://openreview.net/profile?id=~Kaushik_Patnaik1), [Yueyang Xu](https://github.com/inSam), [Omer Hadad](https://scholar.google.com/citations?user=cHZBEjQAAAAJ\u0026hl=en), [David Güera](https://davidguera.com), [Zhile Ren](https://jrenzhile.com), [Qi Shan](https://scholar.google.com/citations?user=0FbnKXwAAAAJ\u0026hl=en)\n\nBy removing constraints from existing pruners, we improve ImageNet accuracy for post-training pruned models by 2.1 points on average - benefiting DenseNet (+16.9), EfficientNetV2 (+7.9), and ResNet (+6.2). 
Furthermore, for these unconstrained pruned models, UPSCALE improves inference speeds by up to 2x over a baseline export.

## Quick Start

Install our package.

```bash
pip install apple-upscale
```

Mask and prune channels, using the default magnitude pruner.

```python
import torch, torchvision
from upscale import MaskingManager, PruningManager

x = torch.rand((1, 3, 224, 224), device='cuda')
model = torchvision.models.get_model('resnet18', weights='DEFAULT').cuda()  # get any pytorch model
MaskingManager(model).importance().mask()
PruningManager(model).compute([x]).prune()
```

## Customize Pruning

We provide a number of pruning heuristics out of the box:

- Magnitude ([L1](https://arxiv.org/abs/1608.08710) and [L2](https://arxiv.org/abs/1608.03665))
- [LAMP](https://arxiv.org/abs/2010.07611)
- [FPGM](https://arxiv.org/abs/1811.00250)
- [HRank](https://arxiv.org/abs/2002.10179)

You can pass the desired heuristic into the `MaskingManager.importance` call, and configure the pruning ratio via the `amount` argument to `MaskingManager.mask`.
A value of `0.25` means 25% of channels are set to zero.

```python
from upscale.importance import LAMP
MaskingManager(model).importance(LAMP()).mask(amount=0.25)
```

You can also zero out channels using any method you see fit.

```python
model.conv0.weight[:, 24] = 0
```

Then, run our export.

```python
PruningManager(model).compute([x]).prune()
```

## Advanced

You may want direct access to network segments to build a heavily-customized pruning algorithm.

```python
for segment in MaskingManager(model).segments():
    # prune each segment in the network independently
    for layer in segment.layers:
        ...  # operate on each layer in the segment
```

## Development

> **NOTE:** See [src/upscale/pruning/README.md](src/upscale/pruning/README.md) for more details on how the core export algorithm code is organized.

Clone and setup.

```bash
git clone git@github.com:apple/ml-upscale.git
cd ml-upscale
pip install -e .
```

Run tests.

```bash
py.test src tests --doctest-modules
```

## Paper

Follow the development installation instructions to have the paper code under `paper/` available.

To run the baseline unconstrained export, pass `baseline=True` to `PruningManager.prune`.

```python
PruningManager(model).compute([x]).prune(baseline=True)
```

To reproduce the paper results, run

```bash
python paper/main.py resnet18
```

Plug in any model in the `torchvision.models` namespace.

```
usage: main.py [-h] [--side {input,output} [{input,output} ...]]
               [--method {constrained,unconstrained} [{constrained,unconstrained} ...]]
               [--amount AMOUNT [AMOUNT ...]] [--epochs EPOCHS]
               [--heuristic {l1,l2,lamp,fpgm,hrank}] [--global] [--out OUT]
               [--force] [--latency] [--clean]
               model

positional arguments:
  model                 model to prune

options:
  -h, --help            show this help message and exit
  --side {input,output} [{input,output} ...]
                        prune which "side" -- producers, or consumers
  --method {constrained,unconstrained} [{constrained,unconstrained} ...]
                        how to handle multiple branches
  --amount AMOUNT [AMOUNT ...]
                        amounts to prune by. .6 means 60 percent pruned
  --epochs EPOCHS       number of epochs to train for
  --heuristic {l1,l2,lamp,fpgm,hrank}
                        pruning heuristic
  --global              apply heuristic globally
  --out OUT             directory to write results.csv to
  --force               force latency rerun
  --latency             measure latency locally
  --clean               clean the dataframe
```

## Citation

If you find this useful for your research, please consider citing

```
@inproceedings{wan2023upscale,
  title={UPSCALE: Unconstrained Channel Pruning},
  author={Alvin Wan and Hanxiang Hao and Kaushik Patnaik and Yueyang Xu and Omer Hadad and David Guera and Zhile Ren and Qi Shan},
  booktitle={ICML},
  year={2023}
}
```
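To clarify what a magnitude heuristic such as L1 computes, here is a minimal, framework-free sketch. It is not the `upscale` library's implementation, and the helper names are hypothetical: each output channel is scored by the sum of absolute weights, and the lowest-scoring `amount` fraction of channels is selected for masking, mirroring a call like `mask(amount=0.25)`.

```python
def l1_channel_scores(weight):
    """Score each output channel by the L1 norm of its weights.

    `weight` is a nested list shaped [out_channels][...], standing in
    for a conv weight tensor.
    """
    def total(x):
        # recursively sum absolute values over arbitrarily nested lists
        return sum(total(v) for v in x) if isinstance(x, list) else abs(x)
    return [total(channel) for channel in weight]

def channels_to_mask(scores, amount):
    """Indices of the lowest-scoring `amount` fraction of channels."""
    k = int(len(scores) * amount)
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return sorted(order[:k])

# Toy 4-channel "layer": channels 1 and 3 carry the smallest magnitudes.
w = [[2.0, -3.0], [0.25, 0.25], [5.0, 1.0], [-0.5, 0.25]]
print(channels_to_mask(l1_channel_scores(w), amount=0.5))  # -> [1, 3]
```

The library's `importance` step additionally handles cross-layer dependencies; this sketch only shows the per-layer scoring idea.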