{"id":26988790,"url":"https://github.com/jack-willturner/deep-compression","last_synced_at":"2025-04-03T20:28:16.354Z","repository":{"id":46588646,"uuid":"109419635","full_name":"jack-willturner/deep-compression","owner":"jack-willturner","description":"Learning both Weights and Connections for Efficient Neural Networks https://arxiv.org/abs/1506.02626","archived":false,"fork":false,"pushed_at":"2022-11-10T11:46:00.000Z","size":2521,"stargazers_count":177,"open_issues_count":4,"forks_count":38,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-03-22T15:49:44.852Z","etag":null,"topics":["deep-learning","pruning","pytorch","sparsity"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jack-willturner.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-11-03T16:42:09.000Z","updated_at":"2025-03-05T04:13:54.000Z","dependencies_parsed_at":"2023-01-21T21:15:14.849Z","dependency_job_id":null,"html_url":"https://github.com/jack-willturner/deep-compression","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fdeep-compression","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fdeep-compression/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fdeep-compression/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jack-willturner%2Fdeep-compression/manifests","owner_url":"https://repos.ecosyste
.ms/api/v1/hosts/GitHub/owners/jack-willturner","download_url":"https://codeload.github.com/jack-willturner/deep-compression/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247073339,"owners_count":20879043,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","pruning","pytorch","sparsity"],"created_at":"2025-04-03T20:28:15.697Z","updated_at":"2025-04-03T20:28:16.338Z","avatar_url":"https://github.com/jack-willturner.png","language":"Jupyter Notebook","readme":"# [Learning both Weights and Connections for Efficient Neural Networks](https://arxiv.org/abs/1506.02626)\n\n[![Total alerts](https://img.shields.io/lgtm/alerts/g/jack-willturner/DeepCompression-PyTorch.svg?logo=lgtm\u0026logoWidth=18)](https://lgtm.com/projects/g/jack-willturner/DeepCompression-PyTorch/alerts/) \n[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/jack-willturner/DeepCompression-PyTorch.svg?logo=lgtm\u0026logoWidth=18)](https://lgtm.com/projects/g/jack-willturner/DeepCompression-PyTorch/context:python)\n![GitHub](https://img.shields.io/github/license/jack-willturner/DeepCompression-PyTorch)\n\nA PyTorch implementation of [this paper](https://arxiv.org/abs/1506.02626).\n\nTo run, try:\n```bash\npython train.py --model='resnet34' --checkpoint='resnet34'\npython prune.py --model='resnet34' --checkpoint='resnet34'\n```\n\n## Usage \n\nThe core principle behind the training/pruning/finetuning algorithms is as follows:\n\n```python\nfrom models import get_model\nfrom pruners import get_pruner \n\nmodel = 
get_model(\"resnet18\")\npruner = get_pruner(\"L1Pruner\", \"unstructured\")\n\nfor prune_rate in [10, 40, 60, 80]:\n    pruner.prune(model, prune_rate)\n```\n\nWe can choose between structured and unstructured pruning, as well as between the pruning methods in `pruners` (at the time of writing, only magnitude-based pruning and Fisher pruning are supported).\n\n## Bring your own models\nTo add a new model family to the repository, you need to do two things:\n1. Swap out the convolutional layers to use the `ConvBNReLU` class\n2. Define a `get_prunable_layers` method that returns all instances of `ConvBNReLU` you want to be prunable\n\n## Summary\n\nGiven a family of ResNets, we can construct a Pareto frontier of the tradeoff between accuracy and the number of parameters:\n\n![Accuracy vs. parameter count for a family of ResNets](./resources/resnets.png)\n\nHan et al. posit that we can beat this Pareto frontier by leaving the network structure fixed but removing individual parameters:\n\n![Pareto frontier of pruned networks](./resources/pareto.png)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjack-willturner%2Fdeep-compression","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjack-willturner%2Fdeep-compression","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjack-willturner%2Fdeep-compression/lists"}