## PyTorch implementation of [1611.06440 Pruning Convolutional Neural Networks for Resource Efficient Inference](https://arxiv.org/abs/1611.06440) ##

This demonstrates pruning a VGG16-based classifier that classifies a small dog/cat dataset.

This reduced the CPU runtime by 3x and the model size by 4x.

For more details you can read the [blog post](https://jacobgil.github.io/deeplearning/pruning-deep-learning).

At each pruning step, 512 filters are removed from the network.

Usage
-----

This repository uses the PyTorch ImageFolder loader, so it assumes that the images are in a separate directory for each category:

	Train
	    dogs
	    cats
	Test
	    dogs
	    cats

The images were taken from [the Kaggle Dogs vs. Cats competition](https://www.kaggle.com/c/dogs-vs-cats), but you should try training this on your own data and see if it works!

Training:
`python finetune.py --train`

Pruning:
`python finetune.py --prune`

TBD
---

 - Change the pruning to be done in one pass. Currently each of the 512 filters is pruned sequentially:

		for layer_index, filter_index in prune_targets:
			model = prune_vgg16_conv_layer(model, layer_index, filter_index)

	This is inefficient, since allocating new layers (especially fully connected layers with many parameters) is slow. In principle this can be done in a single pass.

 - Change prune_vgg16_conv_layer to support additional architectures. The most immediate one would be VGG with batch norm.
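The directory layout described under Usage maps directly onto torchvision's `ImageFolder`, which assigns one class label per subdirectory. A minimal loading sketch, assuming standard torchvision; the resize size and batch size are illustrative and may differ from what `finetune.py` actually uses:

```python
import torch
from torchvision import datasets, transforms


def make_loader(root, batch_size=32, shuffle=True):
    # Resize to VGG16's expected 224x224 input; the exact preprocessing
    # in finetune.py may differ (this is an illustrative sketch).
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # ImageFolder infers class labels from subdirectory names (dogs/, cats/).
    dataset = datasets.ImageFolder(root, transform=transform)
    return torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)
```

Point `make_loader("Train")` and `make_loader("Test", shuffle=False)` at the two roots shown above to get the train and test iterators.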
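One way the single-pass idea from the TBD section could be approached is to group the pruning targets by layer first, so each layer is rebuilt once with all of its doomed filters removed, instead of once per filter. The grouping helper below is a hypothetical sketch, not code from this repo; `prune_vgg16_conv_layer` is the repo's per-filter function, so a real one-pass version would also need a variant that accepts a list of filter indices:

```python
from collections import defaultdict


def group_prune_targets(prune_targets):
    """Group (layer_index, filter_index) pairs by layer.

    Filter indices are sorted in descending order so that removing one
    filter does not shift the indices of filters still to be removed.
    """
    by_layer = defaultdict(list)
    for layer_index, filter_index in prune_targets:
        by_layer[layer_index].append(filter_index)
    return {layer: sorted(filters, reverse=True) for layer, filters in by_layer.items()}


# Hypothetical single-pass driver (prune_conv_layer_filters is assumed,
# not part of this repo):
#
# for layer_index, filter_indices in group_prune_targets(prune_targets).items():
#     model = prune_conv_layer_filters(model, layer_index, filter_indices)
```

With this grouping, the expensive reallocation of each conv layer (and of the fully connected layers downstream of the last pruned conv layer) happens once per layer rather than once per filter.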