# Torch optimizer

Torch optimizer is a Python library for optimizing PyTorch models using neural network pruning. Neural network pruning can be formulated as an optimization problem: find the best subset of the network's weights, i.e.:

**Maximize:** Accuracy(model(W • M)) <br>
**Subject to:** Resource<sub>j</sub>(model(W • M)) <= Budget<sub>j</sub>

where W are the model's weights, M is a binary mask with size |M| = |W|, a resource is any quantity we want to reduce (e.g. FLOPs, MACs, latency, model size, ...), and a budget is the desired upper bound on that resource.
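The formulation above can be illustrated with a toy, plain-Python stand-in (all names here are illustrative, not the library's API): the resource is simply the number of unpruned weights, and a made-up accuracy proxy is maximized over all masks that fit the budget.

```python
# Toy stand-in for the pruning formulation: W are "weights", M is a binary
# mask, the resource is the number of kept weights, and the "accuracy" is a
# made-up proxy (total magnitude of kept weights). Illustrative only.
W = [0.9, -0.2, 0.5, 0.05, -0.7, 0.3]
BUDGET = 4  # keep at most 4 weights

def resource(mask):
    # Resource(model(W • M)): here simply the number of kept weights.
    return sum(mask)

def accuracy_proxy(mask):
    # Accuracy(model(W • M)): toy proxy -- total magnitude of kept weights.
    return sum(abs(w) for w, m in zip(W, mask) if m)

# Exhaustively search all 2^|W| masks, keeping only feasible ones.
best_mask, best_score = None, float("-inf")
for i in range(2 ** len(W)):
    mask = [(i >> j) & 1 for j in range(len(W))]
    if resource(mask) <= BUDGET:           # Subject to: Resource <= Budget
        score = accuracy_proxy(mask)       # Maximize: Accuracy
        if score > best_score:
            best_mask, best_score = mask, score

print(best_mask)  # [1, 0, 1, 0, 1, 1] -- the 4 largest-magnitude weights
```

Exhaustive search is only viable for toy sizes; for real networks the search space is 2<sup>|W|</sup>, which is why the library resorts to a meta-heuristic.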


The library provides several features that facilitate solving this optimization problem.

## Features

### Objective functions

The [Objective](./torchopt/optim/objective.py) module provides a common interface for modelling optimization objective functions; an objective for an arbitrary optimization problem can be created by implementing this interface. The module also provides several objective implementations for neural network pruning that evaluate a pruned network's performance and efficiency, such as accuracy or the relative decrease in MACs (multiply–accumulate operations) with respect to the MACs of the original unpruned network.

### Constraints

The [Constraint](./torchopt/optim/constraint.py) module provides a common interface for modelling optimization constraints; a constraint for an arbitrary optimization problem can be created by implementing this interface. For neural network pruning, a constraint that checks the validity of a pruning (i.e. no layer may contain an empty weight tensor after pruning) is provided.

### Optimization algorithms

The [Optimizer](./torchopt/optim/optimizer.py) module contains a common interface for optimization algorithm implementations, together with a Genetic algorithm (GA) meta-heuristic. Two GA implementations are provided: one for integer optimization problems and one for binary optimization problems. A detailed description of the GA implementations can be found in the module.

### Pruning

The [Pruner](./torchopt/prune/pruner.py) module provides basic functionality for structured neural network pruning, which can be performed at different levels of granularity. For channel pruning, where individual filters / neurons are pruned, a channel pruner is provided. 
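As a rough illustration of what a binary GA meta-heuristic of this kind does (a minimal plain-Python sketch, not the library's actual optimizer classes): individuals are bitstrings, tournament selection picks parents, one-point crossover and bit-flip mutation produce offspring, and a constraint check rejects invalid individuals.

```python
import random

random.seed(0)

# Minimal binary GA sketch (illustrative only, not the library's API):
# maximize the number of 1-bits in a bitstring, subject to the toy
# constraint that bit 0 stays set (a stand-in for "no empty layer").
IND_SIZE, POP_SIZE, N_GEN = 16, 20, 40

def feasible(ind):
    return ind[0] == 1

def fitness(ind):
    return sum(ind)

def tournament(pop, k=3):
    # Pick the fittest of k randomly sampled individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover.
    point = random.randrange(1, IND_SIZE)
    return a[:point] + b[point:]

def mutate(ind, p=0.05):
    # Independent bit-flip mutation.
    return [1 - bit if random.random() < p else bit for bit in ind]

# Initialize a feasible population.
pop = [[1] + [random.randint(0, 1) for _ in range(IND_SIZE - 1)]
       for _ in range(POP_SIZE)]

for _ in range(N_GEN):
    offspring = []
    while len(offspring) < POP_SIZE:
        child = mutate(crossover(tournament(pop), tournament(pop)))
        if feasible(child):  # reject constraint-violating individuals
            offspring.append(child)
    pop = offspring

best = max(pop, key=fitness)
print(fitness(best))  # typically close to IND_SIZE after 40 generations
```

In the library, the individual would instead encode a pruning mask, the fitness would be an objective such as accuracy or MACs reduction, and the feasibility check would be a constraint such as `ChannelConstraint`.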
For module-level pruning, where entire layers or blocks of the network can be pruned, the library provides a module pruner implementation.

## Installation

Use the package manager [pip](https://pip.pypa.io/en/stable/) to install Torch optimizer.

```bash
pip install torch-optim
```

## Usage

You can train your own PyTorch model on an arbitrary dataset and then use the library to perform structured pruning. Here is a simple example:

```python
import torch

from torchopt import utils
from torchopt.prune.pruner import ChannelPruner
from torchopt.optim.optimizer import IntegerGAOptimizer
from torchopt.optim.objective import Accuracy, Macs, ObjectiveContainer
from torchopt.optim.constraint import ChannelConstraint

from thop import profile


# Get your trained model
model = torch.load('path/to/trained/model.pth')

# Get the dataset on which the model was trained. The dataset should be split into training,
# validation and test sets. The validation set is used to measure the accuracy of the pruned model.
train_set, val_set, test_set = get_dataset()

# Define the model's input shape
input_shape = (1, 3, 32, 32)

# Specify the device on which optimization will be performed
device = "cuda" if torch.cuda.is_available() else "cpu"

# Prunable modules are all linear and convolutional layers in the net
names = [name for name, _ in utils.prunable_modules(model)]
bounds = [(0, len(module.weight) - 1) for _, module in utils.prunable_modules(model)]
pruner = ChannelPruner(names, input_shape)

# Create the GA optimizer
optimizer = IntegerGAOptimizer(
    ind_size=len(names),
    pop_size=100,
    elite_num=10,
    tourn_size=10,
    n_gen=30,
    mutp=0.1,
    mut_indp=0.05,
    cx_indp=0.5,
    bounds=bounds
)

sample = torch.randn(input_shape, device=device)
orig_acc = utils.evaluate(model, test_set, device)
orig_macs, _ = profile(model, inputs=(sample,), verbose=False)

# Create a composed objective function to get the best
trade-off between model accuracy and MACs reduction
acc = Accuracy(model=model, pruner=pruner, weight=1.0, val_data=val_set, orig_acc=orig_acc)
macs = Macs(model=model, pruner=pruner, orig_macs=orig_macs, weight=1.0, in_shape=input_shape)
objective = ObjectiveContainer(acc, macs)
constraint = ChannelConstraint(model=model, pruner=pruner)

# Perform the optimization
solution = optimizer.maximize(objective, constraint)

# Prune the model according to the best solution found by the GA
pruned_model = pruner.prune(model=model, mask=solution)
```

## License

[MIT](https://choosealicense.com/licenses/mit/)
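A composed objective like the one in the usage example weights each term and sums them, and the GA maximizes the total. A minimal plain-Python sketch of that idea (hypothetical helper names, not the library's API):

```python
# Sketch of a weighted, composed objective (illustrative names only --
# the library's Accuracy / Macs / ObjectiveContainer classes encapsulate
# this idea with their own signatures).
def accuracy_term(pruned_acc, orig_acc, weight=1.0):
    # Reward retaining accuracy relative to the unpruned model.
    return weight * (pruned_acc / orig_acc)

def macs_term(pruned_macs, orig_macs, weight=1.0):
    # Reward the relative decrease in MACs versus the unpruned model.
    return weight * (1.0 - pruned_macs / orig_macs)

def composed_objective(pruned_acc, orig_acc, pruned_macs, orig_macs):
    # Sum of weighted terms; the GA maximizes this scalar.
    return accuracy_term(pruned_acc, orig_acc) + macs_term(pruned_macs, orig_macs)

score = composed_objective(pruned_acc=0.90, orig_acc=0.92,
                           pruned_macs=60e6, orig_macs=100e6)
print(round(score, 4))  # 1.3783 -- 0.9783 accuracy term + 0.4 MACs term
```

Adjusting the `weight` arguments shifts the trade-off: a larger accuracy weight favors masks that retain accuracy, a larger MACs weight favors more aggressive pruning.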