<!--
 * @Description:
 * @Author: HCQ
 * @Company(School): UCAS
 * @Email: 1756260160@qq.com
 * @Date: 2022-10-16 10:28:52
 * @LastEditTime: 2022-12-20 18:26:31
 * @FilePath: \model-compression-optimization\README.md
-->
# model-compression-optimization

Model compression and optimization for deployment with PyTorch, including knowledge distillation, quantization, and pruning.

## 1 Pruning

#### Method overview

| **Pruning Method** | **Code location** | **Docs** | **Remark** |
| --- | --- | --- | --- |
| **01 Seminal work: Learning Efficient Convolutional Networks Through Network Slimming (ICCV 2017)** | code: [pruning/01NetworkSlimming](pruning/01NetworkSlimming) <br> code reference: <br>[link1](https://github.com/foolwood/pytorch-slimming) <br> [link2](https://github.com/Eric-mingjie/network-slimming) | [docs](https://www.yuque.com/huangzhongqing/pytorch/iar4s1) | placeholder |
| **02 ThiNet (ICCV 2017)** | code: [1pruning/02ThiNet](1pruning/02ThiNet) <br> code reference: <br>https://github.com/SSriven/ThiNet | [docs](https://www.yuque.com/huangzhongqing/lightweight/pnzhr3tb8wfdciep#Kownj) | 1 |
| **03 HRank (CVPR 2020)** | code: [1pruning/03HRank](1pruning/03HRank) <br> code reference: <br>[link](https://github.com/lmbxmu/HRank) | [docs](https://www.yuque.com/huangzhongqing/lightweight/xqks1lrte52moirq#dRSJK) | placeholder |
| **Coming...** | 1 | 1 | 1 |

#### 01 Learning Efficient Convolutional Networks Through Network Slimming (ICCV 2017)

docs: https://www.yuque.com/huangzhongqing/pytorch/iar4s1

code: [pruning/01NetworkSlimming](pruning/01NetworkSlimming)

code reference:
> * https://github.com/foolwood/pytorch-slimming (supports VGG)
> * https://github.com/Eric-mingjie/network-slimming (also adds support for ResNet and DenseNet)

#### 02 TODO
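Network Slimming trains with an L1 penalty on the BatchNorm scale factors gamma and then removes the channels whose |gamma| falls below a single global threshold. The following is a minimal, framework-agnostic NumPy sketch of just the channel-selection step, not code from this repo; the function name and the prune ratio are illustrative:

```python
import numpy as np

def slimming_channel_mask(bn_gammas, prune_ratio=0.3):
    """Select channels to keep, given the BatchNorm scale factors
    gathered from every BN layer (Network Slimming, ICCV 2017).

    A single global threshold is taken as the prune_ratio quantile
    over |gamma| of all layers; channels below it are pruned.
    """
    all_gammas = np.concatenate([np.abs(g) for g in bn_gammas])
    threshold = np.quantile(all_gammas, prune_ratio)
    # One boolean mask per layer: True = keep the channel.
    return [np.abs(g) > threshold for g in bn_gammas]

# Toy example: two BN layers with 4 and 3 channels.
gammas = [np.array([0.9, 0.01, 0.5, 0.02]), np.array([0.03, 0.7, 0.4])]
masks = slimming_channel_mask(gammas, prune_ratio=0.5)
```

In the real method the masks are then used to build a narrower network (dropping the corresponding convolution filters), which is fine-tuned to recover accuracy.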
## 2 Quantization

#### Method overview

| **Quantization Method** | **Code location** | **Docs** | **Remark** |
| --- | --- | --- | --- |
| **Coming...** | 1 | 1 | 1 |

#### 01 TODO
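While this section is still TODO, the core idea can be sketched: post-training affine (asymmetric) quantization maps floats onto an 8-bit integer grid via a scale and zero-point. A minimal NumPy sketch, with illustrative function names that are not from this repo:

```python
import numpy as np

def affine_quantize(x, num_bits=8):
    """Map a float tensor to unsigned num_bits integers plus (scale, zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    # Scale maps the observed float range onto the integer grid.
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(np.round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def affine_dequantize(q, scale, zero_point):
    """Recover an approximation of the original floats."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
q, s, z = affine_quantize(x)
x_hat = affine_dequantize(q, s, z)  # close to x, up to quantization error
```

Real int8 deployment additionally needs per-channel scales, calibration over representative data, and integer-only kernels, which is what frameworks' quantization toolkits provide.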
## 3 Knowledge distillation

#### Method overview

| **KD Method** | **Code location** | **Docs** | **Remark** |
| --- | --- | --- | --- |
| **01 Seminal work: Distilling the Knowledge in a Neural Network (NIPS 2014)** | code: [3distillation/01Distilling the knowledge in a neural network](3distillation/01Distilling_the_knowledge_in_a_neural_network) <br> code reference: https://github.com/Eli-yu-first/Artificial_Intelligence | https://www.yuque.com/huangzhongqing/lightweight/lno6i7 | 1 |
| **02 Channel-wise Knowledge Distillation for Dense Prediction (ICCV 2021)** | code: [3distillation/02SemSeg-distill](3distillation/02SemSeg-distill) <br> code reference: https://github.com/irfanICMLL/TorchDistiller/tree/main/SemSeg-distill | https://www.yuque.com/huangzhongqing/lightweight/dourdf2ogh9y1cx9#VHZBv | 1 |
| **Coming...** | 1 | 1 | 1 |

#### 01 Seminal work: Distilling the Knowledge in a Neural Network (NIPS 2014)

docs: https://www.yuque.com/huangzhongqing/lightweight/lno6i7

code: [3distillation/01Distilling the knowledge in a neural network](3distillation/01Distilling_the_knowledge_in_a_neural_network)

code reference: https://github.com/Eli-yu-first/Artificial_Intelligence

#### 02 Channel-wise Knowledge Distillation for Dense Prediction (ICCV 2021)

docs: https://www.yuque.com/huangzhongqing/lightweight/dourdf2ogh9y1cx9#VHZBv

code: [3distillation/02SemSeg-distill](3distillation/02SemSeg-distill)

code reference: https://github.com/irfanICMLL/TorchDistiller/tree/main/SemSeg-distill
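The seminal method above combines a temperature-softened KL term between teacher and student logits with the ordinary hard-label cross-entropy. A minimal NumPy sketch of that loss (the temperature `T` and weighting `alpha` are illustrative hyperparameters, not values from this repo):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation loss: KL between temperature-softened
    teacher and student distributions (scaled by T^2, so gradients keep
    the same magnitude across temperatures) plus hard-label CE."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(teacher || student) on the softened distributions, batch-averaged.
    soft = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)) * T * T
    # Standard cross-entropy with the ground-truth labels at T = 1.
    p = softmax(student_logits)
    hard = np.mean(-np.log(p[np.arange(len(labels)), labels]))
    return alpha * soft + (1 - alpha) * hard
```

When the student matches the teacher exactly, the soft term vanishes and only the (1 - alpha)-weighted hard-label loss remains.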
## 4 Neural Architecture Search (NAS)

video:
* Neural Architecture Search video series: https://space.bilibili.com/1369507485/channel/collectiondetail?sid=788500
* PPT: [4NAS/NAS基础.pptx](4NAS/NAS基础.pptx)

#### Method overview

| **NAS Method** | **Code location** | **Docs** | **Remark** |
| --- | --- | --- | --- |
| **01 DARTS (ICLR 2019) [Differentiable Architecture Search]: your first NAS model** | code: [4NAS/01DARTS(ICLR2019)/pt.darts](4NAS/01DARTS(ICLR2019)/pt.darts) <br> code reference: <br> https://github.com/khanrc/pt.darts | https://www.yuque.com/huangzhongqing/lightweight/esyutcdebpmowgi3 | video (paper walkthrough of the DARTS algorithm): https://www.bilibili.com/video/BV1Mm4y1R7Cw/?vd_source=617461d43c4542e4c5a3ed54434a0e55 |
| **Coming...** | 1 | 1 | 1 |

#### 01 DARTS (ICLR 2019) [Differentiable Architecture Search]: your first NAS model

docs: https://www.yuque.com/huangzhongqing/lightweight/esyutcdebpmowgi3

code: [4NAS/01DARTS(ICLR2019)/pt.darts](4NAS/01DARTS(ICLR2019)/pt.darts)

code reference: https://github.com/khanrc/pt.darts

video (paper walkthrough of the DARTS algorithm): https://www.bilibili.com/video/BV1Mm4y1R7Cw/?vd_source=617461d43c4542e4c5a3ed54434a0e55

#### 02 TODO

## TODOlist

## License

Copyright (c) [双愚](https://github.com/HuangCongQing). All rights reserved.

Licensed under the [MIT](./LICENSE) License.

---

WeChat official account: **双愚** (huang_chongqing), on research, technology, and life reflections. Follow if interested.

![image](https://user-images.githubusercontent.com/20675770/169835565-08fc9a49-573e-478a-84fc-d9b7c5fa27ff.png)

**Previous posts:**
1. [This article offers no career advice, yet it may help you for life](https://mp.weixin.qq.com/s/rBR62qoAEeT56gGYTA0law)
2. [Let's talk about job interviews for university students](https://mp.weixin.qq.com/s?__biz=MzI4OTY1MjA3Mg==&mid=2247484016&idx=1&sn=08bc46266e00572e46f3e5d9ffb7c612&chksm=ec2aae77db5d276150cde1cb1dc6a53e03eba024adfbd1b22a048a7320c2b6872fb9dfef32aa&scene=178&cur_album_id=2253272068899471368#rd)
3. [Liu Zhiyuan of Tsinghua University: where do good research methods come from?](https://mp.weixin.qq.com/s?__biz=MzI4OTY1MjA3Mg==&mid=2247486340&idx=1&sn=6c5f69bb37d91a343b1a1e7f6929ddae&chksm=ec2aa783db5d2e95ba4c472471267721cafafbe10c298a6d5fae9fed295f455a72f783872249&scene=178&cur_album_id=1855544495514140673#rd)