{"id":19646711,"url":"https://github.com/onnx/neural-compressor","last_synced_at":"2025-07-29T18:05:05.307Z","repository":{"id":236150050,"uuid":"791940535","full_name":"onnx/neural-compressor","owner":"onnx","description":"Model compression for ONNX","archived":false,"fork":false,"pushed_at":"2024-10-12T13:20:49.000Z","size":2452,"stargazers_count":72,"open_issues_count":10,"forks_count":9,"subscribers_count":5,"default_branch":"main","last_synced_at":"2024-10-30T02:03:54.769Z","etag":null,"topics":["deep-learning","model-compression","model-pruning","onnx","onnxruntime","quantization"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/onnx.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"docs/CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"docs/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-04-25T17:01:59.000Z","updated_at":"2024-10-20T03:21:32.000Z","dependencies_parsed_at":"2024-06-24T06:40:53.498Z","dependency_job_id":"61358522-51e1-4a2c-a455-b4fe231e6668","html_url":"https://github.com/onnx/neural-compressor","commit_stats":null,"previous_names":["onnx/neural-compressor"],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/onnx%2Fneural-compressor","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/onnx%2Fneural-compressor/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/onnx%2Fneural-compressor/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/on
nx%2Fneural-compressor/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/onnx","download_url":"https://codeload.github.com/onnx/neural-compressor/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247345850,"owners_count":20924102,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","model-compression","model-pruning","onnx","onnxruntime","quantization"],"created_at":"2024-11-11T14:39:57.970Z","updated_at":"2025-04-05T14:05:01.283Z","avatar_url":"https://github.com/onnx.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\nNeural Compressor\n===========================\n\u003ch3\u003e An open-source Python library supporting popular model compression techniques for ONNX\u003c/h3\u003e\n\n[![python](https://img.shields.io/badge/python-3.8%2B-blue)](https://github.com/onnx/neural-compressor)\n[![version](https://img.shields.io/badge/release-1.0-green)](https://github.com/onnx/neural-compressor/releases)\n[![license](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/onnx/neural-compressor/blob/master/LICENSE)\n\n\n---\n\u003cdiv align=\"left\"\u003e\n\nNeural Compressor provides popular model compression techniques inherited from [Intel Neural Compressor](https://github.com/intel/neural-compressor), focused on ONNX model quantization such as SmoothQuant and weight-only quantization through [ONNX Runtime](https://onnxruntime.ai/). 
In particular, the tool provides the following key features, typical examples, and open collaborations:\n\n* Supports a wide range of Intel hardware such as [Intel Xeon Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html) and AIPC\n\n* Validates popular LLMs such as [Llama2](./examples/nlp/huggingface_model/text_generation/), [Llama3](./examples/nlp/huggingface_model/text_generation/), [Qwen2](./examples/nlp/huggingface_model/text_generation/) and other models such as [BERT-base](./examples/nlp/bert/quantization) and [ResNet50](./examples/image_recognition/resnet50/quantization/ptq_static) from popular model hubs such as [Hugging Face](https://huggingface.co/) and the [ONNX Model Zoo](https://github.com/onnx/models#models), by leveraging automatic [accuracy-driven](./docs/design.md#workflow) quantization strategies\n\n* Collaborates with software platforms such as [Microsoft Olive](https://github.com/microsoft/Olive), and open AI ecosystems such as [Hugging Face](https://huggingface.co/blog/intel), [ONNX](https://github.com/onnx/models#models) and [ONNX Runtime](https://github.com/microsoft/onnxruntime)\n\n## Installation\n\n### Install from source\n```Shell\ngit clone https://github.com/onnx/neural-compressor.git\ncd neural-compressor\npip install -r requirements.txt\npip install .\n```\n\n\u003e **Note**:\n\u003e Further installation methods can be found in the [Installation Guide](./docs/installation_guide.md).\n\n## Getting Started\n\nSet up the environment:\n```bash\npip install onnx-neural-compressor \"onnxruntime\u003e=1.17.0\" onnx\n```\nAfter successfully installing these packages, try your first quantization program.\n\u003e Note: please install from source until the formal PyPI release.\n\n### Weight-Only Quantization (LLMs)\nThe following example demonstrates weight-only quantization on LLMs; when multiple devices are available, the most efficient device is selected automatically.\n\nRun the 
example:\n```python\nfrom onnx_neural_compressor.quantization import matmul_nbits_quantizer\n\nalgo_config = matmul_nbits_quantizer.RTNWeightOnlyQuantConfig()\nquant = matmul_nbits_quantizer.MatMulNBitsQuantizer(\n    model,\n    n_bits=4,\n    block_size=32,\n    is_symmetric=True,\n    algo_config=algo_config,\n)\nquant.process()\nbest_model = quant.model\n```\n\n### Static Quantization\n\n```python\nfrom onnx_neural_compressor.quantization import quantize, config\nfrom onnx_neural_compressor import data_reader\n\n\nclass DataReader(data_reader.CalibrationDataReader):\n    def __init__(self):\n        self.encoded_list = []\n        # append calibration data to self.encoded_list\n\n        self.iter_next = iter(self.encoded_list)\n\n    def get_next(self):\n        return next(self.iter_next, None)\n\n    def rewind(self):\n        self.iter_next = iter(self.encoded_list)\n\n\n# named to avoid shadowing the imported data_reader module\ncalibration_data_reader = DataReader()\nqconfig = config.StaticQuantConfig(calibration_data_reader=calibration_data_reader)\nquantize(model, output_model_path, qconfig)\n```\n\n## Documentation\n\n\u003ctable class=\"docutils\"\u003e\n  \u003cthead\u003e\n  \u003ctr\u003e\n    \u003cth colspan=\"8\"\u003eOverview\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n    \u003ctr\u003e\n      \u003ctd colspan=\"3\" align=\"center\"\u003e\u003ca href=\"./docs/design.md#architecture\"\u003eArchitecture\u003c/a\u003e\u003c/td\u003e\n      \u003ctd colspan=\"3\" align=\"center\"\u003e\u003ca href=\"./docs/design.md#workflow\"\u003eWorkflow\u003c/a\u003e\u003c/td\u003e\n      \u003ctd colspan=\"3\" align=\"center\"\u003e\u003ca href=\"./examples/\"\u003eExamples\u003c/a\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n  \u003cthead\u003e\n    \u003ctr\u003e\n      \u003cth colspan=\"8\"\u003eFeature\u003c/th\u003e\n    \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n    \u003ctr\u003e\n        \u003ctd colspan=\"4\" align=\"center\"\u003e\u003ca 
href=\"./docs/quantization.md\"\u003eQuantization\u003c/a\u003e\u003c/td\u003e\n          \u003ctd colspan=\"4\" align=\"center\"\u003e\u003ca href=\"./docs/smooth_quant.md\"\u003eSmoothQuant\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n      \u003ctr\u003e\n          \u003ctd colspan=\"4\" align=\"center\"\u003e\u003ca href=\"./docs/quantization_weight_only.md\"\u003eWeight-Only Quantization (INT8/INT4)\u003c/a\u003e\u003c/td\u003e\n          \u003ctd colspan=\"4\" align=\"center\"\u003e\u003ca href=\"./docs/quantization_layer_wise.md\"\u003eLayer-Wise Quantization\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\n\n\n## Additional Content\n\n* [Contribution Guidelines](./docs/source/CONTRIBUTING.md)\n* [Security Policy](SECURITY.md)\n\n## Communication\n- [GitHub Issues](https://github.com/onnx/neural-compressor/issues): for bug reports, feature requests, and questions.\n- [Email](mailto:inc.maintainers@intel.com): reach out to propose research ideas on model compression techniques or to discuss collaborations.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fonnx%2Fneural-compressor","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fonnx%2Fneural-compressor","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fonnx%2Fneural-compressor/lists"}