{"id":13442131,"url":"https://github.com/intel/neural-compressor","last_synced_at":"2025-05-12T13:20:44.353Z","repository":{"id":37041544,"uuid":"281528773","full_name":"intel/neural-compressor","owner":"intel","description":"SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) \u0026 sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime","archived":false,"fork":false,"pushed_at":"2025-05-09T09:05:51.000Z","size":491827,"stargazers_count":2398,"open_issues_count":55,"forks_count":267,"subscribers_count":32,"default_branch":"master","last_synced_at":"2025-05-11T05:02:02.532Z","etag":null,"topics":["auto-tuning","awq","fp4","gptq","int4","int8","knowledge-distillation","large-language-models","low-precision","mxformat","post-training-quantization","pruning","quantization","quantization-aware-training","smoothquant","sparsegpt","sparsity"],"latest_commit_sha":null,"homepage":"https://intel.github.io/neural-compressor/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/intel.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-07-21T23:49:56.000Z","updated_at":"2025-05-09T09:32:05.000Z","dependencies_parsed_at":"2023-09-22T20:56:01.799Z","dependency_job_id":"200a2799-cc41-4544-bdbd-1c0a90d4ca16","html_url":"https://github.com/intel/neural-compressor","commit_stats":{"total_commits":3588,"total_committers":125,"mean_commits":28.704,"dds":0.9077480490523969,"last_synced_commit":"444efb7127a354737bfb9b331fde0c83608597ba"},"previous_names":["intel/lpot","intel/lp-opt-tool"],"tags_count":53,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fneural-compressor","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fneural-compressor/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fneural-compressor/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fneural-compressor/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/intel","download_url":"https://codeload.github.com/intel/neural-compressor/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253745197,"owners_count":21957320,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["auto-tuning","awq","fp4","gptq","int4","int8","knowledge-distillation","large-language-models","low-precision","mxformat","post-training-quantization","pruning","quantization","quantization-aware-training","smoothquant","sparsegpt","sparsity"],"created_at":"2024-07-31T03:01:42.036Z","updated_at":"2025-05-12T13
:20:44.343Z","avatar_url":"https://github.com/intel.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\nIntel® Neural Compressor\n===========================\n\u003ch3\u003e An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, and ONNX Runtime)\u003c/h3\u003e\n\n[![python](https://img.shields.io/badge/python-3.8%2B-blue)](https://github.com/intel/neural-compressor)\n[![version](https://img.shields.io/badge/release-3.3.1-green)](https://github.com/intel/neural-compressor/releases)\n[![license](https://img.shields.io/badge/license-Apache%202-blue)](https://github.com/intel/neural-compressor/blob/master/LICENSE)\n[![coverage](https://img.shields.io/badge/coverage-85%25-green)](https://github.com/intel/neural-compressor)\n[![Downloads](https://static.pepy.tech/personalized-badge/neural-compressor?period=total\u0026units=international_system\u0026left_color=grey\u0026right_color=green\u0026left_text=downloads)](https://pepy.tech/project/neural-compressor)\n\n[Architecture](./docs/source/3x/design.md#architecture)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[Workflow](./docs/source/3x/design.md#workflows)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[LLMs Recipes](./docs/source/llm_recipes.md)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[Results](./docs/source/validated_model_list.md)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[Documentations](https://intel.github.io/neural-compressor)\n\n---\n\u003cdiv align=\"left\"\u003e\n\nIntel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as [TensorFlow](https://www.tensorflow.org/), [PyTorch](https://pytorch.org/), and [ONNX Runtime](https://onnxruntime.ai/),\nas well as Intel extensions such as [Intel Extension for TensorFlow](https://github.com/intel/intel-extension-for-tensorflow) and [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).\nIn particular, the tool provides the key features, typical examples, and open collaborations as below:\n\n* Support a wide range of Intel hardware such as [Intel Gaudi Al Accelerators](https://www.intel.com/content/www/us/en/products/details/processors/ai-accelerators/gaudi-overview.html), [Intel Core Ultra Processors](https://www.intel.com/content/www/us/en/products/details/processors/core-ultra.html), [Intel Xeon Scalable Processors](https://www.intel.com/content/www/us/en/products/details/processors/xeon/scalable.html), [Intel Xeon CPU Max Series](https://www.intel.com/content/www/us/en/products/details/processors/xeon/max-series.html), [Intel Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/flex-series.html), and [Intel Data Center GPU Max Series](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/max-series.html) with extensive testing;\nsupport AMD CPU, ARM CPU, and NVidia GPU through ONNX Runtime with limited testing; support NVidia GPU for some WOQ algorithms like AutoRound and HQQ.\n\n* Validate popular LLMs such as [LLama2](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Falcon](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), 
[GPT-J](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [Bloom](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), [OPT](/examples/pytorch/nlp/huggingface_models/language-modeling/quantization/llm), and more than 10,000 other models such as [Stable Diffusion](/examples/pytorch/nlp/huggingface_models/text-to-image/quantization), [BERT-Large](/examples/pytorch/nlp/huggingface_models/text-classification/quantization/ptq_static/fx), and [ResNet50](/examples/pytorch/image_recognition/torchvision_models/quantization/ptq/cpu/fx) from popular model hubs such as [Hugging Face](https://huggingface.co/), [Torch Vision](https://pytorch.org/vision/stable/index.html), and [ONNX Model Zoo](https://github.com/onnx/models#models), with automatic [accuracy-driven](/docs/source/design.md#workflow) quantization strategies.

* Collaborate with cloud marketplaces such as [Google Cloud Platform](https://console.cloud.google.com/marketplace/product/bitnami-launchpad/inc-tensorflow-intel?project=verdant-sensor-286207), [Amazon Web Services](https://aws.amazon.com/marketplace/pp/prodview-yjyh2xmggbmga#pdp-support), and [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bitnami.inc-tensorflow-intel); software platforms such as [Alibaba Cloud](https://www.intel.com/content/www/us/en/developer/articles/technical/quantize-ai-by-oneapi-analytics-on-alibaba-cloud.html), [Tencent TACO](https://new.qq.com/rain/a/20221202A00B9S00), and [Microsoft Olive](https://github.com/microsoft/Olive); and open AI ecosystems such as [Hugging Face](https://huggingface.co/blog/intel), [PyTorch](https://pytorch.org/tutorials/recipes/intel_neural_compressor_for_pytorch.html), [ONNX](https://github.com/onnx/models#models), [ONNX Runtime](https://github.com/microsoft/onnxruntime), and [Lightning AI](https://github.com/Lightning-AI/lightning/blob/master/docs/source-pytorch/advanced/post_training_quantization.rst).

## What's New
* [2024/10] [Transformers-like API](./docs/source/3x/transformers_like_api.md) for INT4 inference on Intel CPU and GPU (see the usage sketch after this list).
* [2024/07] As of the 3.0 release, the framework extension API is the recommended way to perform quantization.
* [2024/07] Performance optimizations and usability improvements on the [client side](./docs/source/3x/client_quant.md).
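As a quick illustration of the Transformers-like API noted above, below is a minimal sketch based on the [Transformers-like API doc](./docs/source/3x/transformers_like_api.md). The class and config names (`AutoModelForCausalLM`, `RtnConfig`) follow that doc, and the model ID is only an illustrative choice; verify both against the version you have installed.

```python
# Minimal sketch of INT4 weight-only inference with the Transformers-like API.
# Names follow ./docs/source/3x/transformers_like_api.md; the model ID below is
# an illustrative assumption, not part of the original README.
from transformers import AutoTokenizer
from neural_compressor.transformers import AutoModelForCausalLM, RtnConfig

model_name = "facebook/opt-125m"  # any causal LM from the Hugging Face Hub
woq_config = RtnConfig(bits=4)  # 4-bit round-to-nearest weight-only quantization

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```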
## Installation
Choose the framework dependencies to install based on your deployment environment.
### Install Framework
* [Install intel_extension_for_pytorch for CPU](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)
* [Install intel_extension_for_pytorch for Intel GPU](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)
* [Use Docker Image with torch installed for HPU](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#bare-metal-fresh-os-single-click)
  **Note**: There is a version mapping between Intel Neural Compressor and the Gaudi Software Stack; please refer to this [table](./docs/source/3x/gaudi_version_map.md) and make sure to use a matched combination.
* [Install torch for other platforms](https://pytorch.org/get-started/locally)
* [Install TensorFlow](https://www.tensorflow.org/install)

### Install Neural Compressor from PyPI
```
# Install 2.X API + Framework extension API + PyTorch dependency
pip install neural-compressor[pt]
# Install 2.X API + Framework extension API + TensorFlow dependency
pip install neural-compressor[tf]
```
**Note**: Further installation methods can be found in the [Installation Guide](./docs/source/installation_guide.md). Check out our [FAQ](./docs/source/faq.md) for more details.

## Getting Started
After successfully installing these packages, try your first quantization program. **The following example code demonstrates FP8 quantization**, which is supported by the Intel Gaudi2 AI Accelerator.
To try it on Intel Gaudi2, a Docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the [Gaudi Guide](https://docs.habana.ai/en/latest/Installation_Guide/Bare_Metal_Fresh_OS.html#launch-docker-image-that-was-built).

Run a container with an interactive shell ([more info](https://docs.habana.ai/en/latest/Installation_Guide/Additional_Installation/Docker_Installation.html#docker-installation)):
```
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.20.0/ubuntu24.04/habanalabs/pytorch-installer-2.6.0:latest
```
Run the example:
```python
import torch
import torchvision.models as models

from neural_compressor.torch.quantization import (
    FP8Config,
    prepare,
    convert,
)

model = models.resnet18()
qconfig = FP8Config(fp8_config="E4M3")
model = prepare(model, qconfig)

# User-defined calibration; below is a dummy calibration pass.
model(torch.randn(1, 3, 224, 224).to("hpu"))

model = convert(model)

output = model(torch.randn(1, 3, 224, 224).to("hpu")).to("cpu")
print(output.shape)
```
For more details, see the [FP8 quantization doc](./docs/source/3x/PT_FP8Quant.md).

**The following example code demonstrates weight-only large language model loading** on the Intel Gaudi2 AI Accelerator.
```python
import torch

from neural_compressor.torch.quantization import load

model_name = "TheBloke/Llama-2-7B-GPTQ"
model = load(
    model_name_or_path=model_name,
    format="huggingface",
    device="hpu",
    torch_dtype=torch.bfloat16,
)
```
**Note:** Intel Neural Compressor converts the model from auto-gptq format to HPU format on the first load and saves `hpu_model.safetensors` to the local cache directory for subsequent loads, so the first load may take a while.
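As a hedged follow-up to the loading example (not part of the original README), a short generation run can sanity-check the loaded model. It continues from the snippet above (`model`, `model_name`) and assumes `transformers` is installed and the Gaudi software stack is set up, e.g., inside the recommended Docker image.

```python
# Hedged usage sketch: generate a few tokens with the model loaded above.
# Assumes `transformers` is available and Gaudi (HPU) setup is already done.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)  # model_name from the snippet above
inputs = tokenizer("What is a language model?", return_tensors="pt").to("hpu")

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(generated[0], skip_special_tokens=True))
```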
## Documentation

<table class="docutils">
  <thead>
  <tr>
    <th colspan="8">Overview</th>
  </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="2" align="center"><a href="./docs/source/3x/design.md#architecture">Architecture</a></td>
      <td colspan="2" align="center"><a href="./docs/source/3x/design.md#workflows">Workflow</a></td>
      <td colspan="2" align="center"><a href="https://intel.github.io/neural-compressor/latest/docs/source/api-doc/apis.html">APIs</a></td>
      <td colspan="1" align="center"><a href="./docs/source/3x/llm_recipes.md">LLMs Recipes</a></td>
      <td colspan="1" align="center"><a href="./examples/3.x_api/README.md">Examples</a></td>
    </tr>
  </tbody>
  <thead>
    <tr>
      <th colspan="8">PyTorch Extension APIs</th>
    </tr>
  </thead>
  <tbody>
    <tr>
        <td colspan="2" align="center"><a href="./docs/source/3x/PyTorch.md">Overview</a></td>
        <td colspan="2" align="center"><a href="./docs/source/3x/PT_DynamicQuant.md">Dynamic Quantization</a></td>
        <td colspan="2" align="center"><a href="./docs/source/3x/PT_StaticQuant.md">Static Quantization</a></td>
        <td colspan="2" align="center"><a href="./docs/source/3x/PT_SmoothQuant.md">Smooth Quantization</a></td>
    </tr>
    <tr>
        <td colspan="2" align="center"><a href="./docs/source/3x/PT_WeightOnlyQuant.md">Weight-Only Quantization</a></td>
        <td colspan="2" align="center"><a href="./docs/source/3x/PT_FP8Quant.md">FP8 Quantization</a></td>
        <td colspan="2" align="center"><a href="./docs/source/3x/PT_MXQuant.md">MX Quantization</a></td>
        <td colspan="2" align="center"><a href="./docs/source/3x/PT_MixedPrecision.md">Mixed Precision</a></td>
    </tr>
  </tbody>
  <thead>
      <tr>
        <th colspan="8">TensorFlow Extension APIs</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td colspan="3" align="center"><a href="./docs/source/3x/TensorFlow.md">Overview</a></td>
          <td colspan="3" align="center"><a href="./docs/source/3x/TF_Quant.md">Static Quantization</a></td>
          <td colspan="2" align="center"><a href="./docs/source/3x/TF_SQ.md">Smooth Quantization</a></td>
      </tr>
  </tbody>
  <thead>
      <tr>
        <th colspan="8">Transformers-like APIs</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td colspan="8" align="center"><a href="./docs/source/3x/transformers_like_api.md">Overview</a></td>
      </tr>
  </tbody>
  <thead>
      <tr>
        <th colspan="8">Other Modules</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td colspan="4" align="center"><a href="./docs/source/3x/autotune.md">Auto Tune</a></td>
          <td colspan="4" align="center"><a href="./docs/source/3x/benchmark.md">Benchmark</a></td>
      </tr>
  </tbody>
</table>

> **Note**:
> As of the 3.0 release, we recommend using the 3.X API. Compression techniques applied during training, such as QAT, pruning, and distillation, are currently only available in the [2.X API](https://github.com/intel/neural-compressor/blob/master/docs/source/2x_user_guide.md).
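For the Auto Tune module linked in the table above, the following is a minimal, hedged sketch of accuracy-driven tuning with the PyTorch extension API. The entry points (`autotune`, `TuningConfig`, `RTNConfig`) follow the [Auto Tune doc](./docs/source/3x/autotune.md); treat the exact names, parameters, and the toy evaluation function as assumptions to verify against your installed version.

```python
# Hedged sketch of accuracy-driven tuning (see ./docs/source/3x/autotune.md).
# Entry points and parameters are assumptions to verify against the installed
# Intel Neural Compressor version.
import torch
import torchvision.models as models

from neural_compressor.torch.quantization import RTNConfig, TuningConfig, autotune

model = models.resnet18()


def eval_fn(q_model) -> float:
    # User-defined evaluation returning a scalar accuracy-like score;
    # a dummy forward pass stands in for a real validation loop here.
    with torch.no_grad():
        q_model(torch.randn(1, 3, 224, 224))
    return 1.0


# Explore several RTN weight-only settings; autotune evaluates each candidate
# with eval_fn and returns a model meeting the tuning criterion.
tune_config = TuningConfig(
    config_set=[RTNConfig(use_sym=[False, True], group_size=[32, 128])],
    max_trials=4,
)
best_model = autotune(model=model, tune_config=tune_config, eval_fn=eval_fn)
```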
## Selected Publications/Events

* arXiv: [Faster Inference of LLMs using FP8 on the Intel Gaudi](https://arxiv.org/abs/2503.09975) (Mar 2025)
* PyTorch landscape: [PyTorch general optimizations](https://landscape.pytorch.org/) (Mar 2025)
* Blog on SqueezeBits: [[Intel Gaudi] #4. FP8 Quantization](https://blog.squeezebits.com/intel-gaudi-4-fp8-quantization--40269) (Jan 2025)
* EMNLP'2024: [Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs](https://arxiv.org/abs/2309.05516) (Sep 2024)
* arXiv: [Efficient Post-training Quantization with FP8 Formats](https://arxiv.org/abs/2309.14592) (Sep 2023)
* arXiv: [Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs](https://arxiv.org/abs/2309.05516) (Sep 2023)

> **Note**:
> View the [Full Publication List](https://github.com/intel/neural-compressor/blob/master/docs/source/publication_list.md).

## Additional Content

* [Release Information](./docs/source/releases_info.md)
* [Contribution Guidelines](./docs/source/CONTRIBUTING.md)
* [Legal Information](./docs/source/legal_information.md)
* [Security Policy](SECURITY.md)

## Communication
- [GitHub Issues](https://github.com/intel/neural-compressor/issues): mainly for bug reports, new feature requests, and questions.
- [Email](mailto:inc.maintainers@intel.com): welcome to raise interesting research ideas on model compression techniques by email for collaboration.
- [Discord Channel](https://discord.com/invite/Wxk3J3ZJkU): join the Discord channel for more flexible technical discussion.
- [WeChat group](/docs/source/imgs/wechat_group.jpg): scan the QR code to join the technical discussion.