{"id":13476802,"url":"https://github.com/huggingface/optimum-intel","last_synced_at":"2025-10-14T15:29:40.758Z","repository":{"id":37524155,"uuid":"496192011","full_name":"huggingface/optimum-intel","owner":"huggingface","description":"🤗 Optimum Intel: Accelerate inference with Intel optimization tools","archived":false,"fork":false,"pushed_at":"2025-10-03T17:02:54.000Z","size":18672,"stargazers_count":498,"open_issues_count":61,"forks_count":144,"subscribers_count":37,"default_branch":"main","last_synced_at":"2025-10-03T17:48:27.626Z","etag":null,"topics":["diffusers","distillation","inference","intel","onnx","openvino","optimization","pruning","quantization","transformers"],"latest_commit_sha":null,"homepage":"https://huggingface.co/docs/optimum-intel/en/index","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/huggingface.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2022-05-25T10:56:08.000Z","updated_at":"2025-10-03T11:13:23.000Z","dependencies_parsed_at":"2023-12-07T12:26:01.681Z","dependency_job_id":"e4233562-aeb3-4ea7-81a0-a7b9e431692c","html_url":"https://github.com/huggingface/optimum-intel","commit_stats":null,"previous_names":[],"tags_count":60,"template":false,"template_full_name":null,"purl":"pkg:github/huggingface/optimum-intel","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Foptimum-intel","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/repositories/huggingface%2Foptimum-intel/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Foptimum-intel/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Foptimum-intel/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/huggingface","download_url":"https://codeload.github.com/huggingface/optimum-intel/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/huggingface%2Foptimum-intel/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279019314,"owners_count":26086711,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-14T02:00:06.444Z","response_time":60,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["diffusers","distillation","inference","intel","onnx","openvino","optimization","pruning","quantization","transformers"],"created_at":"2024-07-31T16:01:34.792Z","updated_at":"2025-10-14T15:29:40.753Z","avatar_url":"https://github.com/huggingface.png","language":"Jupyter Notebook","readme":"\u003cp align=\"center\"\u003e\n    \u003cimg src=\"https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/logo/hf_intel_logo.png\" /\u003e\n\u003c/p\u003e\n\n# Optimum Intel\n\n🤗 Optimum Intel is the interface between the 🤗 Transformers and Diffusers libraries and the 
different tools and libraries provided by Intel to accelerate end-to-end pipelines on Intel architectures.\n\n[Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction) is an open-source library which provides optimizations like faster attention and operator fusion.\n\nIntel [Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) is an open-source library enabling the usage of the most popular compression techniques such as quantization, pruning and knowledge distillation. It supports automatic accuracy-driven tuning strategies so that users can easily generate a quantized model. Users can apply static, dynamic and quantization-aware training approaches while specifying an expected accuracy criterion. It also supports different weight pruning techniques, enabling the creation of a pruned model that meets a predefined sparsity target.\n\n[OpenVINO](https://docs.openvino.ai) is an open-source toolkit that enables high-performance inference for Intel CPUs, GPUs, and special DL inference accelerators ([see](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) the full list of supported devices). It is supplied with a set of tools to optimize your models with compression techniques such as quantization, pruning and knowledge distillation. 
Optimum Intel provides a simple interface to optimize your Transformers and Diffusers models, convert them to the OpenVINO Intermediate Representation (IR) format and run inference using OpenVINO Runtime.\n\n\n## Installation\n\nTo install the latest release of 🤗 Optimum Intel with the corresponding required dependencies, you can use `pip` as follows:\n\n| Accelerator                                                                                                      | Installation                                                         |\n|:-----------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------|\n| [Intel Neural Compressor](https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html) | `pip install --upgrade --upgrade-strategy eager \"optimum[neural-compressor]\"`  |\n| [OpenVINO](https://docs.openvino.ai)                                                                             | `pip install --upgrade --upgrade-strategy eager \"optimum[openvino]\"`           |\n| [Intel Extension for PyTorch](https://intel.github.io/intel-extension-for-pytorch/#introduction)                 | `pip install --upgrade --upgrade-strategy eager \"optimum[ipex]\"`               |\n\nThe `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.\n\nWe recommend creating a [virtual environment](https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment) and upgrading\npip with `python -m pip install --upgrade pip`.\n\nOptimum Intel is a fast-moving project, and you may want to install from source with the following command:\n\n```bash\npython -m pip install git+https://github.com/huggingface/optimum-intel.git\n```\n\nor to install from source including dependencies:\n\n```bash\npython -m pip install 
\"optimum-intel[extras]\"@git+https://github.com/huggingface/optimum-intel.git\n```\n\nwhere `extras` can be one or more of `ipex`, `neural-compressor`, `openvino`.\n\n# Quick tour\n\n## Neural Compressor\n\nDynamic quantization can be used through the Optimum CLI:\n\n```bash\noptimum-cli inc quantize --model distilbert-base-cased-distilled-squad --output ./quantized_distilbert\n```\nNote that quantization is currently only supported for CPUs, so GPUs / CUDA will not be used in this example.\n\nYou can find many more quantized models hosted on the Hub under the [Intel organization](https://huggingface.co/Intel).\n\nFor more details on the supported compression techniques, please refer to the [documentation](https://huggingface.co/docs/optimum-intel/en/neural_compressor/optimization).\n\n## OpenVINO\n\nBelow are examples of how to use OpenVINO and its [NNCF](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/compressing-models-during-training.html) framework to accelerate inference.\n\n#### Export:\n\nYou can export your model to the [OpenVINO IR](https://docs.openvino.ai/2024/documentation/openvino-ir-format.html) format with the CLI:\n\n```bash\noptimum-cli export openvino --model meta-llama/Meta-Llama-3-8B ov_llama/\n```\n\nYou can also apply 8-bit weight-only quantization when exporting your model: the model's linear, embedding and convolution weights will be quantized to INT8, while activations will be kept in floating-point precision.\n\n```bash\noptimum-cli export openvino --model meta-llama/Meta-Llama-3-8B --weight-format int8 ov_llama_int8/\n```\n\nQuantization in hybrid mode can be applied to a Stable Diffusion pipeline during model export. This involves applying hybrid post-training quantization to the UNet model and weight-only quantization for the rest of the pipeline components. 
In hybrid mode, weights in MatMul and Embedding layers are quantized, as well as activations of other layers.\n\n```bash\noptimum-cli export openvino --model stabilityai/stable-diffusion-2-1 --dataset conceptual_captions --weight-format int8 ov_model_sd/\n```\n\nTo learn how to apply quantization on both weights and activations, see the [documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization).\n\n#### Inference:\n\nTo load a model and run inference with OpenVINO Runtime, you can just replace your `AutoModelForXxx` class with the corresponding `OVModelForXxx` class.\n\n```diff\n- from transformers import AutoModelForSeq2SeqLM\n+ from optimum.intel import OVModelForSeq2SeqLM\n  from transformers import AutoTokenizer, pipeline\n\n  model_id = \"echarlaix/t5-small-openvino\"\n- model = AutoModelForSeq2SeqLM.from_pretrained(model_id)\n+ model = OVModelForSeq2SeqLM.from_pretrained(model_id)\n  tokenizer = AutoTokenizer.from_pretrained(model_id)\n  pipe = pipeline(\"translation_en_to_fr\", model=model, tokenizer=tokenizer)\n  results = pipe(\"He never went out without a book under his arm, and he often came back with two.\")\n\n  [{'translation_text': \"Il n'est jamais sorti sans un livre sous son bras, et il est souvent revenu avec deux.\"}]\n```\n\n#### Quantization:\n\nPost-training static quantization can also be applied. 
Here is an example of how to apply static quantization on a Whisper model, using the [LibriSpeech](https://huggingface.co/datasets/openslr/librispeech_asr) dataset for the calibration step.\n\n```python\nfrom optimum.intel import OVModelForSpeechSeq2Seq, OVQuantizationConfig\n\nmodel_id = \"openai/whisper-tiny\"\nq_config = OVQuantizationConfig(dtype=\"int8\", dataset=\"librispeech\", num_samples=50)\nq_model = OVModelForSpeechSeq2Seq.from_pretrained(model_id, quantization_config=q_config)\n\n# The directory where the quantized model will be saved\nsave_dir = \"nncf_results\"\nq_model.save_pretrained(save_dir)\n```\nYou can find more information in the [documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization).\n\n\n## IPEX\nTo load your IPEX model, you can just replace your `AutoModelForXxx` class with the corresponding `IPEXModelForXxx` class. It will load a PyTorch checkpoint and apply IPEX optimizations, replacing supported operators with customized IPEX ones.\n```diff\n  import torch\n  from transformers import AutoTokenizer, pipeline\n- from transformers import AutoModelForCausalLM\n+ from optimum.intel import IPEXModelForCausalLM\n\n\n  model_id = \"gpt2\"\n- model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)\n+ model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)\n  tokenizer = AutoTokenizer.from_pretrained(model_id)\n  pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\n  results = pipe(\"He's a dreadful magician and\")\n```\n\nFor more details, please refer to the [documentation](https://intel.github.io/intel-extension-for-pytorch/#introduction).\n\n\n## Running the examples\n\nCheck out the [`examples`](https://github.com/huggingface/optimum-intel/tree/main/examples) and [`notebooks`](https://github.com/huggingface/optimum-intel/tree/main/notebooks) directories to see how 🤗 Optimum Intel can be used to optimize models and accelerate inference.\n\nDo not forget to install the 
requirements for every example:\n\n```bash\ncd \u003cexample-folder\u003e\npip install -r requirements.txt\n```\n\n\n## Gaudi\n\nTo train your model on [Intel Gaudi AI Accelerators (HPU)](https://docs.habana.ai/en/latest/index.html), check out [Optimum Habana](https://github.com/huggingface/optimum-habana), which provides a set of tools enabling easy model loading, training and inference in single- and multi-HPU settings for different downstream tasks. After training your model, feel free to submit it to the Intel [leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard), which is designed to evaluate, score, and rank open-source LLMs that have been pre-trained or fine-tuned on Intel hardware. Models submitted to the leaderboard will be evaluated on the Intel Developer Cloud. The evaluation platform consists of Gaudi Accelerators and Xeon CPUs running benchmarks from the Eleuther AI Language Model Evaluation Harness.\n","funding_links":[],"categories":["Table of Contents","Jupyter Notebook"],"sub_categories":["AI - Frameworks and Toolkits"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhuggingface%2Foptimum-intel","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhuggingface%2Foptimum-intel","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhuggingface%2Foptimum-intel/lists"}