Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/intel/intel-extension-for-pytorch
A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms.
deep-learning intel machine-learning neural-network pytorch quantization
- Host: GitHub
- URL: https://github.com/intel/intel-extension-for-pytorch
- Owner: intel
- License: apache-2.0
- Created: 2020-04-15T23:35:29.000Z (over 4 years ago)
- Default Branch: main
- Last Pushed: 2024-10-22T05:57:13.000Z (2 months ago)
- Last Synced: 2024-10-29T15:11:08.996Z (about 2 months ago)
- Topics: deep-learning, intel, machine-learning, neural-network, pytorch, quantization
- Language: Python
- Homepage:
- Size: 101 MB
- Stars: 1,598
- Watchers: 37
- Forks: 247
- Open Issues: 180
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: .github/CODEOWNERS
- Security: SECURITY.md
Awesome Lists containing this project
- awesome-oneapi - intel-extension-for-pytorch - Intel Extension for PyTorch provides feature optimizations for an extra performance boost on Intel hardware, including CPUs and discrete GPUs, and offers easy GPU acceleration for Intel discrete GPUs with PyTorch. (Table of Contents / AI - Frameworks and Toolkits)
- StarryDivineSky - intel/intel-extension-for-pytorch - Optimizes on Intel CPUs using Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX), and leverages the Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. In addition, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device. The project also offers specific optimizations for large language models (LLMs) such as Llama and GPT-J, supporting multiple precisions including FP32, BF16, and INT8 quantization. (A01_Text Generation_Text Dialogue / Large language dialogue models and data)
README
Intel® Extension for PyTorch\*
==============================

**CPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main) | [🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started.html) | [📖Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/) | [🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.5.0%2Bcpu) | [💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm)
**GPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main) | [🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started.html) | [📖Documentations](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) | [🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) | [💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main/examples/gpu/llm)
Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch\* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch\* xpu device.
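A minimal usage sketch of the two paths described above, assuming an installed `intel_extension_for_pytorch` build matching your hardware (orientation only; the Quick Start links above are authoritative):

```python
import torch
import intel_extension_for_pytorch as ipex

# A toy eval-mode model standing in for any torch.nn.Module.
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
data = torch.randn(1, 64)

# CPU path: ipex.optimize applies operator and graph optimizations that
# use AVX-512 VNNI / AMX instructions where the hardware supports them.
model = ipex.optimize(model, dtype=torch.bfloat16)
with torch.no_grad(), torch.cpu.amp.autocast():
    model(data)

# GPU path: the extension exposes Intel discrete GPUs as the "xpu" device,
# so the usual .to(device) idiom applies (requires the xpu build).
if hasattr(torch, "xpu") and torch.xpu.is_available():
    model_xpu = model.to("xpu")
    with torch.no_grad():
        model_xpu(data.to("xpu"))
```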
## ipex.llm - Large Language Models (LLMs) Optimization
In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity. Large Language Models (LLMs) have emerged as the dominant models driving these GenAI applications. Starting from 2.1.0, Intel® Extension for PyTorch\* introduces specific optimizations for certain LLM models. Check [**LLM optimizations**](./examples/cpu/llm) for details.
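For orientation, a hedged sketch of applying these optimizations with `ipex.llm.optimize` on a model from the list below (exact arguments may vary by release; the linked LLM examples are authoritative):

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # any verified model from the table below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).eval()

# ipex.llm.optimize swaps in the LLM-specific kernels (fused ROPE,
# indirect-access KV cache, customized linear ops) for supported models.
model = ipex.llm.optimize(model, dtype=torch.bfloat16, inplace=True)

inputs = tokenizer("Intel AMX accelerates", return_tensors="pt")
with torch.no_grad(), torch.cpu.amp.autocast():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```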
### Optimized Model List
| MODEL FAMILY | MODEL NAME (Huggingface hub) | FP32 | BF16 | Static quantization INT8 | Weight only quantization INT8 | Weight only quantization INT4 |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|LLAMA| meta-llama/Llama-2-7b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLAMA| meta-llama/Llama-2-13b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLAMA| meta-llama/Llama-2-70b-hf | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLAMA| meta-llama/Meta-Llama-3-8B | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLAMA| meta-llama/Meta-Llama-3-70B | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLAMA| meta-llama/Meta-Llama-3.1-8B-Instruct | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLAMA| meta-llama/Llama-3.2-3B-Instruct | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLAMA| meta-llama/Llama-3.2-11B-Vision-Instruct | 🟩 | 🟩 | | 🟩 | |
|GPT-J| EleutherAI/gpt-j-6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|GPT-NEOX| EleutherAI/gpt-neox-20b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|DOLLY| databricks/dolly-v2-12b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|FALCON| tiiuae/falcon-7b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|FALCON| tiiuae/falcon-11b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|FALCON| tiiuae/falcon-40b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|OPT| facebook/opt-30b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|OPT| facebook/opt-1.3b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Bloom| bigscience/bloom-1b7 | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|CodeGen| Salesforce/codegen-2B-multi | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Baichuan| baichuan-inc/Baichuan2-7B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Baichuan| baichuan-inc/Baichuan2-13B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Baichuan| baichuan-inc/Baichuan-13B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|ChatGLM| THUDM/chatglm3-6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|ChatGLM| THUDM/chatglm2-6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|GPTBigCode| bigcode/starcoder | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|T5| google/flan-t5-xl | 🟩 | 🟩 | 🟩 | 🟩 | |
|MPT| mosaicml/mpt-7b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Mistral| mistralai/Mistral-7B-v0.1 | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Mixtral| mistralai/Mixtral-8x7B-v0.1 | 🟩 | 🟩 | | 🟩 | 🟩 |
|Stablelm| stabilityai/stablelm-2-1_6b | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Qwen| Qwen/Qwen-7B-Chat | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Qwen| Qwen/Qwen2-7B | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|LLaVA| liuhaotian/llava-v1.5-7b | 🟩 | 🟩 | | 🟩 | 🟩 |
|GIT| microsoft/git-base | 🟩 | 🟩 | | 🟩 | |
|Yuan| IEITYuan/Yuan2-102B-hf | 🟩 | 🟩 | | 🟩 | |
|Phi| microsoft/phi-2 | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Phi| microsoft/Phi-3-mini-4k-instruct | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Phi| microsoft/Phi-3-mini-128k-instruct | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Phi| microsoft/Phi-3-medium-4k-instruct | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Phi| microsoft/Phi-3-medium-128k-instruct | 🟩 | 🟩 | 🟩 | 🟩 | 🟩 |
|Whisper| openai/whisper-large-v2 | 🟩 | 🟩 | 🟩 | 🟩 | |

*Note*: The above verified models (including other models in the same model family, like "codellama/CodeLlama-7b-hf" from the LLAMA family) are well supported with all optimizations like indirect access KV cache, fused ROPE, and customized linear kernels.
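As a hedged illustration of the weight-only quantization columns in the table above (the helper names below follow the extension's quantization docs as best recalled; treat them as assumptions and defer to the LLM examples for the exact API):

```python
import torch
import intel_extension_for_pytorch as ipex
from intel_extension_for_pytorch.quantization import WoqWeightDtype  # assumed import path
from transformers import AutoModelForCausalLM

# Load a verified model in its original precision first.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b").eval()

# Assumed helper: builds a weight-only quantization recipe. INT8 weights map
# to the "Weight only quantization INT8" column; WoqWeightDtype.INT4 would
# target the INT4 column instead.
qconfig = ipex.quantization.get_weight_only_quant_qconfig_mapping(
    weight_dtype=WoqWeightDtype.INT8,
)

# ipex.llm.optimize applies the LLM kernels together with the WOQ recipe.
model = ipex.llm.optimize(model, quantization_config=qconfig, inplace=True)
```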
Work is in progress to better support the models in the table above with various data types, and more models will be optimized in the future.

Since release 2.3.0, Intel® Extension for PyTorch\* also introduces module-level optimization APIs (a prototype feature). The feature provides optimized alternatives for several commonly used LLM modules and functionalities, which can be used to optimize niche or customized LLMs.
Please read [**LLM module level optimization practice**](./examples/cpu/inference/python/llm-modeling) to better understand how to optimize your own LLM and achieve better performance.
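A minimal sketch of the module-level idea, assuming the `ipex.llm.modules` names from the extension's docs (these are prototype APIs, so verify against the linked practice guide):

```python
import torch
import intel_extension_for_pytorch as ipex

class TinyMLP(torch.nn.Module):
    """A toy feed-forward block like those found in custom LLMs."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.fc = torch.nn.Linear(hidden, hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unfused reference path: a linear projection followed by GELU.
        return torch.nn.functional.gelu(self.fc(x))

mlp = TinyMLP().eval()

# Assumed API: ipex.llm.modules.LinearGelu wraps an existing nn.Linear and
# runs the matmul plus the GELU in one customized fused kernel.
fused = ipex.llm.modules.LinearGelu(mlp.fc)

x = torch.randn(1, 8, 64)
with torch.no_grad():
    print(torch.allclose(fused(x), mlp(x), atol=1e-3))
```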
## Support
The team tracks bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues/). Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.
## License
_Apache License_, Version _2.0_, as found in the [LICENSE](https://github.com/intel/intel-extension-for-pytorch/blob/main/LICENSE) file.
## Security
See Intel's [Security Center](https://www.intel.com/content/www/us/en/security-center/default.html)
for information on how to report a potential security issue or vulnerability.

See also: [Security Policy](SECURITY.md)