{"id":13478160,"url":"https://github.com/intel/intel-extension-for-transformers","last_synced_at":"2025-02-24T02:31:22.903Z","repository":{"id":63271868,"uuid":"564623890","full_name":"intel/intel-extension-for-transformers","owner":"intel","description":"⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡","archived":true,"fork":false,"pushed_at":"2024-10-08T21:09:46.000Z","size":613403,"stargazers_count":2133,"open_issues_count":56,"forks_count":211,"subscribers_count":28,"default_branch":"main","last_synced_at":"2024-10-29T15:34:42.493Z","etag":null,"topics":["4-bits","autoround","chatbot","chatpdf","gaudi3","habana","intel-optimized-llamacpp","large-language-model","llm-cpu","llm-inference","neural-chat","neural-chat-7b","rag","retrieval","speculative-decoding","streamingllm"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/intel.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"docs/code_of_conduct.md","threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2022-11-11T05:32:27.000Z","updated_at":"2024-10-27T06:56:35.000Z","dependencies_parsed_at":"2023-10-17T11:56:17.332Z","dependency_job_id":"0aad1b66-59ee-4255-a06d-5d8e2674b22c","html_url":"https://github.com/intel/intel-extension-for-transformers","commit_stats":null,"previous_names":[],"tags_count":21,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fintel-extension-for-transformers","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fintel-extension-for-transformers/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fintel-extension-for-transformers/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fintel-extension-for-transformers/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/intel","download_url":"https://codeload.github.com/intel/intel-extension-for-transformers/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":240405996,"owners_count":19796282,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["4-bits","autoround","chatbot","chatpdf","gaudi3","habana","intel-optimized-llamacpp","large-language-model","llm-cpu","llm-inference","neural-chat","neural-chat-7b","rag","retrieval","speculative-decoding","streamingllm"],"created_at":"2024-07-31T16:01:53.265Z","updated_at":"2025-02-24T02:31:17.876Z","avatar_url":"https://github.com/intel.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n  \nIntel® Extension for 
<h3>An Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere</h3>

[![](https://dcbadge.vercel.app/api/server/Wxk3J3ZJkU?compact=true&style=flat-square)](https://discord.gg/Wxk3J3ZJkU)
[![Release Notes](https://img.shields.io/github/v/release/intel/intel-extension-for-transformers)](https://github.com/intel/intel-extension-for-transformers/releases)

[🏭Architecture](./docs/architecture.md)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💬NeuralChat](./intel_extension_for_transformers/neural_chat)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[😃Inference on CPU](https://github.com/intel/neural-speed/tree/main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[😃Inference on GPU](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/weightonlyquant.md#examples-for-gpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻Examples](./docs/examples.md)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentation](https://intel.github.io/intel-extension-for-transformers/latest/docs/Welcome.html)
</div>

## 🚀Latest News
* [2024/06] Supported Qwen2; please find the details in this [Blog](https://medium.com/intel-analytics-software/accelerating-qwen2-models-with-intel-extension-for-transformers-99403de82f68).
* [2024/04] Supported the launch of **[Meta Llama 3](https://llama.meta.com/llama3/)**, the next generation of Llama models. Check out [Accelerate Meta* Llama 3 with Intel AI Solutions](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-meta-llama3-with-intel-ai-solutions.html).
* [2024/04] Demonstrated the chatbot on 4th, 5th, and 6th Gen Xeon Scalable Processors in [**Intel Vision Pat's Keynote**](https://youtu.be/QB7FoIpx8os?t=2280).
* [2024/04] Supported **INT4 inference on Intel Meteor Lake**.
* [2024/04] Achieved a 1.8x performance improvement in GPT-J inference on the 5th Gen Xeon MLPerf v4.0 submission compared to v3.1. [News](https://www.intel.com/content/www/us/en/newsroom/news/new-gaudi-2-xeon-performance-ai-inference.html#gs.71ti1m), [Results](https://mlcommons.org/2024/03/mlperf-inference-v4/).
* [2024/01] Supported **INT4 inference on Intel GPUs**, including Intel Data Center GPU Max Series (e.g., PVC) and Intel Arc A-Series (e.g., ARC). Check out the [examples](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/weightonlyquant.md#examples-for-gpu) and [scripts](https://github.com/intel/intel-extension-for-transformers/blob/main/examples/huggingface/pytorch/text-generation/quantization/run_generation_gpu_woq.py).
* [2024/01] Demonstrated **Intel Hybrid Copilot** in the **CES 2024 Great Minds** session "[Bringing the Limitless Potential of AI Everywhere](https://youtu.be/70J3uO3eLZA?t=1348)".
* [2023/12] Supported **QLoRA on CPUs** to make fine-tuning on client CPUs possible. Check out the [blog](https://medium.com/@NeuralCompressor/creating-your-own-llms-on-your-laptop-a08cc4f7c91b) and [readme](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/qloracpu.md) for more details.
* [2023/11] Released the **top-1 7B-sized LLM** [**NeuralChat-v3-1**](https://huggingface.co/Intel/neural-chat-7b-v3-1) and a [DPO dataset](https://huggingface.co/datasets/Intel/orca_dpo_pairs).
Check out the [nice video](https://www.youtube.com/watch?v=bWhZ1u_1rlc) published by [WorldofAI](https://www.youtube.com/@intheworldofai).
* [2023/11] Published a **4-bit chatbot demo** (based on NeuralChat) on [Intel Hugging Face Space](https://huggingface.co/spaces/Intel/NeuralChat-ICX-INT4). Feel free to give it a try! To set up the demo locally, please follow the [instructions](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/docs/notebooks/setup_text_chatbot_service_on_spr.ipynb).

---
<div align="left">

## 🏃Installation
### Quick Install from PyPI
```bash
pip install intel-extension-for-transformers
```
> For system requirements and other installation tips, please refer to the [Installation Guide](./docs/installation.md).

## 🌟Introduction
Intel® Extension for Transformers is an innovative toolkit designed to accelerate GenAI/LLM everywhere with the optimal performance of Transformer-based models on various Intel platforms, including Intel Gaudi2, Intel CPUs, and Intel GPUs. The toolkit provides the following key features and examples:

*  Seamless user experience of model compression on Transformer-based models by extending the [Hugging Face transformers](https://github.com/huggingface/transformers) APIs and leveraging [Intel® Neural Compressor](https://github.com/intel/neural-compressor)

*  Advanced software optimizations and a unique compression-aware runtime (released with NeurIPS 2022's papers [Fast DistilBERT on CPUs](https://arxiv.org/abs/2211.07715) and [QuaLA-MiniLM: a Quantized Length Adaptive MiniLM](https://arxiv.org/abs/2210.17114), and NeurIPS 2021's paper [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754))

*  Optimized Transformer-based model packages such as [Stable Diffusion](examples/huggingface/pytorch/text-to-image/deployment/stable_diffusion), [GPT-J-6B](examples/huggingface/pytorch/text-generation/deployment), [GPT-NeoX](examples/huggingface/pytorch/language-modeling/quantization#2-validated-model-list), [BLOOM-176B](examples/huggingface/pytorch/language-modeling/inference#BLOOM-176B), [T5](examples/huggingface/pytorch/summarization/quantization#2-validated-model-list), and [Flan-T5](examples/huggingface/pytorch/summarization/quantization#2-validated-model-list), and end-to-end workflows such as [SetFit-based text classification](docs/tutorials/pytorch/text-classification/SetFit_model_compression_AGNews.ipynb) and [document-level sentiment analysis (DLSA)](workflows/dlsa)

*  [NeuralChat](intel_extension_for_transformers/neural_chat), a customizable chatbot framework for creating your own chatbot within minutes by leveraging a rich set of [plugins](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/docs/advanced_features.md) such as [Knowledge Retrieval](./intel_extension_for_transformers/neural_chat/pipeline/plugins/retrieval/README.md), [Speech Interaction](./intel_extension_for_transformers/neural_chat/pipeline/plugins/audio/README.md), [Query Caching](./intel_extension_for_transformers/neural_chat/pipeline/plugins/caching/README.md), and [Security Guardrail](./intel_extension_for_transformers/neural_chat/pipeline/plugins/security/README.md). This framework supports Intel Gaudi2/CPU/GPU.
*  [Inference](https://github.com/intel/neural-speed/tree/main) of Large Language Models (LLMs) in pure C/C++ with weight-only quantization kernels for Intel CPUs and Intel GPUs (TBD), supporting [GPT-NeoX](https://github.com/intel/neural-speed/tree/main/neural_speed/models/gptneox), [LLAMA](https://github.com/intel/neural-speed/tree/main/neural_speed/models/llama), [MPT](https://github.com/intel/neural-speed/tree/main/neural_speed/models/mpt), [FALCON](https://github.com/intel/neural-speed/tree/main/neural_speed/models/falcon), [BLOOM-7B](https://github.com/intel/neural-speed/tree/main/neural_speed/models/bloom), [OPT](https://github.com/intel/neural-speed/tree/main/neural_speed/models/opt), [ChatGLM2-6B](https://github.com/intel/neural-speed/tree/main/neural_speed/models/chatglm), [GPT-J-6B](https://github.com/intel/neural-speed/tree/main/neural_speed/models/gptj), and [Dolly-v2-3B](https://github.com/intel/neural-speed/tree/main/neural_speed/models/gptneox). Supports the AMX, VNNI, AVX512F, and AVX2 instruction sets. We've boosted the performance of Intel CPUs, with a particular focus on the 4th generation Intel Xeon Scalable processor, codenamed [Sapphire Rapids](https://www.intel.com/content/www/us/en/products/docs/processors/xeon-accelerated/4th-gen-xeon-scalable-processors.html).

## 🔓Validated Hardware
<table>
	<tbody>
		<tr>
			<td rowspan="2">Hardware</td>
			<td colspan="2">Fine-Tuning</td>
			<td colspan="2">Inference</td>
		</tr>
		<tr>
			<td>Full</td>
			<td>PEFT</td>
			<td>8-bit</td>
			<td>4-bit</td>
		</tr>
		<tr>
			<td>Intel Gaudi2</td>
			<td>✔</td>
			<td>✔</td>
			<td>WIP (FP8)</td>
			<td>-</td>
		</tr>
		<tr>
			<td>Intel Xeon Scalable Processors</td>
			<td>✔</td>
			<td>✔</td>
			<td>✔ (INT8, FP8)</td>
			<td>✔ (INT4, FP4, NF4)</td>
		</tr>
		<tr>
			<td>Intel Xeon CPU Max Series</td>
			<td>✔</td>
			<td>✔</td>
			<td>✔ (INT8, FP8)</td>
			<td>✔ (INT4, FP4, NF4)</td>
		</tr>
		<tr>
			<td>Intel Data Center GPU Max Series</td>
			<td>WIP</td>
			<td>WIP</td>
			<td>WIP (INT8)</td>
			<td>✔ (INT4)</td>
		</tr>
		<tr>
			<td>Intel Arc A-Series</td>
			<td>-</td>
			<td>-</td>
			<td>WIP (INT8)</td>
			<td>✔ (INT4)</td>
		</tr>
		<tr>
			<td>Intel Core Processors</td>
			<td>-</td>
			<td>✔</td>
			<td>✔ (INT8, FP8)</td>
			<td>✔ (INT4, FP4, NF4)</td>
		</tr>
	</tbody>
</table>

> In the table above, "-" means not applicable or not started yet.
## 🔓Validated Software
<table>
	<tbody>
		<tr>
			<td rowspan="2">Software</td>
			<td colspan="2">Fine-Tuning</td>
			<td colspan="2">Inference</td>
		</tr>
		<tr>
			<td>Full</td>
			<td>PEFT</td>
			<td>8-bit</td>
			<td>4-bit</td>
		</tr>
		<tr>
			<td>PyTorch</td>
			<td>2.0.1+cpu,<br/> 2.0.1a0 (gpu)</td>
			<td>2.0.1+cpu,<br/> 2.0.1a0 (gpu)</td>
			<td>2.1.0+cpu,<br/> 2.0.1a0 (gpu)</td>
			<td>2.1.0+cpu,<br/> 2.0.1a0 (gpu)</td>
		</tr>
		<tr>
			<td>Intel® Extension for PyTorch</td>
			<td>2.1.0+cpu,<br/> 2.0.110+xpu</td>
			<td>2.1.0+cpu,<br/> 2.0.110+xpu</td>
			<td>2.1.0+cpu,<br/> 2.0.110+xpu</td>
			<td>2.1.0+cpu,<br/> 2.0.110+xpu</td>
		</tr>
		<tr>
			<td>Transformers</td>
			<td>4.35.2 (CPU),<br/> 4.31.0 (Intel GPU)</td>
			<td>4.35.2 (CPU),<br/> 4.31.0 (Intel GPU)</td>
			<td>4.35.2 (CPU),<br/> 4.31.0 (Intel GPU)</td>
			<td>4.35.2 (CPU),<br/> 4.31.0 (Intel GPU)</td>
		</tr>
		<tr>
			<td>Synapse AI</td>
			<td>1.13.0</td>
			<td>1.13.0</td>
			<td>1.13.0</td>
			<td>1.13.0</td>
		</tr>
		<tr>
			<td>Gaudi2 driver</td>
			<td>1.13.0-ee32e42</td>
			<td>1.13.0-ee32e42</td>
			<td>1.13.0-ee32e42</td>
			<td>1.13.0-ee32e42</td>
		</tr>
		<tr>
			<td>intel-level-zero-gpu</td>
			<td>1.3.26918.50-736~22.04</td>
			<td>1.3.26918.50-736~22.04</td>
			<td>1.3.26918.50-736~22.04</td>
			<td>1.3.26918.50-736~22.04</td>
		</tr>
	</tbody>
</table>

> Please refer to the detailed requirements for [CPU](intel_extension_for_transformers/neural_chat/requirements_cpu.txt), [Gaudi2](intel_extension_for_transformers/neural_chat/requirements_hpu.txt), and [Intel GPU](intel_extension_for_transformers/neural_chat/requirements_xpu.txt).

## 🔓Validated OS
Ubuntu 20.04/22.04, CentOS 8.

## 🌱Getting Started

### Chatbot
Below is sample code to create your chatbot. See more [examples](intel_extension_for_transformers/neural_chat/docs/full_notebooks.md).
#### Serving (OpenAI-compatible RESTful APIs)
NeuralChat provides OpenAI-compatible RESTful APIs for chat, so you can use NeuralChat as a drop-in replacement for the OpenAI APIs.
You can start the NeuralChat server using either a shell command or Python code.

```shell
# Shell command
neuralchat_server start --config_file ./server/config/neuralchat.yaml
```

```python
# Python code
from intel_extension_for_transformers.neural_chat import NeuralChatServerExecutor
server_executor = NeuralChatServerExecutor()
server_executor(config_file="./server/config/neuralchat.yaml", log_file="./neuralchat.log")
```

The NeuralChat service is accessible through the [OpenAI client library](https://github.com/openai/openai-python), `curl` commands, and the `requests` library. See more in [NeuralChat](intel_extension_for_transformers/neural_chat/README.md).
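For instance, here is a minimal sketch of querying the running service with `requests`. The host, port, and model name below are assumptions: they must match your `neuralchat.yaml` deployment, and the request body follows the OpenAI chat-completions schema that the service is compatible with.

```python
import requests

# Assumed host/port; use the values configured in ./server/config/neuralchat.yaml.
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "Intel/neural-chat-7b-v3-1",  # assumed; match the model your server loaded
    "messages": [
        {"role": "user", "content": "Tell me about Intel Xeon Scalable Processors."}
    ],
}
response = requests.post(url, json=payload, timeout=60)
print(response.json()["choices"][0]["message"]["content"])
```

The official OpenAI Python client should work the same way if you point its `base_url` at the server address above.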
#### Offline

```python
from intel_extension_for_transformers.neural_chat import build_chatbot
chatbot = build_chatbot()
response = chatbot.predict("Tell me about Intel Xeon Scalable Processors.")
```

### Transformers-based extension APIs
Below is sample code using the extended Transformers APIs. See more [examples](https://github.com/intel/neural-speed/tree/main).

#### INT4 Inference (CPU)
We encourage you to install [NeuralSpeed](https://github.com/intel/neural-speed) to get the latest features (e.g., GGUF support) of LLM low-bit inference on CPUs. You may also want to use v1.3 without NeuralSpeed by following this [document](https://github.com/intel/intel-extension-for-transformers/tree/v1.3/intel_extension_for_transformers/llm/runtime/graph/README.md).

```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
model_name = "Intel/neural-chat-7b-v3-1"
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True)
outputs = model.generate(inputs)
```

You can also load GGUF-format models from Hugging Face; only the Q4_0, Q5_0, and Q8_0 GGUF formats are supported for now.
```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

# Specify the GGUF repo on Hugging Face
model_name = "TheBloke/Llama-2-7B-Chat-GGUF"
# Download the specific GGUF model file from the above repo
gguf_file = "llama-2-7b-chat.Q4_0.gguf"
# Make sure you have been granted access to this model on Hugging Face
tokenizer_name = "meta-llama/Llama-2-7b-chat-hf"
prompt = "Once upon a time, there existed a little girl,"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids

model = AutoModelForCausalLM.from_pretrained(model_name, gguf_file=gguf_file)
outputs = model.generate(inputs)
```

You can also load PyTorch models from ModelScope.
> **Note**: requires the `modelscope` package
```python
from transformers import TextStreamer
from modelscope import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM
model_name = "qwen/Qwen-7B"  # ModelScope model_id or local model
prompt = "Once upon a time, there existed a little girl,"

model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, model_hub="modelscope")
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```

You can also load low-bit models quantized with the GPTQ/AWQ/RTN/AutoRound algorithms.
```python
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

# Hugging Face GPTQ/AWQ model, or a local quantized model
model_name = "MODEL_NAME_OR_PATH"
prompt = "Once upon a time, a little girl"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
outputs = model.generate(inputs)
```

#### INT4 Inference (GPU)
```python
import intel_extension_for_pytorch as ipex
from intel_extension_for_transformers.transformers.modeling import AutoModelForCausalLM
from transformers import AutoTokenizer
import torch

device_map = "xpu"
model_name = "Qwen/Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
prompt = "Once upon a time, there existed a little girl,"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(device_map)

model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True,
                                             device_map=device_map, load_in_4bit=True)

model = ipex.optimize_transformers(model, inplace=True, dtype=torch.float16, quantization_config=True, device=device_map)

output = model.generate(inputs)
```
> Note: Please refer to the [example](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/weightonlyquant.md#examples-for-gpu) and [script](https://github.com/intel/intel-extension-for-transformers/blob/main/examples/huggingface/pytorch/text-generation/quantization/run_generation_gpu_woq.py) for more details.
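Note that `generate` in the examples above returns token IDs rather than text (the GPU example names the result `output`). A minimal sketch of decoding them with the standard Hugging Face tokenizer API:

```python
# Decode generated token IDs back to text; `tokenizer` and `outputs`
# come from any of the examples above.
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```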
### LangChain-based extension APIs
Below is sample code using the extended LangChain APIs. See more [examples](intel_extension_for_transformers/neural_chat/pipeline/plugins/retrieval/README.md).

```python
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.chains import RetrievalQA
from langchain_core.vectorstores import VectorStoreRetriever
from intel_extension_for_transformers.langchain.vectorstores import Chroma
retriever = VectorStoreRetriever(vectorstore=Chroma(...))
retrievalQA = RetrievalQA.from_llm(llm=HuggingFacePipeline(...), retriever=retriever)
```
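To make the elided constructor arguments concrete, here is a hedged end-to-end sketch. It assumes the ITREX `Chroma` mirrors the standard LangChain `Chroma.from_documents` interface; the documents, embedding model, and generation model are illustrative placeholders rather than validated choices.

```python
from transformers import pipeline
from langchain.chains import RetrievalQA
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from langchain_core.documents import Document
from langchain_core.vectorstores import VectorStoreRetriever
from intel_extension_for_transformers.langchain.vectorstores import Chroma

# Illustrative corpus; in practice, load and split your own documents.
docs = [Document(page_content="Intel Xeon Scalable Processors are Intel's server CPU line.")]

# Assumption: the ITREX Chroma keeps LangChain's `from_documents` classmethod.
vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=HuggingFaceEmbeddings(model_name="BAAI/bge-base-en-v1.5"),  # placeholder embedder
)
retriever = VectorStoreRetriever(vectorstore=vectorstore)

# Any Hugging Face text-generation pipeline can back the LLM; the model is a placeholder.
llm = HuggingFacePipeline(
    pipeline=pipeline("text-generation", model="Intel/neural-chat-7b-v3-1", max_new_tokens=128)
)
retrievalQA = RetrievalQA.from_llm(llm=llm, retriever=retriever)
print(retrievalQA.invoke({"query": "What are Intel Xeon Scalable Processors?"})["result"])
```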
## 🎯Validated Models
You can access the validated models, accuracy, and performance from the [Release data](./docs/release_data.md) or the [Medium blog](https://medium.com/@NeuralCompressor/llm-performance-of-intel-extension-for-transformers-f7d061556176).

## 📖Documentation
<table>
<thead>
  <tr>
    <th colspan="8" align="center">OVERVIEW</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td colspan="4" align="center"><a href="intel_extension_for_transformers/neural_chat">NeuralChat</a></td>
    <td colspan="4" align="center"><a href="https://github.com/intel/neural-speed/tree/main">Neural Speed</a></td>
  </tr>
  <tr>
    <th colspan="8" align="center">NEURALCHAT</th>
  </tr>
  <tr>
    <td colspan="2" align="center"><a href="intel_extension_for_transformers/neural_chat/docs/notebooks/deploy_chatbot_on_spr.ipynb">Chatbot on Intel CPU</a></td>
    <td colspan="3" align="center"><a href="intel_extension_for_transformers/neural_chat/docs/notebooks/deploy_chatbot_on_xpu.ipynb">Chatbot on Intel GPU</a></td>
    <td colspan="3" align="center"><a href="intel_extension_for_transformers/neural_chat/docs/notebooks/deploy_chatbot_on_habana_gaudi.ipynb">Chatbot on Gaudi</a></td>
  </tr>
  <tr>
    <td colspan="4" align="center"><a href="intel_extension_for_transformers/neural_chat/examples/deployment/talkingbot/pc/build_talkingbot_on_pc.ipynb">Chatbot on Client</a></td>
    <td colspan="4" align="center"><a href="intel_extension_for_transformers/neural_chat/docs/full_notebooks.md">More Notebooks</a></td>
  </tr>
  <tr>
    <th colspan="8" align="center">NEURAL SPEED</th>
  </tr>
  <tr>
    <td colspan="2" align="center"><a href="https://github.com/intel/neural-speed/tree/main/README.md">Neural Speed</a></td>
    <td colspan="2" align="center"><a href="https://github.com/intel/neural-speed/tree/main/README.md#2-neural-speed-straight-forward">Streaming LLM</a></td>
    <td colspan="2" align="center"><a href="https://github.com/intel/neural-speed/tree/main/neural_speed/core#support-matrix">Low Precision Kernels</a></td>
    <td colspan="2" align="center"><a href="https://github.com/intel/neural-speed/tree/main/docs/tensor_parallelism.md">Tensor Parallelism</a></td>
  </tr>
  <tr>
    <th colspan="8" align="center">LLM COMPRESSION</th>
  </tr>
  <tr>
    <td colspan="2" align="center"><a href="docs/smoothquant.md">SmoothQuant (INT8)</a></td>
    <td colspan="3" align="center"><a href="docs/weightonlyquant.md">Weight-only Quantization (INT4/FP4/NF4/INT8)</a></td>
    <td colspan="3" align="center"><a href="docs/qloracpu.md">QLoRA on CPU</a></td>
  </tr>
  <tr>
    <th colspan="8" align="center">GENERAL COMPRESSION</th>
  </tr>
  <tr>
    <td colspan="2" align="center"><a href="docs/quantization.md">Quantization</a></td>
    <td colspan="2" align="center"><a href="docs/pruning.md">Pruning</a></td>
    <td colspan="2" align="center"><a href="docs/distillation.md">Distillation</a></td>
    <td align="center" colspan="2"><a href="examples/huggingface/pytorch/text-classification/orchestrate_optimizations/README.md">Orchestration</a></td>
  </tr>
  <tr>
    <td align="center" colspan="2"><a href="docs/data_augmentation.md">Data Augmentation</a></td>
    <td align="center" colspan="2"><a href="docs/export.md">Export</a></td>
    <td align="center" colspan="2"><a href="docs/metrics.md">Metrics</a></td>
    <td align="center" colspan="2"><a href="docs/objectives.md">Objectives</a></td>
  </tr>
  <tr>
    <td align="center" colspan="2"><a href="docs/pipeline.md">Pipeline</a></td>
    <td align="center" colspan="3"><a href="examples/huggingface/pytorch/question-answering/dynamic/README.md">Length Adaptive</a></td>
    <td align="center" colspan="3"><a href="docs/examples.md#early-exit">Early Exit</a></td>
  </tr>
  <tr>
    <th colspan="8" align="center">TUTORIALS & RESULTS</th>
  </tr>
  <tr>
    <td colspan="2" align="center"><a href="docs/tutorials/README.md">Tutorials</a></td>
    <td colspan="2" align="center"><a href="https://github.com/intel/neural-speed/blob/main/docs/supported_models.md">LLM List</a></td>
    <td colspan="2" align="center"><a href="docs/examples.md">General Model List</a></td>
    <td colspan="2" align="center"><a href="intel_extension_for_transformers/transformers/runtime/docs/validated_model.md">Model Performance</a></td>
  </tr>
</tbody>
</table>

## 🙌Demo

* LLM Infinite Inference (up to 4M tokens)

https://github.com/intel/intel-extension-for-transformers/assets/109187816/1698dcda-c9ec-4f44-b159-f4e9d67ab15b

* LLM QLoRA on Client CPU

https://github.com/intel/intel-extension-for-transformers/assets/88082706/9d9bdb7e-65db-47bb-bbed-d23b151e8b31

## 📃Selected Publications/Events
* Blog published on Hugging Face: [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) (May 2024)
* Blog published on Intel Developer News: [Efficient Natural Language Embedding Models with Intel® Extension for Transformers](https://www.intel.com/content/www/us/en/developer/articles/technical/efficient-natural-language-embedding-models.html) (May 2024)
* Blog published on TechCrunch: [Intel and others commit to building open generative AI tools for the enterprise](https://techcrunch.com/2024/04/16/intel-and-others-commit-to-building-open-generative-ai-tools-for-the-enterprise) (Apr 2024)
* Video on YouTube: [Intel Vision Keynotes 2024](https://www.youtube.com/watch?v=QB7FoIpx8os&t=2280s) (Apr 2024)
* Blog published on Vectara: [Do Smaller Models Hallucinate More?](https://vectara.com/blog/do-smaller-models-hallucinate-more) (Apr 2024)
* Blog published on Intel Developer News: [Use the neural-chat-7b Model for Advanced Fraud Detection: An AI-Driven Approach in Cybersecurity](https://www.intel.com/content/www/us/en/developer/articles/technical/bilics-approach-cybersecurity-using-neuralchat-7b.html) (Mar 2024)
* CES 2024: [CES 2024 Great Minds Keynote: Bringing the Limitless Potential of AI Everywhere: Intel Hybrid Copilot demo](https://youtu.be/70J3uO3eLZA?t=1348) (Jan 2024)
* Blog published on Medium: [Connect an AI agent with your API: Intel Neural-Chat 7b LLM can replace Open AI Function Calling](https://medium.com/11tensors/connect-an-ai-agent-with-your-api-intel-neural-chat-7b-llm-can-replace-open-ai-function-calling-242d771e7c79) (Dec 2023)
* NeurIPS 2023 Workshop on Efficient Natural Language and Speech Processing: [Efficient LLM Inference on CPUs](https://arxiv.org/abs/2311.00502) (Nov 2023)
* Blog published on Hugging Face: [Intel Neural-Chat 7b: Fine-Tuning on Gaudi2 for Top LLM Performance](https://huggingface.co/blog/Andyrasika/neural-chat-intel) (Nov 2023)
* Blog published on VMware: [AI without GPUs: A Technical Brief for VMware Private AI with Intel](https://core.vmware.com/resource/ai-without-gpus-technical-brief-vmware-private-ai-intel#section6) (Nov 2023)

> View the [Full Publication List](./docs/publication.md).

## Additional Content

* [Release Information](./docs/release.md)
* [Contribution Guidelines](./docs/contributions.md)
* [Legal Information](./docs/legal.md)
* [Security Policy](SECURITY.md)
* [Apache License](./LICENSE)


## Acknowledgements
* Excellent open-source projects: [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), [FastChat](https://github.com/lm-sys/FastChat), [fastRAG](https://github.com/IntelLabs/fastRAG), [ggml](https://github.com/ggerganov/ggml), [gptq](https://github.com/IST-DASLab/gptq), [llama.cpp](https://github.com/ggerganov/llama.cpp), [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness), [peft](https://github.com/huggingface/peft), [trl](https://github.com/huggingface/trl), [streamingllm](https://github.com/mit-han-lab/streaming-llm), and many others.

* Thanks to all the [contributors](./docs/contributors.md).

## 💁Collaborations

You are welcome to raise any interesting ideas on model compression techniques and LLM-based chatbot development!
Feel free to reach out to [us](mailto:itrex.maintainers@intel.com); we look forward to collaborating with you on Intel Extension for Transformers!