https://github.com/intel/ipex-llm-tutorial
Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm
- Host: GitHub
- URL: https://github.com/intel/ipex-llm-tutorial
- Owner: intel
- License: apache-2.0
- Created: 2023-07-27T10:39:43.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-07-24T11:45:08.000Z (over 1 year ago)
- Last Synced: 2025-03-31T18:41:23.600Z (9 months ago)
- Language: Jupyter Notebook
- Homepage: https://github.com/intel-analytics/bigdl
- Size: 428 KB
- Stars: 164
- Watchers: 12
- Forks: 41
- Open Issues: 12
Metadata Files:
- Readme: README.md
- License: LICENSE
- Security: SECURITY.md
README
IPEX-LLM Tutorial
English | 中文
[_IPEX-LLM_](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm) is a low-bit LLM library on Intel XPU (Xeon/Core/Flex/Arc/PVC). This repository contains tutorials that explain what _IPEX-LLM_ is and how to use it to build LLM applications.
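To give a rough sense of what "low-bit" means here, the sketch below quantizes a list of weights to signed INT4 values with a per-tensor scale and then dequantizes them back. This is an illustrative toy in plain Python, not _IPEX-LLM_'s actual kernels; the helper names `quantize_int4` and `dequantize_int4` are made up for this example.

```python
# Toy symmetric INT4 quantization: the general idea behind the low-bit
# weight compression that libraries like ipex-llm apply to LLM weights.
# (Illustrative only -- real implementations quantize per-group tensors
# with optimized kernels, not Python lists.)

def quantize_int4(weights):
    """Map floats to signed 4-bit integers in [-8, 7] with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # avoid zero scale
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from INT4 values and the scale."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize_int4(weights)
restored = dequantize_int4(q, scale)
```

Each weight now needs only 4 bits plus a shared scale instead of 32 bits, which is the memory/bandwidth saving that makes low-bit inference faster on memory-bound hardware, at the cost of a small per-weight rounding error.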
The tutorials are organized as follows:
- [Chapter 1 **`Introduction`**](./ch_1_Introduction/) introduces what _IPEX-LLM_ is and what you can do with it.
- [Chapter 2 **`Environment Setup`**](./ch_2_Environment_Setup/) provides a set of best practices for setting up your environment.
- [Chapter 3 **`Application Development: Basics`**](./ch_3_AppDev_Basic/) introduces the basic usage of _IPEX-LLM_ and how to build a very simple chat application.
- [Chapter 4 **`Chinese Support`**](./ch_4_Chinese_Support/) shows how to use LLMs that support Chinese input/output, e.g. ChatGLM2 and Baichuan.
- [Chapter 5 **`Application Development: Intermediate`**](./ch_5_AppDev_Intermediate/) introduces intermediate-level knowledge for application development using _IPEX-LLM_, e.g. how to build a more sophisticated chatbot, speech recognition, etc.
- [Chapter 6 **`GPU Acceleration`**](./ch_6_GPU_Acceleration/) introduces how to use Intel GPUs to accelerate LLMs with _IPEX-LLM_.
- [Chapter 7 **`Finetune`**](./ch_7_Finetune/) introduces how to fine-tune LLMs using _IPEX-LLM_.
- [Chapter 8 **`Application Development: Advanced`**](./ch_8_AppDev_Advanced/) introduces advanced-level knowledge for application development using _IPEX-LLM_, e.g. LangChain usage.
[^1]: Performance varies by use, configuration and other factors. `ipex-llm` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.