{"id":24360611,"url":"https://github.com/intel/ipex-llm-tutorial","last_synced_at":"2025-05-08T21:45:26.966Z","repository":{"id":188686351,"uuid":"671447765","full_name":"intel/ipex-llm-tutorial","owner":"intel","description":"Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using ipex-llm","archived":false,"fork":false,"pushed_at":"2024-07-24T11:45:08.000Z","size":438,"stargazers_count":164,"open_issues_count":12,"forks_count":41,"subscribers_count":12,"default_branch":"main","last_synced_at":"2025-03-31T18:41:23.600Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://github.com/intel-analytics/bigdl","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/intel.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-07-27T10:39:43.000Z","updated_at":"2025-03-19T21:03:18.000Z","dependencies_parsed_at":"2024-01-17T04:27:38.491Z","dependency_job_id":"bf3291cd-1f93-48cc-9263-a741f811b531","html_url":"https://github.com/intel/ipex-llm-tutorial","commit_stats":null,"previous_names":["analytics-zoo/bigdl-llm-tutorial","intel-analytics/bigdl-llm-tutorial","intel/ipex-llm-tutorial"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fipex-llm-tutorial","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fipex-llm-tutorial/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fipex-llm-tutorial/releases","manifests_url"
:"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2Fipex-llm-tutorial/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/intel","download_url":"https://codeload.github.com/intel/ipex-llm-tutorial/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253154346,"owners_count":21862506,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-01-18T21:38:15.430Z","updated_at":"2025-05-08T21:45:26.958Z","avatar_url":"https://github.com/intel.png","language":"Jupyter Notebook","readme":"\u003cp align=\"center\"\u003e\u003ch1\u003eIPEX-LLM Tutorial\u003c/h1\u003e\u003c/p\u003e\n\n\u003ch4 align=\"center\"\u003e\n    \u003cp\u003e\n        \u003cb\u003eEnglish\u003c/b\u003e |\n        \u003ca href=\"./Chinese_Version/README.md\"\u003e中文\u003c/a\u003e\n    \u003c/p\u003e\n\u003c/h4\u003e\n\n[_IPEX-LLM_](https://github.com/intel-analytics/ipex-llm/tree/main/python/llm) is a low-bit LLM library on Intel XPU (Xeon/Core/Flex/Arc/PVC). This repository contains tutorials to help you understand what _IPEX-LLM_ is and how to use it to build LLM applications.\n\nThe tutorials are organized as follows:\n- [Chapter 1 **`Introduction`**](./ch_1_Introduction/) introduces what _IPEX-LLM_ is and what you can do with it.
\n- [Chapter 2 **`Environment Setup`**](./ch_2_Environment_Setup/) provides a set of best practices for setting up your environment.\n- [Chapter 3 **`Application Development: Basics`**](./ch_3_AppDev_Basic/) introduces the basic usage of _IPEX-LLM_ and how to build a very simple chat application.\n- [Chapter 4 **`Chinese Support`**](./ch_4_Chinese_Support/) shows the usage of some LLMs that support Chinese input/output, e.g. ChatGLM2 and Baichuan.\n- [Chapter 5 **`Application Development: Intermediate`**](./ch_5_AppDev_Intermediate/) introduces intermediate-level knowledge for application development using _IPEX-LLM_, e.g. how to build a more sophisticated chatbot, speech recognition, etc.\n- [Chapter 6 **`GPU Acceleration`**](./ch_6_GPU_Acceleration/) introduces how to use Intel GPUs to accelerate LLMs using _IPEX-LLM_.\n- [Chapter 7 **`Finetune`**](./ch_7_Finetune/) introduces how to fine-tune models using _IPEX-LLM_.\n- [Chapter 8 **`Application Development: Advanced`**](./ch_8_AppDev_Advanced/) introduces advanced-level knowledge for application development using _IPEX-LLM_, e.g. LangChain usage.\n\n[^1]: Performance varies by use, configuration and other factors. `ipex-llm` may not optimize to the same degree for non-Intel products. Learn more at www.Intel.com/PerformanceIndex.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2Fipex-llm-tutorial","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fintel%2Fipex-llm-tutorial","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2Fipex-llm-tutorial/lists"}