{"id":18266349,"url":"https://github.com/biosfood/intel-llm-guide","last_synced_at":"2025-04-09T02:19:55.631Z","repository":{"id":217458112,"uuid":"743728912","full_name":"biosfood/intel-llm-guide","owner":"biosfood","description":"A guide on how to run LLMs on intel CPUs","archived":false,"fork":false,"pushed_at":"2024-01-23T20:00:19.000Z","size":21,"stargazers_count":2,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-02-14T20:40:31.403Z","etag":null,"topics":["guide","intel","llm","llm-inference","llm-serving","machine-learning","setup","setup-development-environment","tutorial"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/biosfood.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-01-15T21:24:50.000Z","updated_at":"2025-01-22T01:23:55.000Z","dependencies_parsed_at":"2024-01-20T15:27:34.306Z","dependency_job_id":"3d2b639e-37f7-4803-94a3-5b001ed86701","html_url":"https://github.com/biosfood/intel-llm-guide","commit_stats":null,"previous_names":["biosfood/intel-llm-guide"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/biosfood%2Fintel-llm-guide","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/biosfood%2Fintel-llm-guide/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/biosfood%2Fintel-llm-guide/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/
repositories/biosfood%2Fintel-llm-guide/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/biosfood","download_url":"https://codeload.github.com/biosfood/intel-llm-guide/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247962764,"owners_count":21024892,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["guide","intel","llm","llm-inference","llm-serving","machine-learning","setup","setup-development-environment","tutorial"],"created_at":"2024-11-05T11:23:01.094Z","updated_at":"2025-04-09T02:19:55.614Z","avatar_url":"https://github.com/biosfood.png","language":"Python","readme":"# intel-llm-guide\nA guide on how to run large language models on Intel CPUs with limited resources on a Linux platform.\n\nPlease note that this guide is __still in development__, so some inaccuracies are expected.\n\n## Prerequisites\nI am assuming you have an Intel CPU with an integrated graphics chip. I have tested and developed this guide using an `11th Gen Intel i7-1165G7` and 16 GB of RAM, but it should work with many other models. You might also want to check out the [intel extension for transformers system requirements](https://github.com/intel/intel-extension-for-transformers/blob/main/docs/installation.md#system-requirements).\n\nYou should also have `python3` installed (`sudo apt install python3` or `sudo pacman -S python`).\n\nYou will also need the `git` and `git-lfs` packages.\n\n### Virtual environment\n\nFirst of all, create a new virtual python environment. 
This should probably be located in the `/opt` folder. My AI development environment path, for example, is `/opt/python-envs/AI`. After deciding on your environment path, run:\n\n```bash\npython -m venv \u003cvenv_folder\u003e\n```\n\nTo activate this environment, run `source \u003cvenv_folder\u003e/bin/activate`. Because the `/opt` folder is normally not owned by the user, you might need to run `sudo chown $USER \u003cvenv_folder\u003e -R` to give your user permission to use it.\n\nTo check that this step worked, check which file the `python` command now points to using `which python`; you should get a response of the form `\u003cvenv_folder\u003e/bin/python`.\n\nYou will need to do this every time you use the environment, so consider adding this line to your `.bashrc`, but be careful as this might break other python dependencies.\n\n## Dependencies\nTo install `intel-extension-for-transformers` and all other needed packages, run\n```bash\npip install intel-extension-for-transformers torch tokenizers sentencepiece protobuf accelerate\n```\n\n## Huggingface models\n\nAs of January 2024, there are some problems with the intel-extension-for-transformers module, making \"normal\" usage not possible.\n\nFirst of all, create a new directory to store all of your models with `sudo mkdir /opt/models \u0026\u0026 sudo chown $USER /opt/models`.\n\nTo correctly use a language model from the [huggingface](https://huggingface.co/) hub, you first have to create an account and add your SSH key in the settings menu. 
Then, use the clone script like this:\n\n```bash\n./clone_model.sh \u003cmodel_id\u003e\n```\n\nAfter this, you can start using the model:\n\n```python\nfrom transformers import AutoTokenizer, TextStreamer\nfrom intel_extension_for_transformers.transformers import AutoModelForCausalLM\nimport torch\n\nmodel_id = \"\u003cmodel_id\u003e\"  # the model you cloned above\nmodel = AutoModelForCausalLM.from_pretrained(\"/opt/models/\" + model_id)\ntokenizer = AutoTokenizer.from_pretrained(\"/opt/models/\" + model_id)\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbiosfood%2Fintel-llm-guide","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbiosfood%2Fintel-llm-guide","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbiosfood%2Fintel-llm-guide/lists"}