https://github.com/nikelborm/amd-amdgpu-rocm-ollama-gfx90c-ati-radeon-vega-ryzen7-5800h-arch-linux
Run Ollama on AMD Ryzen 7 5800H CPU with integrated GPU AMD ATI Radeon Vega (gfx90c) with optimizations
- Host: GitHub
- URL: https://github.com/nikelborm/amd-amdgpu-rocm-ollama-gfx90c-ati-radeon-vega-ryzen7-5800h-arch-linux
- Owner: nikelborm
- Created: 2024-10-27T13:25:08.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2025-03-31T03:42:06.000Z (3 months ago)
- Last Synced: 2025-04-30T06:45:22.762Z (about 2 months ago)
- Topics: amd, amd-gpu, amdgpu, archlinux, avx2, bash, bash-scripting, cuda, linux, llama, llama3, llm, ollama, oneapi, radeon, rocm, ssse3, vega
- Language: Shell
- Homepage:
- Size: 24.4 KB
- Stars: 7
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# amd-amdgpu-rocm-ollama-gfx90c-ati-radeon-vega-ryzen7-5800H-arch-linux
~~WORKING~~ version of Ollama for the AMD Ryzen 7 5800H CPU with its integrated AMD ATI Radeon Vega (gfx90c) GPU, built with optimizations for this specific CPU and GPU: ROCm=on, Intel oneAPI=on, AVX=on, AVX2=on, F16C=on, FMA=on, SSSE3=on.
Tested on Arch Linux
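The flag list above only covers CPU features; getting ROCm to accept gfx90c at all usually also requires an ISA override, since gfx90c is not an officially supported target. A minimal sketch of the commonly used workaround, spoofing the related gfx900 ISA — the exact values are assumptions, not confirmed by this repo:

```shell
# gfx90c is unsupported by stock ROCm, so spoof a supported ISA from the
# same Vega (GCN 5 / gfx900) family. 9.0.0 is the value commonly reported
# to work for gfx90c APUs; this is an assumption, not from this repo.
export HSA_OVERRIDE_GFX_VERSION=9.0.0

# Optionally restrict ROCm to the integrated GPU (device index assumed).
export ROCR_VISIBLE_DEVICES=0

ollama serve
```

If the override is wrong for your silicon, ROCm typically falls back to CPU or crashes on the first kernel launch, so check `ollama serve` logs for whether the GPU was actually picked up.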
Relevant projects:
1. [ollama/ollama](https://github.com/ollama/ollama)
2. [segurac/force-host-alloction-APU](https://github.com/segurac/force-host-alloction-APU)

## UPD
Fuck this shit. It worked for thirty fucking minutes and then died for good, and I was never able to reproduce the working state. If you have no discrete GPU and only this CPU, give up and buy a graphics card or rent a server. Trying to make it work is not worth it.
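For anyone who still wants to try: the force-host-alloction-APU project linked above works by intercepting HIP device allocations with `LD_PRELOAD`, so buffers land in host RAM that the APU can reach (APUs have no dedicated VRAM beyond the small BIOS carve-out). A rough usage sketch — the shared-object path and name are assumptions, check that project's README for the actual build output:

```shell
# LD_PRELOAD injects the shim before the real HIP runtime, so its hooked
# allocation functions run instead of the originals. The .so name here is
# hypothetical; build it per the force-host-alloction-APU instructions.
HSA_OVERRIDE_GFX_VERSION=9.0.0 \
LD_PRELOAD=/path/to/forcegttalloc.so \
ollama serve
```

The preload only affects the process it is set for, so exporting it globally is unnecessary and risky for unrelated programs.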