https://github.com/nikelborm/amd-amdgpu-rocm-ollama-gfx90c-ati-radeon-vega-ryzen7-5800h-arch-linux

Run Ollama on AMD Ryzen 7 5800H CPU with integrated GPU AMD ATI Radeon Vega (gfx90c) with optimizations
# amd-amdgpu-rocm-ollama-gfx90c-ati-radeon-vega-ryzen7-5800H-arch-linux

~~WORKING~~ version of Ollama for the AMD Ryzen 7 5800H CPU with its integrated AMD Radeon Vega (gfx90c) GPU, built with optimizations for this specific CPU and GPU: ROCm=on, IntelOneAPI=on, AVX=on, AVX2=on, F16C=on, FMA=on, SSSE3=on.

Tested on Arch Linux.
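The CPU flags above can, in principle, be fed into a from-source build. The sketch below is a hedged guess based on older Ollama development docs, which exposed `OLLAMA_CUSTOM_CPU_DEFS` and `AMDGPU_TARGETS` for exactly this purpose; both names and the exact build commands are assumptions that may no longer match current Ollama releases.

```shell
# Assumption: AMDGPU_TARGETS selects the GPU ISA to compile ROCm kernels for.
export AMDGPU_TARGETS="gfx90c"

# Assumption: OLLAMA_CUSTOM_CPU_DEFS (from older Ollama development docs)
# passes extra CMake definitions through to the bundled llama.cpp build.
export OLLAMA_CUSTOM_CPU_DEFS="-DLLAMA_AVX=on -DLLAMA_AVX2=on -DLLAMA_F16C=on -DLLAMA_FMA=on"

# Then, inside a clone of github.com/ollama/ollama:
# go generate ./... && go build .
```

SSSE3 is omitted above because llama.cpp's CMake options do not (to my knowledge) expose a dedicated SSSE3 toggle; it is typically implied by the baseline build.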

Relevant projects:

1. [ollama/ollama](https://github.com/ollama/ollama)
2. [segurac/force-host-alloction-APU](https://github.com/segurac/force-host-alloction-APU)
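Tying the two projects together: the runtime environment commonly used to coax ROCm into working on an unsupported gfx90c APU looks roughly like this. `HSA_OVERRIDE_GFX_VERSION` is a real ROCm environment variable; setting it to `9.0.0` makes the runtime treat gfx90c as the officially supported gfx900 ISA. The shim library path is an assumption that depends on how you build force-host-alloction-APU.

```shell
# Real ROCm variable: report the gfx90c ISA as gfx900 so the runtime accepts it.
export HSA_OVERRIDE_GFX_VERSION=9.0.0

# force-host-alloction-APU works by LD_PRELOAD-ing a shim that redirects
# device allocations to host (GTT) memory. The filename below is an
# assumption; use whatever shared object your build of that project produces:
# export LD_PRELOAD="$HOME/force-host-alloction-APU/libforcegttalloc.so"

# Then start the server as usual:
# ollama serve
```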

## UPD

Fuck this. It worked for 30 minutes and then died for good, and I was never able to reproduce the working state. If you have no discrete GPU and only this APU, give up and buy a graphics card or rent a server. It's not worth the effort to try to make it work.