Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

https://github.com/nktice/AMD-AI - AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1

README

# AMD Radeon 7900XTX GPU ROCm install / setup / config
# Ubuntu 24.04.1
# ROCm 6.2.3
# Automatic1111 Stable Diffusion + ComfyUI ( venv )
# Oobabooga - Text Generation WebUI ( conda, Exllamav2, Llama-cpp-python, BitsAndBytes )

## Install notes / instructions
This file is focused on the current stable version of PyTorch. There is another variation of these instructions for the development / nightly version(s) here : https://github.com/nktice/AMD-AI/blob/main/dev.md

2023-07 -
I composed this collection of instructions from my own notes.
I use this setup on my own Linux system with AMD parts.
I've gone over these through many re-installs to get them all right.
This is what I had hoped to find when I searched for install instructions -
so I'm sharing them in the hope that they save time for other people.
There may be extra parts in here that aren't needed, but this works for me.
It was originally plain text, with comments, like a shell script to cut and paste from.

2023-09-09 - I had a report that this doesn't work in virtual machines (virtualbox) as the system there cannot see the hardware, it can't load drivers, etc. While this is not a guide about Windows, Windows users may find it more helpful to try DirectML - https://rocm.docs.amd.com/en/latest/deploy/windows/quick_start.html / https://github.com/lshqqytiger/stable-diffusion-webui-directml

[ ... updates abridged ... ]

2024-10-16 -
- ROCm 6.2.3 is out...
- Ubuntu 24.10 tested - no deadsnakes support, and amdgpu-dkms gave errors, so it wasn't functioning... it also wiped my /home partition unexpectedly.
- Updates to use the current "Stable" version of PyTorch ( 2.4.1 ).
- Note : bug report filed on issues with TGW. https://github.com/oobabooga/text-generation-webui/issues/6471
- To those following these guides... I plan to be on retreat from November 2024 into March 2025, so it is unlikely there will be updates here during that period.

-----

# Ubuntu 24.04.1 - Base system install
ROCm 6.2.3 includes support for Ubuntu 24.04.1 (noble).

At this point we assume you've done the system install
and you know what that is, have a user, root, etc.

```bash
# update system packages
sudo apt update -y && sudo apt upgrade -y
```

```bash
# enable source and development repositories
sudo apt-add-repository -y -s -s
sudo apt install -y "linux-headers-$(uname -r)" \
"linux-modules-extra-$(uname -r)"
```

## Support older versions of Python
This allows installing older versions of Python via the "deadsnakes" PPA
```bash
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt update -y
```

## Add AMD GPU package sources
Make the directory if it doesn't exist yet.
This location is recommended by the distribution maintainers.

```bash
sudo mkdir --parents --mode=0755 /etc/apt/keyrings
```

Download the key, convert the signing key to a full
keyring required by apt, and store it in the keyring directory
```bash
wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | \
gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null
```

amdgpu repository
```bash
echo 'deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/6.2.3/ubuntu noble main' \
| sudo tee /etc/apt/sources.list.d/amdgpu.list
sudo apt update -y
```
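The repository line above hard-codes ROCm 6.2.3 and the `noble` codename; if you adapt this guide for another release, only those two fields change. A small helper function (hypothetical, not part of the guide) shows the pattern:

```shell
# hypothetical helper: build the amdgpu repo line for a given
# ROCm version and Ubuntu codename ( the guide uses 6.2.3 / noble )
amdgpu_repo_line () {
    local ver="$1" codename="$2"
    echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/${ver}/ubuntu ${codename} main"
}

amdgpu_repo_line 6.2.3 noble
```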

## AMDGPU DKMS

```bash
sudo apt install amdgpu-dkms -y
```

## ROCm repositories...
https://rocmdocs.amd.com/en/latest/deploy/linux/os-native/install.html

```bash
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/6.2.3 noble main" \
| sudo tee --append /etc/apt/sources.list.d/rocm.list
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' \
| sudo tee /etc/apt/preferences.d/rocm-pin-600
sudo apt update -y
```

## More AMD ROCm related packages
This installs a lot of packages, but they are comparatively small, so they are worth including -
later steps may pull them in as dependencies without much notice.
```bash
# ROCm...
sudo apt install -y rocm-dev rocm-libs rocm-hip-sdk
```

```bash
# ld.so.conf update - tell the dynamic linker where the ROCm libraries live
sudo tee --append /etc/ld.so.conf.d/rocm.conf <<EOF
/opt/rocm/lib
/opt/rocm/lib64
EOF
sudo ldconfig
# make the ROCm binaries available in the login shell's PATH
echo 'export PATH=$PATH:/opt/rocm/bin' >> ~/.profile
```

## Find graphics device
```bash
sudo /opt/rocm/bin/rocminfo | grep gfx
```

My 6900 reports as gfx1030, and my 7900 XTX shows up as gfx1100
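If a PyTorch build doesn't ship kernels for the gfx ID your card reports, a common workaround is setting `HSA_OVERRIDE_GFX_VERSION` to the nearest supported target. The mapping function below is my own sketch: the two entries match the cards mentioned above, and anything else is reported as unknown rather than guessed.

```shell
# map a reported gfx ID to an HSA_OVERRIDE_GFX_VERSION value;
# gfx1030 -> 10.3.0 and gfx1100 -> 11.0.0 match the cards above,
# other IDs are left for the reader to verify
hsa_override_for () {
    case "$1" in
        gfx1030) echo "10.3.0" ;;
        gfx1100) echo "11.0.0" ;;
        *)       echo "unknown" ;;
    esac
}

hsa_override_for gfx1100
# then e.g. : export HSA_OVERRIDE_GFX_VERSION=$(hsa_override_for gfx1100)
```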

## Add user to groups
Note that `whoami` picks up the current user; substitute a user name if you are configuring a different account.
```bash
sudo adduser `whoami` video
sudo adduser `whoami` render
```

```bash
# git and git-lfs ( large file support )
sudo apt install -y git git-lfs
# development tools may be required later...
sudo apt install -y libstdc++-12-dev
# stable diffusion likes TCMalloc...
sudo apt install -y libtcmalloc-minimal4
```

## Performance Tuning
This section is optional, and as such has been moved to [performance-tuning](https://github.com/nktice/AMD-AI/blob/main/performance-tuning.md)

## Top for video memory and usage
nvtop shows video memory and GPU usage.
Note : I have had issues where the distro version crashes with 2 GPUs; installing a newer version from source works fine. Instructions for that are included at the bottom, as they depend on things installed between here and there. Project website : https://github.com/Syllo/nvtop
```bash
sudo apt install -y nvtop
```

## Radeon specific tools...
```bash
sudo apt install -y radeontop rovclock
```

## and now we reboot...
```bash
reboot
```

## End of OS / base setup

---

# Stable Diffusion (Automatic1111)
This system is built to use its own venv ( rather than Conda )...

## Download Stable Diffusion ( Automatic1111 webui )
https://github.com/AUTOMATIC1111/stable-diffusion-webui
Get the files...
```bash
cd
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
```

The 1.9.x+ release series broke the API so that it wouldn't work with Oobabooga's TGW - the following resets to the 1.8.0 release that does work with Oobabooga.
2024-07-04 - Oobabooga 1.9 resolves this issue - these lines are remarked out for now, but preserved in case someone wants to see how to do something similar in the future...
```bash
# git checkout bef51ae
# git reset --hard
```

## Prerequisites
```bash
sudo apt install -y wget git python3.10 python3.10-venv libgl1
python3.10 -m venv venv
source venv/bin/activate
python3.10 -m pip install -U pip
deactivate
```
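The venv created above is self-contained, so the activate/deactivate cycle can be rehearsed anywhere before touching the real install. The sketch below uses plain `python3` and a scratch directory, since `python3.10` from deadsnakes may not exist on the machine you try it on:

```shell
# rehearse the venv cycle in a scratch directory with the default python3
# ( --without-pip keeps this runnable even where ensurepip is missing )
d=$(mktemp -d)
python3 -m venv --without-pip "$d/venv"
source "$d/venv/bin/activate"
python -c 'import sys; print(sys.prefix)'   # points inside the venv
deactivate
```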

## Edit environment settings...
```bash
tee --append webui-user.sh <<EOF
## hipBLASLt issue with torch > 2.4.x - https://github.com/comfyanonymous/ComfyUI/issues/3698
export TORCH_BLAS_PREFER_HIPBLASLT=0
# generic import...
# export TORCH_COMMAND="pip install torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm6.1"
# use specific versions to avoid downloading all the nightlies... ( update dates as needed )
export TORCH_COMMAND="pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/rocm6.1"
## And if you want to call this from other programs...
export COMMANDLINE_ARGS="--api"
## crashes with 2 cards, so to get it to run on the second card (only), unremark the following
# export CUDA_VISIBLE_DEVICES="1"
EOF
```
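Since webui-user.sh is plain shell that gets sourced, the settings can be sanity-checked without launching anything. The rehearsal below writes two of the key settings to a scratch file rather than the real webui-user.sh, but sourcing behaves identically:

```shell
# write two of the settings to a scratch file, source it,
# and confirm the variables resolve as expected
tmp=$(mktemp)
tee "$tmp" > /dev/null <<'EOF'
export TORCH_BLAS_PREFER_HIPBLASLT=0
export COMMANDLINE_ARGS="--api"
EOF
source "$tmp"
echo "$TORCH_BLAS_PREFER_HIPBLASLT $COMMANDLINE_ARGS"   # prints: 0 --api
```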

## If you keep models for SD somewhere, this is where you'd link them in...
If you don't do this, it will install a default model to get you going.
Note that the stock folders do include files the program needs - you'll want to copy
those into the folder where you keep your other models ( to avoid issues )
```bash
#mv models models.1
#ln -s /path/to/models models
```
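The mv/ln swap above is easy to get wrong on a live install, so here is the same dance rehearsed in a throwaway directory (all names below are stand-ins for the demo, not real paths):

```shell
# rehearse the swap in a scratch directory before doing it for real
work=$(mktemp -d)
cd "$work"
mkdir models external-models            # stand-ins for the real folders
touch external-models/model.safetensors
mv models models.1                      # keep the stock folder around
ln -s "$work/external-models" models
ls models                               # the external contents now appear here
```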

## Run SD...
Note that the first time it starts it may take a while to go and fetch things -
it's not always good about saying what it's up to.
```bash
./webui.sh
```

## end Stable Diffusion

-------

# ComfyUI install script
- variation of https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/scripts/install-comfyui-venv-linux.sh
Includes ComfyUI-Manager

Same install of packages here as for Stable Diffusion ( included here in case you haven't installed SD and just want ComfyUI... )
```bash
sudo apt install -y wget git python3.10 python3.10-venv libgl1
```

```bash
cd
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager
cd ..
## if we want to save some effort, we can reuse the venv from sd
# mv venv venv.1
# ln -s ../stable-diffusion-webui/venv venv
python3.10 -m venv venv
source venv/bin/activate
# pre-install torch and torchvision - note you may want to update versions...
python3.10 -m pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/rocm6.1
python3.10 -m pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/rocm6.1
python3.10 -m pip install -r custom_nodes/ComfyUI-Manager/requirements.txt --extra-index-url https://download.pytorch.org/whl/rocm6.1

# end venv if needed...
deactivate
```

Scripts for running the program...
```bash
# run_gpu.sh
tee --append run_gpu.sh <<EOF
#!/bin/bash
source venv/bin/activate
python3 main.py
EOF
chmod u+x run_gpu.sh
```

## End ComfyUI

-------

# Oobabooga - Text Generation WebUI

## Conda install
Download and install miniconda, then hook it into the login profile...
```bash
cd ~
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod u+x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh -b
echo 'source ~/miniconda3/etc/profile.d/conda.sh' >> ~/.profile
source ~/.profile
conda update -y -n base -c defaults conda
```

```bash
conda install -y cmake ninja
```

```bash
conda init
source ~/.profile
```
### conda is now active...

### install pip
```bash
sudo apt install -y python3-pip
pip3 install --upgrade pip
```

#### useful pip stuff to know ...
```bash
## show outdated packages...
#pip list --outdated
## check dependencies
#pip check
## install a specified version
#pip install package==version
```

### End conda and pip setup.

## Oobabooga / Textgen webui
- https://github.com/oobabooga/text-generation-webui

```bash
conda create -n textgen python=3.11 -y
conda activate textgen
```

## PyTorch install...

```bash
# pre-install
pip install --pre cmake colorama filelock lit numpy Pillow Jinja2 \
mpmath fsspec MarkupSafe certifi networkx \
sympy packaging requests \
--index-url https://download.pytorch.org/whl/rocm6.1
```

There are version conflicts, so we specify the versions that we want installed -
```bash
pip install --pre torch torchvision torchaudio triton pytorch-triton-rocm \
--index-url https://download.pytorch.org/whl/rocm6.1
```
2024-05-12 - For some odd reason, torchtext isn't recognized, even though it's there... so we specify it by its URL to be explicit.
```bash
pip install https://download.pytorch.org/whl/cpu/torchtext-0.18.0%2Bcpu-cp311-cp311-linux_x86_64.whl#sha256=c760e672265cd6f3e4a7c8d4a78afe9e9617deacda926a743479ee0418d4207d
```
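The `#sha256=...` fragment on that wheel URL is how pip verifies the download against tampering or corruption. The same kind of digest can be computed by hand with sha256sum; the file below is a stand-in, not the actual wheel:

```shell
# compute a sha256 digest the same way pip checks the #sha256= fragment
f=$(mktemp)
printf 'stand-in wheel contents' > "$f"
digest=$(sha256sum "$f" | cut -d' ' -f1)
echo "$digest"    # 64 hex characters, compared byte-for-byte by pip
```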

### bitsandbytes rocm
2024-04-24 - AMD's own ROCm version of bitsandbytes has been updated! - https://github.com/ROCm/bitsandbytes ( ver 0.44.0.dev0 at time of writing )
```bash
cd
git clone https://github.com/ROCm/bitsandbytes.git
cd bitsandbytes
pip install .
```

## Oobabooga / Text-generation-webui - Install webui...
```bash
cd
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
```

### Oobabooga's 'requirements'

As of TGW 1.15 the requirements install smoothly. (2024-10-16)
```bash
pip install -r requirements_amd.txt --extra-index-url https://download.pytorch.org/whl/rocm6.1
```

#### Exllamav2 loader
2024-10-16 - I filed bug reports as manually loading loaders currently breaks TGW.
https://github.com/oobabooga/text-generation-webui/issues/6471

```bash
#git clone https://github.com/turboderp/exllamav2 repositories/exllamav2
#cd repositories/exllamav2
### Force collection back to base 0.0.11
### git reset --hard a4ecea6
#pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/nightly/rocm6.2
#pip install . --index-url https://download.pytorch.org/whl/rocm6.1
#cd ../..
```

#### Llama-cpp-python
2024-06-18 - Llama-cpp-python - Another loader, that is highly efficient in resource use, but not very fast. https://github.com/abetlen/llama-cpp-python It may need models in GGUF format ( and not other types ).
```bash
### remove old versions
#pip uninstall llama_cpp_python -y
#pip uninstall llama_cpp_python_cuda -y
### install llama-cpp-python
#git clone --recurse-submodules https://github.com/abetlen/llama-cpp-python.git repositories/llama-cpp-python
#cd repositories/llama-cpp-python
#CC='/opt/rocm/llvm/bin/clang' CXX='/opt/rocm/llvm/bin/clang++' CFLAGS='-fPIC' CXXFLAGS='-fPIC' CMAKE_PREFIX_PATH='/opt/rocm' ROCM_PATH="/opt/rocm" HIP_PATH="/opt/rocm" CMAKE_ARGS="-GNinja -DLLAMA_HIPBLAS=ON -DLLAMA_AVX2=on " pip install --no-cache-dir .
#cd ../..
```

### Models
Models : If you're new to this - new models can be downloaded from the shell via a python script, or from a form in the interface. There are lots of them - http://huggingface.co Generally the GPTQ models by TheBloke are likely to load... https://huggingface.co/TheBloke The 30B/33B models will load on 24GB of VRAM, but may error, or run out of memory depending on usage and parameters.
Worthy of mention, TurboDerp ( author of the exllama loaders ) has been posting exllamav2 ( exl2 ) processed versions of models - https://huggingface.co/turboderp ( for use with exllamav2 loader ) - when downloading, note the --branch option.

To get new models, note that the ~/text-generation-webui directory has a program, download-model.py, that is made for downloading models from HuggingFace's collection.
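The "30B/33B models will load on 24GB of VRAM" claim above can be roughly sanity-checked: quantized weight size is about parameters times bits per weight divided by 8, before context-cache overhead. The helper below is my own back-of-envelope approximation, not a formula from this guide:

```shell
# rough quantized-model weight size in GB :
# params( billions ) * bits-per-weight / 8, ignoring context-cache overhead
model_gb () {
    local params_b="$1" bits="$2"
    awk -v p="$params_b" -v b="$bits" 'BEGIN { printf "%.1f", p * b / 8 }'
}

model_gb 33 4    # ~16.5 GB of weights -> plausible on a 24GB card
```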

If you have old models, link the pre-stored models into the models directory
```bash
# cd ~/text-generation-webui
# mv models models.1
# ln -s /path/to/models models
```

### Running TGW

Let's create a script (run.sh) to run the program...
```bash
tee --append run.sh <<EOF
#!/bin/bash
conda activate textgen
# server options go here, e.g. --listen --api
python server.py
EOF
chmod u+x run.sh
```