ComfyUi-AMD-WLS2-Ubuntu-24
- Install Python 3.12 (every command below uses 3.12). I installed it from the deadsnakes PPA, with all the related packages ("sudo apt install python3.12*"):
sudo add-apt-repository ppa:deadsnakes/ppa - or, if you prefer not to use this PPA, you can compile Python from source.
sudo apt install python3.12*
- wget https://repo.radeon.com/amdgpu-install/6.4.2.1/ubuntu/noble/amdgpu-install_6.4.60402-1_all.deb
- sudo apt install ./amdgpu-install_6.4.60402-1_all.deb
- sudo amdgpu-install --list-usecase
- amdgpu-install -y --usecase=wsl,rocm --no-dkms
- To check that it is working, run "rocminfo"
- You need to restart WSL2 -> "sudo reboot"
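"rocminfo" prints a lot of output; a quick sketch to narrow it down to the GPU agent (the gfx name depends on your card - e.g. gfx1100 for a 7900 XTX - so treat the grep pattern as an assumption):

```shell
# Sketch: confirm ROCm sees a GPU agent after the install.
# The gfx target name (e.g. gfx1100 on a 7900 XTX) varies per card.
rocminfo | grep -E "Name:.*gfx" && echo "GPU agent found" || echo "no GPU agent - check the install"
```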
- git clone https://github.com/comfyanonymous/ComfyUI.git
- cd ComfyUI
- python3.12 -m venv comfy-env
- source comfy-env/bin/activate
- python3.12 -m pip install --upgrade pip wheel
- pip3.12 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.4/
- pip3.12 install -r requirements.txt
- python3.12 main.py --listen --use-pytorch-cross-attention
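Before launching, it can help to verify that the ROCm build of PyTorch actually sees the GPU from inside the venv. A sketch (ROCm builds of torch report the GPU through the regular torch.cuda API):

```shell
# Sketch: run inside the activated comfy-env venv.
python - <<'EOF'
import importlib.util
if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment")
else:
    import torch
    print("torch", torch.__version__)            # ROCm wheels carry a +rocm tag
    print("GPU available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
EOF
```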
With Flux/WAN models you cannot use "--use-pytorch-cross-attention": the image will never be generated, and with WAN the VAE decode never finishes (I ran out of patience). Use "--use-split-cross-attention" or "--use-quad-cross-attention" instead.
On Ubuntu:
export HIP_VISIBLE_DEVICES=0 - this of course depends on how many GPUs you have
export HSA_OVERRIDE_GFX_VERSION=11.0.0 - this is based on the type of AMD card you have -> this one is for the 7900 XTX
export TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 - this seems to lower VRAM usage by roughly 40% - tested only with "--use-pytorch-cross-attention"
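To avoid re-exporting these in every new shell, you can persist them. A sketch assuming the single-GPU 7900 XTX values used above (adjust the GFX override for your card):

```shell
# Append to ~/.bashrc so every new shell picks these up.
export HIP_VISIBLE_DEVICES=0
export HSA_OVERRIDE_GFX_VERSION=11.0.0
export TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1
```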
On WSL (the paths assume the venv was created with python3.12 as above):
- cp /opt/rocm/lib/libhsa-runtime64.so.1.15.60402 comfy-env/lib/python3.12/site-packages/torch/lib/libhsa-runtime64.so
- cp /opt/rocm/lib/libhsa-runtime64.so.1.15.60402 comfy-env/lib/python3.12/site-packages/triton/backends/amd/lib/libhsa-runtime64.so
- On the Windows side, in an elevated prompt: netsh interface portproxy set v4tov4 listenport=7860 listenaddress=127.0.0.1 connectport=7860 connectaddress=<IP address of WSL, you can get it with "ifconfig"> - note that ComfyUI listens on port 8188 by default, so adjust both ports if you did not change it.
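The connectaddress is the WSL2 adapter address; a sketch for printing it from inside WSL (assumes the interface is named eth0, which is the WSL2 default):

```shell
# Print the WSL2 IPv4 address; eth0 is the default interface name in WSL2.
ip -4 addr show eth0 | awk '/inet /{sub(/\/.*/, "", $2); print $2}'
```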