Hi,
I found that the ROCm 6.2 nightly build of PyTorch is failing for Llama 3.2-11B and MiniCPM 2.6. I'm using a system with an AMD Radeon PRO W7900 GPU running Ubuntu 22.04.
The issue is:
RuntimeError: Attempting to use hipBLASLt on a unsupported architecture!
This issue has been around for more than 36 hours. I hope someone will fix it ASAP.
I’m already on the latest version of PyTorch. I found this: Attempting to use hipBLASLt on a unsupported architecture! · Issue #138067 · pytorch/pytorch · GitHub, which means my GPU architecture is failing with hipBLASLt. The suggestion there is to downgrade PyTorch to an older version where hipBLASLt didn’t cause any problem until AMD fixes it, which means I have to try different PyTorch and ROCm versions until the problem goes away. Here are the PyTorch versions I’m going to try with ROCm 6.2.2:
2.3, 2.2, 2.1, 2.0, 1.13
In conclusion, PyTorch 2.5 / ROCm 6.2.2 is unusable for GPU architectures like gfx1100. You and AMD should update your site, because it took me 5 days to realize this. I really thought I had a useless $1000 GPU.
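For reference, this is how I checked which architecture the ROCm build of PyTorch actually reports for the card. It is a minimal sketch; the gcnArchName field should be present on recent ROCm builds, but that is an assumption to verify on your own install:

```python
import torch

# Print what the ROCm build of PyTorch sees, to confirm the card really is gfx1100.
# Assumption: recent ROCm builds expose gcnArchName on the device properties.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Device:      ", props.name)
    print("Architecture:", getattr(props, "gcnArchName", "<not exposed in this build>"))
    print("PyTorch:", torch.__version__, "| HIP:", torch.version.hip)
else:
    print("No ROCm-visible GPU found")
```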
The command `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2` will install the latest PyTorch 2.5 stable version for ROCm 6.2.
I tested Llama 3.2-11B a few minutes ago, and it is working properly.
I had the same error with the earlier nightly PyTorch build for ROCm 6.2.
With the latest stable release of PyTorch 2.5, the issue is no longer there.
Thank you, the command “pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2” works, but I still get the warning: UserWarning: Attempting to use hipBLASLt on an unsupported architecture! Overriding blas backend to hipblas.
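For what it’s worth, this quick check confirms that matmuls still run on the GPU after the fallback to hipBLAS (just a generic sanity check, nothing model-specific):

```python
import torch

# Sanity check: despite the hipBLASLt warning, matmuls should still run on the GPU
# through the hipBLAS fallback.
assert torch.cuda.is_available(), "ROCm build does not see the GPU"
a = torch.randn(2048, 2048, device="cuda", dtype=torch.float16)
b = torch.randn(2048, 2048, device="cuda", dtype=torch.float16)
c = a @ b
torch.cuda.synchronize()
print("matmul OK on", torch.cuda.get_device_name(0), c.shape, c.dtype)
```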
I guess I’ll just wait until you and AMD work together to enable hipBLASLt on gfx1100 GPUs.
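In the meantime, something like this might avoid the override warning by selecting the hipBLAS backend up front. I’m assuming the build exposes torch.backends.cuda.preferred_blas_library (documented in recent releases; on ROCm, "cublas" maps to hipBLAS), so treat it as a sketch rather than a confirmed workaround:

```python
import torch

# Assumption: this build exposes torch.backends.cuda.preferred_blas_library.
# On ROCm, selecting "cublas" maps to hipBLAS, so PyTorch should not try the
# hipBLASLt path (which is what triggers the warning on gfx1100).
torch.backends.cuda.preferred_blas_library("cublas")

x = torch.randn(1024, 1024, device="cuda")
y = x @ x  # should go through hipBLAS
print(y.shape, y.device)
```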