torch.cuda.is_available() fails with segmentation fault on ROCm

Hello, I'm running into an issue when I try to run torch.cuda.is_available(): it crashes with a "Segmentation fault (core dumped)" message. I am not very experienced with Linux, so I don't really know the cause of this.

Specs:

GPU: AMD RX 580 8 GB
CPU: AMD Ryzen 5 2600
OS: Ubuntu 22.04, dual boot with Windows 10
RAM: 24 GB total

pip list:

torch                  2.0.1+rocm5.4.2
torchaudio             2.0.2+rocm5.4.2
torchvision            0.15.2+rocm5.4.2

ROCm Version: 5.5.0

apt show rocm-libs -a:

Package: rocm-libs
Version: 5.5.0.50500-63~22.04
Priority: optional
Section: devel
Maintainer: ROCm Libs Support <rocm-libs.support@amd.com>
Installed-Size: 13.3 kB
Depends: hipblas (= 0.54.0.50500-63~22.04), hipfft (= 1.0.11.50500-63~22.04), hipsolver (= 1.7.0.50500-63~22.04), hipsparse (= 2.3.6.50500-63~22.04), miopen-hip (= 2.19.0.50500-63~22.04), rccl (= 2.15.5.50500-63~22.04), rocalution (= 2.1.8.50500-63~22.04), rocblas (= 2.47.0.50500-63~22.04), rocfft (= 1.0.21.50500-63~22.04), rocrand (= 2.10.16.50500-63~22.04), rocsolver (= 3.21.0.50500-63~22.04), rocsparse (= 2.5.1.50500-63~22.04), rocm-core (= 5.5.0.50500-63~22.04), hipblas-dev (= 0.54.0.50500-63~22.04), hipcub-dev (= 2.10.12.50500-63~22.04), hipfft-dev (= 1.0.11.50500-63~22.04), hipsolver-dev (= 1.7.0.50500-63~22.04), hipsparse-dev (= 2.3.6.50500-63~22.04), miopen-hip-dev (= 2.19.0.50500-63~22.04), rccl-dev (= 2.15.5.50500-63~22.04), rocalution-dev (= 2.1.8.50500-63~22.04), rocblas-dev (= 2.47.0.50500-63~22.04), rocfft-dev (= 1.0.21.50500-63~22.04), rocprim-dev (= 2.10.9.50500-63~22.04), rocrand-dev (= 2.10.16.50500-63~22.04), rocsolver-dev (= 3.21.0.50500-63~22.04), rocsparse-dev (= 2.5.1.50500-63~22.04), rocthrust-dev (= 2.10.9.50500-63~22.04), rocwmma-dev (= 0.7.0.50500-63~22.04)
Homepage: https://github.com/RadeonOpenCompute/ROCm
Download-Size: 1,004 B
APT-Manual-Installed: yes
APT-Sources: https://repo.radeon.com/rocm/apt/5.5 jammy/main amd64 Packages
Description: Radeon Open Compute (ROCm) Runtime software stack

test.py script (following the verification step from the PyTorch installation instructions):

import torch
torch.cuda.is_available()
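
In case it is useful, here is a rough sketch of a couple of extra checks that could narrow this down: rocminfo ships with the ROCm packages and reports the GPU architecture (an RX 580 should show up as gfx803), and torch.version.hip only reads build metadata, so it should run even when is_available() itself segfaults.

# Confirm which GPU architecture the ROCm runtime reports (expect gfx803 for an RX 580).
rocminfo | grep -i gfx

# Confirm which HIP/ROCm version the installed PyTorch wheel was built against.
python3 -c "import torch; print(torch.__version__, torch.version.hip)"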

Python version: 3.10.6

Thanks in advance for any help.

It looks like this issue may be related: rocm/pytorch:latest Segmentation fault · Issue #1930 · RadeonOpenCompute/ROCm · GitHub

Specifically, this solution: rocm/pytorch:latest Segmentation fault · Issue #1930 · RadeonOpenCompute/ROCm · GitHub, which suggests that you may need to rebuild PyTorch for your specific GPU architecture.
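
For reference, a rough sketch of what a source rebuild targeting the RX 580's gfx803 architecture could look like, assuming the standard ROCm build flow from the PyTorch repository (PYTORCH_ROCM_ARCH is the variable that restricts the build to a given GPU architecture; defer to the linked issue and the PyTorch README for the exact steps):

# Hedged sketch of a from-source ROCm build for gfx803 (Polaris / RX 580).
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt

# Convert the CUDA sources to HIP, as required for ROCm builds.
python3 tools/amd_build/build_amd.py

# Build and install, restricting the target to the RX 580's architecture.
export USE_ROCM=1
export PYTORCH_ROCM_ARCH=gfx803
python3 setup.py install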