torch.cuda.is_available() fails with segmentation fault on ROCm

Hello, I hit an issue when I try to run torch.cuda.is_available(): it crashes with a Segmentation fault (core dumped) message. I am not very experienced with Linux, so I don't really know the cause of this.

GPU: AMD RX 580 8 GB
CPU: AMD Ryzen 5 2600
OS: Ubuntu 22.04 (dual boot with Windows 10)
RAM: 24 GB total

pip list:

torch          2.0.1+rocm5.4.2
torchaudio     2.0.2+rocm5.4.2
torchvision    0.15.2+rocm5.4.2

ROCm Version: 5.5.0

apt show rocm-libs -a:

Package: rocm-libs
Priority: optional
Section: devel
Maintainer: ROCm Libs Support <>
Installed-Size: 13.3 kB
Depends: hipblas (=, hipfft (=, hipsolver (=, hipsparse (=, miopen-hip (=, rccl (=, rocalution (=, rocblas (=, rocfft (=, rocrand (=, rocsolver (=, rocsparse (=, rocm-core (=, hipblas-dev (=, hipcub-dev (=, hipfft-dev (=, hipsolver-dev (=, hipsparse-dev (=, miopen-hip-dev (=, rccl-dev (=, rocalution-dev (=, rocblas-dev (=, rocfft-dev (=, rocprim-dev (=, rocrand-dev (=, rocsolver-dev (=, rocsparse-dev (=, rocthrust-dev (=, rocwmma-dev (=
Download-Size: 1,004 B
APT-Manual-Installed: yes
APT-Sources: jammy/main amd64 Packages
Description: Radeon Open Compute (ROCm) Runtime software stack

Script (I followed the verification tab on the PyTorch site):

import torch
torch.cuda.is_available()

Python version: 3.10.6
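In case it helps others debug the same crash: a segfault inside a C extension normally kills the interpreter with no Python traceback, but the standard-library faulthandler module can dump one on SIGSEGV, showing which call actually crashed. A minimal sketch of the mechanism (it deliberately does not import torch, so it runs even on a broken install):

```python
# faulthandler installs signal handlers that print the Python stack
# when the process receives SIGSEGV, SIGFPE, SIGABRT, or SIGBUS.
import faulthandler

faulthandler.enable()                 # install the crash handlers
print(faulthandler.is_enabled())      # → True once enabled
```

With that in place (or simply by running the unchanged script as `python -X faulthandler script.py`), the segfault should print a traceback pointing at the failing call instead of a bare "Segmentation fault (core dumped)".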

Thanks in advance for any help.

It looks like this issue may be related: rocm/pytorch:latest Segmentation fault · Issue #1930 · RadeonOpenCompute/ROCm · GitHub

Specifically, this solution: rocm/pytorch:latest Segmentation fault · Issue #1930 · RadeonOpenCompute/ROCm · GitHub, which suggests that you might need to rebuild PyTorch for your specific GPU architecture.
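For reference, the rebuild that issue points at looks roughly like the sketch below. This assumes the RX 580 maps to the gfx803 (Polaris) architecture, which recent ROCm releases reportedly no longer ship prebuilt kernels for, and that these environment variables still apply to current PyTorch sources; please verify the exact steps against the issue thread before relying on this.

```shell
# Hedged sketch: build PyTorch from source targeting Polaris (gfx803).
export PYTORCH_ROCM_ARCH=gfx803   # build kernels only for this architecture
export ROC_ENABLE_PRE_VEGA=1      # runtime flag for pre-Vega GPUs

# From inside a PyTorch source checkout:
python3 tools/amd_build/build_amd.py   # "hipify" the CUDA sources for ROCm
python3 setup.py install
```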