How can I run torch with an AMD GPU?

I usually run my models on an Nvidia GPU and I had no problem with torch detecting it.
Now I have this GPU:

lspci | grep VGA
75eb:00:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 [Radeon Instinct MI25 MxGPU]

and I’m trying to understand how to make it visible to torch:

import torch
torch.cuda.is_available()
False

How can I use it with torch?

I’d say buying an AMD GPU is a bad decision, as DL is dominated by Nvidia.

There is some info here. Good luck!

CUDA is only available for NVIDIA devices. However, you can get GPU support by using ROCm. Just go to the Get Started page and select the ROCm option rather than CUDA.
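For example, the selector will give you a pip command along these lines (the exact index URL and ROCm version tag change between releases, so treat this as a sketch and copy the command the page actually shows):

```shell
# Install a ROCm build of PyTorch from the dedicated wheel index.
# "rocm5.6" is an example tag -- use the one the Get Started page lists
# for your installed ROCm version.
pip3 install torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/rocm5.6
```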

I tried the ROCm Docker image, but what I can’t understand is how to check that I am actually using the GPU. With CUDA I can do torch.cuda.device_count(), but how do I do the same with ROCm?

I also get this error when trying to check for devices:

python -c 'import torch; print(torch.cuda.is_available());'
/opt/conda/lib/python3.7/site-packages/torch/cuda/__init__.py:82: 
UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount(). 
Did you run some cuda functions before calling NumHipDevices() that might have already set an error? 
Error 101: hipErrorInvalidDevice (Triggered internally at  /var/lib/jenkins/pytorch/c10/hip/HIPFunctions.cpp:110.)
  return torch._C._cuda_getDeviceCount() > 0
False

So it seems you should just be able to use the CUDA-equivalent commands, and PyTorch will know it’s using ROCm instead (see here). You might also want to check whether your AMD GPU is supported here.
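Concretely, on a ROCm build the usual torch.cuda calls work unchanged, so a quick check could look like this (a sketch, assuming a working ROCm build of PyTorch with a supported GPU):

```python
import torch

# On ROCm builds, torch.cuda is backed by HIP, so the familiar
# CUDA-style queries answer for the AMD GPU.
print(torch.cuda.is_available())   # True if the GPU is visible
print(torch.cuda.device_count())   # number of visible GPUs

if torch.cuda.is_available():
    # Reports the AMD device name, e.g. for the Radeon Instinct MI25.
    print(torch.cuda.get_device_name(0))
    # The device-agnostic pattern works too: "cuda" maps to the AMD GPU.
    x = torch.ones(3, device="cuda")
    print(x.device)
```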

But it seems that PyTorch can’t see your AMD GPU.

Did you install ROCm?
If a ROCm build of PyTorch is installed and your GPU is supported, torch.cuda.is_available() should return True.
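One way to tell which build you actually have: torch.version.cuda holds a version string on CUDA builds and torch.version.hip holds one on ROCm builds; the other is None. For example:

```python
import torch

# A CUDA build reports a version string in torch.version.cuda;
# a ROCm build reports one in torch.version.hip. The other is None.
print("CUDA:", torch.version.cuda)
print("HIP: ", torch.version.hip)
```

If both print None, you have a CPU-only build and need to reinstall from the ROCm wheel index.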