CUDA is only available on NVIDIA devices, but you can get GPU support on AMD hardware via ROCm. On the Get Started page, just select the ROCm compute platform instead of CUDA.
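For illustration, the selector produces an install command along these lines (the exact ROCm version tag changes between releases, so treat this as a sketch and copy the actual command from the Get Started page):

```shell
# Hypothetical example: install a ROCm build of PyTorch from the ROCm wheel index.
# The "rocm6.0" tag is an assumption; use whatever the Get Started page shows.
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.0
```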
I tried the ROCm Docker image, but what I can't figure out is how to check that I'm actually using the GPU. With CUDA I can call torch.cuda.device_count(); how do I do the same with ROCm?
I also get this error when trying to check devices:
python -c 'import torch; print(torch.cuda.is_available());'
/opt/conda/lib/python3.7/site-packages/torch/cuda/__init__.py:82:
UserWarning: HIP initialization: Unexpected error from hipGetDeviceCount().
Did you run some cuda functions before calling NumHipDevices() that might have already set an error?
Error 101: hipErrorInvalidDevice (Triggered internally at /var/lib/jenkins/pytorch/c10/hip/HIPFunctions.cpp:110.)
return torch._C._cuda_getDeviceCount() > 0
False
It seems you can just use the CUDA-equivalent calls and PyTorch will know it's running on ROCm instead (see here). You might also want to check whether your AMD GPU is supported here.
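In other words, ROCm builds of PyTorch expose HIP devices through the familiar torch.cuda namespace, so the usual checks work unchanged. A minimal sketch, assuming a ROCm build of PyTorch is installed:

```python
import torch

# On a ROCm build, torch.version.hip is set (and torch.version.cuda is None);
# on a CUDA build it's the other way around.
print("HIP version:", torch.version.hip)

# The torch.cuda namespace is aliased to HIP on ROCm builds,
# so these calls report AMD GPUs as well.
print("GPU available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())

if torch.cuda.is_available():
    print("Device name:", torch.cuda.get_device_name(0))
```

If torch.cuda.is_available() still returns False inside the container (as in the error above), that usually points at the GPU not being visible to the container or the card not being on the supported-hardware list, rather than at the Python-side API.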