Trying to run CUDA on an AMD GPU using ROCm

I'm unable to run any of the usual CUDA calls in PyTorch, such as torch.cuda.is_available() or tensor.to("cuda"), using the ROCm build.

I installed it using the rocm/pytorch Docker image from Docker Hub.

Pulling the image:

docker pull rocm/pytorch

Running the container:

docker run -i -t 6b8335f798a5 /bin/bash
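
I have not yet passed the GPU devices into the container explicitly. If that is what's missing, I assume (based on the rocm/pytorch image description) the run command would look roughly like this, with /dev/kfd and /dev/dri forwarded and the video group added:

docker run -i -t --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined --group-add video 6b8335f798a5 /bin/bash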

I assumed the usual CUDA-style GPU commands would work directly on ROCm, but that doesn't seem to be the case. Any help would be appreciated.

>>> import torch
>>> torch.cuda.is_available()
False
>>> torch.tensor([1.0, 2.0, 3.0]).to("cuda")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
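
For reference, I can also run this quick check inside the container to see which backend the wheel was built with. My understanding is that torch.version.hip should be a version string on a ROCm build and None otherwise:

import torch

print(torch.__version__)          # version string of the wheel shipped in the image
print(torch.version.hip)          # expected to be a HIP version string on ROCm builds, None otherwise
print(torch.cuda.is_available())  # on ROCm, the torch.cuda namespace is reused via HIP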