PyTorch ROCm does not recognize AMD MI250X GPUs

I’ve installed PyTorch 1.11.0 with ROCm support using pip, per the official instructions.

I’m on a system with 4x AMD MI250X GPUs (gfx90a), and `torch.cuda.is_available()` returns False. Is there a fundamental incompatibility with these particular AMD GPUs at this point?

The same installation procedure, using the same Python version and the same ROCm version, works fine on another system with an MI100. It just doesn’t work on the system with the MI250X.

The Python version is 3.6.13.
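For anyone hitting the same symptom, a small diagnostic can help narrow down whether the wheel is actually a ROCm build and what it can see. This is a hedged sketch, not an official tool: on ROCm builds `torch.version.hip` is set (it is `None` on CUDA/CPU builds), and the `torch.cuda.*` API is routed through HIP, so it works for AMD GPUs too.

```python
def rocm_diagnostics():
    """Return a short report of what PyTorch can see.

    Sketch only: assumes a pip-installed torch wheel; on a ROCm build
    torch.version.hip is a version string, on CUDA/CPU builds it is None.
    """
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    lines = [
        f"torch {torch.__version__}",
        f"HIP version: {getattr(torch.version, 'hip', None)}",
        f"cuda.is_available(): {torch.cuda.is_available()}",
        f"device_count: {torch.cuda.device_count()}",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(rocm_diagnostics())
```

If the HIP version prints as `None`, pip pulled a CUDA or CPU-only wheel rather than the ROCm one, which would explain `is_available()` returning False regardless of the GPU model.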

@Eugene_Walker, did you ever solve the problem?
I’m interested in using PyTorch on the MI250 for large-dataset training, leveraging the aggregate 128 GB of memory, and wanted to ask whether it will truly be able to use that full amount, or will in fact be limited to 64 GB on each die.
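One way to answer the memory question empirically: each MI250X package typically exposes its two GCDs (graphics compute dies) as separate devices, so enumerating per-device memory shows whether you get one 128 GB pool or two ~64 GB devices. A minimal sketch, assuming a working ROCm build of torch (it degrades gracefully when torch or GPUs are absent):

```python
def device_memory_report():
    """List (index, name, total_memory_bytes) for each visible device.

    Sketch only: on an MI250X node, expect each GCD to appear as its own
    device with roughly 64 GB, rather than one combined 128 GB device.
    """
    try:
        import torch
    except ImportError:
        return []
    report = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        report.append((i, props.name, props.total_memory))
    return report

if __name__ == "__main__":
    for idx, name, mem in device_memory_report():
        print(f"device {idx}: {name}, {mem / 2**30:.1f} GiB")
```

A single tensor allocation is bounded by one device's memory, so spreading a model or dataset across both dies takes explicit multi-device code (e.g. data parallelism) rather than happening automatically.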