Mock torch device for unit testing

I'm interested in mocking a torch device like torch.device('cuda') but with a CPU instead. That way I can configure my unit tests to check whether devices are being mapped properly in my program (i.e. catch errors such as "Expected all tensors to be on the same device, but found at least two devices").

Ideally I would want to simply make a dummy device like this:

import torch

# create two (hypothetical) dummy devices
dv1 = torch.device('dummy1')
dv2 = torch.device('dummy2')

# map to devices
a = torch.rand(1).to(dv1)
b = torch.rand(1).to(dv2)

# raises RuntimeError
a + b

Right now the best way I've found is to use my own GPU to test for these situations, but that's a bit overkill, and it also causes issues if another program is using the GPU or no GPU is available. Is there any functionality like this in PyTorch, or some trick to get this to work?

I'd also like to mention that I found a similar thread, but the solution recommended there uses a library which literally simulates the GPU, which again is a bit much for what I want to do. Somewhat related problem here: GPU Emulator for CUDA programming without the hardware - Stack Overflow


I’m also interested in a solution for this. @Daniel_Crawford have you found a solution?

Unfortunately I did not find anything that lets you test in a way similar to this. I haven't looked in months though, so maybe something has changed.

I learned about the "meta" device today; maybe it solves your goals. See Allow creation of pseudo devices for testing purposes · Issue #61654 · pytorch/pytorch · GitHub
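
For the device-mismatch use case above, here is a minimal sketch, assuming binary ops enforce device matching for meta tensors the same way they do for cpu/cuda:

import torch

# "meta" tensors carry shape/dtype/device metadata but allocate no storage,
# so they can stand in for a second device in unit tests
a = torch.rand(1, device="meta")
b = torch.rand(1)  # defaults to cpu

# mixing devices should raise a RuntimeError about mismatched devices
a + b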

Even better, macOS users can use the "mps" device (Metal Performance Shaders). It seems to be intended for M1+ Macs, but on my Intel MacBook Pro with an AMD GPU it's also available. Note the torch.has_mps boolean, and test with .to(device="mps").
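
For example (torch.backends.mps.is_available() is the availability check in recent releases; torch.has_mps is the older boolean mentioned above):

import torch

# move a tensor to mps only when the backend is actually available
if torch.backends.mps.is_available():
    x = torch.rand(1).to(device="mps")
    print(x.device)  # mps:0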

There's a problem with "meta" when you try more diverse PyTorch code, e.g.:

      return x.detach().to(device="cpu").numpy()

E NotImplementedError: Cannot copy out of meta tensor; no data!
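
A self-contained reproduction of the same failure (a sketch; the point is that a meta tensor has no storage to copy from):

import torch

x = torch.rand(1, device="meta")
# fails: there is no underlying data to copy to cpu
x.detach().to(device="cpu").numpy()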