I'm interested in mocking a torch device like torch.device('cuda'), but backed by the CPU instead. This way I can configure my unit tests to check whether devices are being mapped properly in my program (i.e. catch errors such as "Expected all tensors to be on the same device, but found at least two devices").
Ideally I would want to simply make a dummy device like this:
import torch

# create two dummy devices (this device type doesn't actually exist --
# it's the API I wish I had)
dv1 = torch.device('dummy1')
dv2 = torch.device('dummy2')
# map a tensor to each device
a = torch.rand(1).to(dv1)
b = torch.rand(1).to(dv2)
# mixing devices should raise RuntimeError
a + b
Right now the best way I've found is to use my own GPU to test for these situations, but it's overkill, and it also causes issues if another program is using the GPU or no GPU is available. Is there any functionality like this in PyTorch, or some trick to get this to work?
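The closest thing I've found so far (not sure it covers every case, so treat this as a sketch) is PyTorch's built-in meta device. Meta tensors carry only shape/dtype/device metadata and allocate no data, so they don't need any hardware, but the usual same-device checks still fire when you mix them with CPU tensors:

```python
import torch

# A "meta" tensor acts like a tensor on a phantom device: no data, no GPU
# required, but it still participates in device-mismatch checks.
a = torch.rand(1, device="meta")  # stand-in for a tensor on another device
b = torch.rand(1)                 # ordinary CPU tensor

try:
    a + b  # mixes the meta device with cpu
    raised = False
except RuntimeError:
    raised = True

print(raised)
```

In a unit test I can then move model inputs to "meta" and assert that a RuntimeError is raised whenever my code forgets to map something, without ever touching a real GPU. The limitation is that there is only one meta device, so it can't distinguish two different fake GPUs the way the dummy1/dummy2 example above would.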
I'd also like to mention that I found a similar thread, but the solution recommended there uses a library that literally simulates a GPU, which again is more than I need. Somewhat related problem here: GPU Emulator for CUDA programming without the hardware - Stack Overflow