Simulate GPU for testing?

I’m often in a situation where my code runs fine on my local machine (without a GPU), but I’d like to test that all the tensors are moved to the correct device when a GPU is present. I want to test this before I commit my code, but doing so would require deploying it to a remote machine, and those machines are usually running something I don’t want to interrupt, so I have to wait until they’re finished before testing and committing. Of course, there are ways I could change my commit workflow, but ideally I would just be able to test the code locally by simulating that a GPU is present. Is there any way to do this?

I’m not aware of a way to simulate a GPU, but you could spin up an AWS instance to try it out or use a Colab notebook, although I’m not sure about the licensing.

Thanks! Not quite the answer I wanted to hear, but probably the correct one nonetheless. Also, I didn’t know about Colab, and that looks pretty neat, so thank you!

They provide a free GPU for, afaik, 24 hours. After that you would need to restart your notebook.
Because of that, I would look closely at the license and intellectual property terms if you are using code written for your employer.

Alternatively, you could try one of the programs listed in this thread.
However, I’m not sure whether any of these options can be used from Python. If not, you might be able to use them together with torch.jit, as mentioned in Road to 1.0.


Maybe the meta device is what you need.

>>> a = torch.ones(2) # defaults to cpu
>>> b = torch.ones(2, device='meta')
>>> c = a + b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, meta and cpu!
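To sketch how this helps with the original question: tensors on the meta device carry shape and dtype but allocate no storage, so device-mismatch bugs like the one above surface on a machine without any GPU. A minimal sketch (the `forward_sketch` function and the shapes are made up for illustration; they are not from the original post):

```python
import torch

def forward_sketch(x, weight):
    # hypothetical model step -- every tensor here must live on the same
    # device, otherwise PyTorch raises the "Expected all tensors to be on
    # the same device" RuntimeError shown above
    return x @ weight

# meta tensors track shape and dtype only, so no GPU (or memory) is needed
x = torch.ones(4, 3, device='meta')
w = torch.ones(3, 2, device='meta')

out = forward_sketch(x, w)
# the result stays on the meta device and has the expected shape
print(out.device.type, tuple(out.shape))
```

If some tensor inside the model was accidentally created on the CPU (e.g. via a bare `torch.ones(...)` with no `device` argument), the matmul would raise the same device-mismatch error, which is exactly the bug you want to catch before deploying.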