I had a quick question about best practices for device agnostic coding. Some context: I prototype my code on my laptop (CPU only), before training in the cloud.
Right now, I use the following pattern:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
for everything. This works quite nicely, and I’m happy that this functionality has been added to PyTorch.
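For concreteness, here is a minimal, runnable version of that pattern (the `Linear` toy model and the shapes are just for illustration):

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2)   # toy model for illustration
model.to(device)

# Creating the input directly on `device` keeps everything device-agnostic.
x = torch.randn(8, 4, device=device)
y = model(x)                    # runs on whichever device was selected
```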
But there are situations like the following, where everything works in my local CPU environment and then fails in a GPU environment:
a = torch.tensor([1]).to(device)
b = torch.tensor([1])
c = a+b
The failure on the GPU is entirely correct, and the behaviour I would expect; the problem is that my CPU environment silently lets the bug through.
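To spell out the asymmetry: on a CPU-only machine both tensors end up on the CPU, so the addition succeeds, while on a CUDA machine `a` and `b` live on different devices and PyTorch raises a `RuntimeError`:

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

a = torch.tensor([1]).to(device)
b = torch.tensor([1])           # implicitly created on the CPU

try:
    c = a + b                   # succeeds on CPU-only, fails with CUDA
    print("ok:", c)
except RuntimeError as e:
    print("device mismatch:", e)
```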
What I would like is to be able to impose restrictions on myself that make errors like the one above fail both in my local CPU environment and in the GPU environment where I do my training.
Are there any existing solutions to this? If not, I think what I would like is an option I could enable which would force me to specify the device on which all leaf tensors live. Does such an option exist? Am I the only person who would be interested in this?
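To make the request concrete, here is a rough sketch of the kind of guard I have in mind. It is not a built-in PyTorch option; it just monkey-patches `torch.tensor` (one of several factory functions one would have to cover) so that creating a leaf tensor without an explicit `device=` raises immediately, even on a CPU-only machine:

```python
import torch

_original_tensor = torch.tensor

def strict_tensor(*args, **kwargs):
    """Hypothetical guard: refuse to create a tensor without an explicit device."""
    if "device" not in kwargs or kwargs["device"] is None:
        raise RuntimeError(
            "explicit device= required for leaf tensors"
        )
    return _original_tensor(*args, **kwargs)

torch.tensor = strict_tensor

# torch.tensor([1])                  -> RuntimeError, even on CPU
# torch.tensor([1], device="cpu")    -> fine
```

With something like this enabled during prototyping, the `b = torch.tensor([1])` line in my example above would fail locally, instead of waiting until the code hits a GPU.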