Option to force device specification for leaf tensors?

I had a quick question about best practices for device agnostic coding. Some context: I prototype my code on my laptop (CPU only), before training in the cloud.

Right now, I use the following pattern

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

for everything. This works quite nicely, and I’m happy that this functionality has been added to PyTorch.

But there are situations like the following where everything will work well on my local CPU environment, then fail in a GPU environment.

a = torch.tensor([1]).to(device)
b = torch.tensor([1])
c = a + b

which is entirely correct, and the behaviour I would expect.

What I would like is to be able to impose restrictions on myself which ensure that errors like the one above fail both in my local CPU environment and in the GPU environment where I do my training.

Are there any existing solutions to this? If not, I think what I would like is an option I could enable which would force me to specify the device on which all leaf tensors live. Does such an option exist? Am I the only person who would be interested in this?
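In the meantime, one self-imposed workaround is a small project-local factory that refuses to create a tensor unless the caller names a device explicitly, so the omission fails loudly on CPU too. This is only a sketch of the idea: the name make_tensor is hypothetical, and a plain tuple stands in for the tensor so the snippet runs without torch installed; in a real codebase the body would call torch.tensor(data, device=device).

```python
def make_tensor(data, device=None):
    # Hypothetical project-local factory: force every leaf "tensor" to
    # carry an explicit device. Raising here means the mistake is caught
    # on a CPU-only laptop, not just in the GPU training environment.
    if device is None:
        raise TypeError("make_tensor: an explicit device is required")
    # Stand-in for torch.tensor(data, device=device), so the sketch
    # runs without torch installed.
    return (tuple(data), device)

# Explicit device works; omitting it fails loudly everywhere.
t = make_tensor([1, 2, 3], device="cpu")
try:
    make_tensor([1, 2, 3])
except TypeError as e:
    print("caught:", e)
```

The same discipline could be enforced with a lint rule or a code-review convention instead of a wrapper; the wrapper just makes the failure mechanical.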

The canonical answer is to test in an environment that better resembles the target.

There are 19 other people who want this. However, there are reservations about the feature request, because it would replace an obvious way to get things wrong with a more subtle one.

Best regards

Thomas

Thanks for the reply. I agree that setting the default to the GPU could be problematic, so I was careful in my original post not to suggest this.

I already posted what's below in the GitHub issue you linked, but I'm posting it here too for the sake of continuity of this thread:

What about things like torch.set_default_device() and torch.get_default_device(), where the default device type could initially be the CPU?

There is already similar functionality for dtypes with torch.set_default_tensor_type() and torch.get_default_dtype().
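To illustrate the shape of what I'm proposing, here is a minimal pure-Python sketch of a module-level default device with set/get accessors, initialised to the CPU, mirroring the dtype precedent. The names follow the suggestion above but this is not an actual PyTorch API, just an illustration of the intended behaviour.

```python
# Hypothetical sketch of the proposed API: a process-wide default
# device, initially "cpu", that tensor factories would consult when
# no device is given explicitly.

_default_device = "cpu"

def set_default_device(device):
    """Set the device used when a leaf tensor is created without one."""
    global _default_device
    _default_device = device

def get_default_device():
    """Return the current default device."""
    return _default_device

print(get_default_device())   # cpu
set_default_device("cuda:0")
print(get_default_device())   # cuda:0
```

Under this scheme, existing CPU-only code would keep working unchanged, while anyone targeting a GPU could opt in once at the top of their script.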