```python
import torch

cuda = torch.device("cuda")

torch.manual_seed(10)
torch.randn(3)
# always yields tensor([-0.6014, -1.0122, -0.3023])

torch.manual_seed(10)
torch.randn(3, device=cuda)
# always yields tensor([-0.1029, 1.6810, -0.2708], device='cuda:0')
```
As you can see, the random numbers are different depending on the device, even though I use the same seed. Isn't this a problem for reproducibility? Depending on which device a machine runs the code on, the random numbers will differ. Shouldn't the same seed produce the same values across any set of devices?
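One workaround I've seen sketched (not an official PyTorch recommendation, and the helper name `seeded_randn` is just something I made up here): generate the tensor on the CPU with a dedicated `torch.Generator`, then move it to the target device. That way the values depend only on the seed, not on which device's RNG engine is used:

```python
import torch

def seeded_randn(shape, seed, device="cpu"):
    # Draw on the CPU with an explicit generator, then transfer,
    # so results are identical regardless of the target device.
    g = torch.Generator(device="cpu").manual_seed(seed)
    return torch.randn(shape, generator=g, device="cpu").to(device)

a = seeded_randn((3,), 10)
# Falls back to CPU when no GPU is present, purely for illustration.
b = seeded_randn((3,), 10, device="cuda" if torch.cuda.is_available() else "cpu")
assert torch.equal(a, b.cpu())  # same values on every device
```

The trade-off is an extra host-to-device copy, but it sidesteps the device-specific RNG entirely.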