Manual seed cannot make dropout deterministic on CUDA for PyTorch 1.0 preview version

It has been there in one form or another for quite a while: https://github.com/pytorch/pytorch/pull/1762
The last I heard, it had been improved to "lazily" seed all GPUs instead of doing so at call time.

Apparently something doesn't work with (cuda) manual_seed, though. To decide whether dropout or seeding is the source of the error, I checked two things:

  • manual_seed also doesn't make bernoulli deterministic, so the problem isn't in dropout itself.
  • When using set_rng_state instead, you do get reproducible random numbers.
import torch
import torch.nn.functional as F

seed = 1
l = torch.cuda.get_rng_state()

print("manual_seed - different each time")
for i in range(3):
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    # if manual_seed worked, the CUDA RNG state after seeding
    # would be identical on every iteration
    lastl = l
    l = torch.cuda.get_rng_state()
    print((l == lastl).all())
    a = torch.bernoulli(torch.full((3, 3), 0.5, device='cuda'))
    print(a)

print("set_rng_state - same each time")
for i in range(3):
    # restoring a saved state does give reproducible numbers
    torch.cuda.set_rng_state(l)
    a = torch.bernoulli(torch.full((3, 3), 0.5, device='cuda'))
    print(a)
    b = F.dropout(torch.ones(3, 3, device='cuda'))
    print(b)
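
In the meantime, here is a minimal workaround sketch based on the set_rng_state observation above: seed once, snapshot the CUDA RNG state, and restore that snapshot before every run. This assumes a single-GPU setup, and the helper name restore_and_sample is hypothetical.

import torch
import torch.nn.functional as F

# Workaround sketch: snapshot the CUDA RNG state once and restore it
# before each run, instead of relying on torch.cuda.manual_seed.
torch.manual_seed(1)
snapshot = torch.cuda.get_rng_state()

def restore_and_sample():
    # restoring the saved state is what made the runs above reproducible
    torch.cuda.set_rng_state(snapshot)
    return F.dropout(torch.ones(3, 3, device='cuda'))

print(restore_and_sample())
print(restore_and_sample())  # prints the same mask as the first call

Note that this pins the entire random stream, so each run must consume random numbers in the same order.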

If you mention me (t-vi on GitHub) in a bug report, I'll try to figure out what's going wrong and produce a fix.

Best regards

Thomas
