In "Manual seed cannot make dropout deterministic on CUDA for Pytorch 1.0 preview version", it is mentioned that seeding does not guarantee reproducibility when a model contains modules such as nn.Dropout. In that thread, a suggested workaround is to use torch.set_rng_state(); a sketch of that idea is shown below.
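For concreteness, here is a minimal sketch of the set_rng_state() idea as I understand it: capture the CPU and current-GPU RNG states, then restore them before the section that should replay the same random draws. The exact placement around the model's forward pass is my assumption, not something spelled out in the thread.

import torch

# Capture the RNG states before the non-deterministic section runs.
cpu_state = torch.get_rng_state()
cuda_state = torch.cuda.get_rng_state()  # state of the current GPU only

first = torch.rand(4, device='cuda')  # consumes RNG state

# Restore the saved states so the same random numbers are drawn again.
torch.set_rng_state(cpu_state)
torch.cuda.set_rng_state(cuda_state)
second = torch.rand(4, device='cuda')

assert torch.equal(first, second)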
Here are my findings:
- LeNet5 does not contain a dropout layer, yet seeding still failed to give reproducible results. The seeds are configured as below:
import os

import numpy as np
import torch
import torch.backends.cudnn as cudnn

np.random.seed(args.seed)                # Set seed for NumPy (args.seed comes from argparse)
os.environ['PYTHONHASHSEED'] = str(args.seed)  # Disable Python hash randomization
# Set seeds for PyTorch
torch.manual_seed(args.seed)             # Set seed for CPU
torch.cuda.manual_seed(args.seed)        # Set seed for the current GPU
torch.cuda.manual_seed_all(args.seed)    # Set seed for all the GPUs
cudnn.benchmark = False                  # Disable the cuDNN auto-tuner, which may pick different kernels per run
cudnn.deterministic = True               # Force cuDNN to use deterministic algorithms
- When torch.nn.DataParallel is used, the results vary noticeably more across runs; a sketch of how the model is wrapped is shown below.
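For reference, this is roughly the wrapping in question (LeNet5 here is a placeholder for the actual model construction in my script):

import torch
import torch.nn as nn

model = LeNet5()  # placeholder for the actual model construction
if torch.cuda.device_count() > 1:
    # Replicate the model across all visible GPUs; each replica processes
    # a slice of the batch, and gradients are reduced on the default GPU.
    model = nn.DataParallel(model)
model = model.cuda()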
I would be grateful if you could explain why these behaviors occur.