I set torch.backends.cudnn.deterministic = True in addition to:

if torch.cuda.is_available(): torch.cuda.manual_seed_all(999)

but accuracy for the same model and same data still varies considerably across runs. I've even tried duplicating the above in the code and switching to the latest version of PyTorch (0.3.1), but I'm still getting the same variability in accuracy across runs for the same model and data. Weird.
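For reference, a sketch collecting the usual seeding calls in one place (the Python random and NumPy seeding go beyond what I quoted above; they're common additions rather than something I've confirmed matters here):

```python
import random

import numpy as np
import torch

seed = 999
random.seed(seed)                 # Python's built-in RNG
np.random.seed(seed)              # NumPy RNG (used by many transforms/samplers)
torch.manual_seed(seed)           # CPU RNG; on recent versions also seeds CUDA
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)        # all CUDA devices, for older versions
torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False      # disable non-deterministic autotuning
```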
I was following this post because I ran into the same issues training an autoencoder. I don't know if the OP has solved the problem, but I did a test last night on an AWS GPU with CUDA enabled, and the parameters below gave me consistent results:

torch.backends.cudnn.deterministic = True
torch.manual_seed(999)
Furthermore, I explicitly call model.eval() after training when computing the encoder and decoder outputs.
In contrast, when I used the settings below, the results were inconsistent:

torch.backends.cudnn.deterministic = True
torch.cuda.manual_seed_all(999)
As a poster above mentioned, it seems that torch.manual_seed() applies to both CUDA and CPU devices in the latest version. So if you're not getting consistent results with torch.cuda.manual_seed_all, try just torch.manual_seed. This may depend on the PyTorch version you have installed… Hope this helps.
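A quick sketch you can run to check which behavior your install has (999 is just the example seed from above):

```python
import torch

torch.manual_seed(999)
cpu_draw = torch.randn(3)
if torch.cuda.is_available():
    gpu_draw = torch.randn(3, device="cuda")

torch.manual_seed(999)  # reseed and repeat the draws
print(torch.equal(cpu_draw, torch.randn(3)))  # True: CPU RNG reseeded
if torch.cuda.is_available():
    # True on versions where manual_seed also seeds the CUDA RNG
    print(torch.equal(gpu_draw, torch.randn(3, device="cuda")))
```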
num_workers = 0 and torch.backends.cudnn.enabled = False are what actually worked for me! I also noticed that if you run one training step 10 times with only num_workers = 0 set, you get exactly the same output 8 times and a different output 2 times.
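A minimal sketch of those two settings in context (the dataset, shapes, and seed here are placeholders, not my actual setup):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

torch.backends.cudnn.enabled = False  # bypass cuDNN's non-deterministic kernels
torch.manual_seed(999)

dataset = TensorDataset(torch.randn(100, 3), torch.randn(100, 8))  # placeholder data
loader = DataLoader(dataset, batch_size=10, shuffle=True,
                    num_workers=0)  # single-process loading: no per-worker RNG drift
```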
I also created a linear layer and inspected nn.Linear(3, 8).weight. Re-running nn.Linear(3, 8).weight gives me different weight values each time. I think this is why you are seeing fluctuations in your results.
Could you explain your use case a bit more?
If you are rerunning nn.Linear(3, 8).weight, you'll create new layers with newly initialized parameters, so different values are expected; each construction draws a fresh random initialization.
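A small sketch to illustrate (shapes match your example; the seed is arbitrary):

```python
import torch
import torch.nn as nn

# Two separate constructions draw two different random initializations.
print(torch.equal(nn.Linear(3, 8).weight, nn.Linear(3, 8).weight))  # False

# Reseeding before each construction makes the initialization repeatable.
torch.manual_seed(999)
w1 = nn.Linear(3, 8).weight.detach().clone()
torch.manual_seed(999)
w2 = nn.Linear(3, 8).weight.detach().clone()
print(torch.equal(w1, w2))  # True
```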