Random seed initialization

I added:

torch.backends.cudnn.deterministic = True in addition to:

torch.manual_seed(999) and

if torch.cuda.is_available(): torch.cuda.manual_seed_all(999)

but accuracy for the same model and same data still varies considerably across runs. I’ve even tried duplicating the lines above in the code, and tried switching to the latest PyTorch release (0.3.1), but I’m still getting the same variability in accuracy across runs. Weird.

Hi,

I’m having the same issue, did you figure out a way to make the results consistent across runs?

Thanks,
Amir

Hi,

Have you figured out how to make the results reproducible now?

Thanks,
Darren

Same problem here on PyTorch 0.4. I am using RReLU, and even though I’ve set all the flags mentioned above, results differ by roughly +/- 0.5% from run to run.

What’s the number of workers for your dataloader? The following post might be helpful for deterministic results.


I was following this post because I ran into the same issues training an autoencoder. I don’t know if the OP has solved the problem, but I ran a test last night on an AWS GPU with CUDA enabled, and the settings below gave me consistent results.
torch.backends.cudnn.deterministic = True
torch.manual_seed(999)

Further, I explicitly call model.eval() after training when running the encoder and decoder.

Alternatively, when I used only the following, the results were inconsistent:
torch.backends.cudnn.deterministic = True
torch.cuda.manual_seed_all(999)

As an above poster mentioned, it seems that torch.manual_seed() seeds both the CUDA and CPU RNGs in the latest version. So if you’re not getting consistent results with torch.cuda.manual_seed_all, try just torch.manual_seed. This may depend on the PyTorch version you have installed. Hope this helps.
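A quick way to check this on your own installation (a minimal sketch, CPU-only so it runs anywhere): reseed with the same value before two draws and compare them.

```python
import torch

# torch.manual_seed seeds the CPU RNG and, on recent PyTorch versions,
# all CUDA RNGs as well, so the same seed reproduces the same draws.
torch.manual_seed(999)
a = torch.randn(3)

torch.manual_seed(999)  # reseed before the second draw
b = torch.randn(3)

print(torch.equal(a, b))  # True: identical seed, identical samples
```

If this prints True on CPU but your GPU runs still diverge, the nondeterminism is coming from cuDNN kernels or the dataloader, not from the seed itself.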


Good info.

The docs also suggest setting: torch.backends.cudnn.benchmark = False

and remember that Numpy should be seeded as well.

–> Randomness [Docs]
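Put together, the suggestions so far amount to something like the helper below (a sketch; seed_everything is a name of our own, not a PyTorch API):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int) -> None:
    """Seed Python, NumPy and PyTorch RNGs, and make cuDNN deterministic."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds CPU (and CUDA RNGs on recent versions)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

seed_everything(42)
```

Call it once at the very top of the script, before any model or dataloader is created.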


Did anyone solve this yet?

It sounds like there is a related question here.

Anyway, I think this can be a solution:

import random
import numpy as np
import torch

manualSeed = 1

np.random.seed(manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
# if you are using a GPU
torch.cuda.manual_seed(manualSeed)
torch.cuda.manual_seed_all(manualSeed)


torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

I also set num_workers = 0 in the dataloader.

Based on here, you also need to set worker_init_fn as follows:

def _init_fn(worker_id):
    # worker_init_fn is called with the worker's id in each worker process
    np.random.seed(manualSeed)


loader = data.DataLoader(..., batch_size=...,
                         collate_fn=...,
                         num_workers=...,
                         shuffle=...,
                         pin_memory=...,
                         worker_init_fn=_init_fn)


I noticed that if we don’t set torch.backends.cudnn.enabled = False, the results are very close, but sometimes they don’t match :hushed:
P.S. I’m using PyTorch 1.0.1.


Thanks!

num_workers = 0 and torch.backends.cudnn.enabled = False are what really works! I also saw that if you train one step 10 times with only num_workers = 0 set, you get exactly the same output 8 times and a different output 2 times.

np.random.seed(0)
random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

and setting dataloader like the following:

torch.utils.data.DataLoader(training, shuffle=True, batch_size=BATCH_SIZE,
                            worker_init_fn=lambda worker_id: np.random.seed(0),
                            num_workers=0)

(Note: passing worker_init_fn=np.random.seed(0) directly would call the seed function once and pass None as the init function; it needs to be wrapped in a callable as above.)

WORKED FOR ME!

I am using PyTorch version 1.0.0.


I tried exactly the same settings, even with torch.backends.cudnn.enabled = False, but the results are not the same… Do you have any idea?


Hi guys, I am having the exact same problem with the DETR model, and no matter what I try I can’t seem to get reproducible results!

Hi @ptrblck, none of the solutions given in this post work for me. I have tried all of the possibilities on PyTorch 1.4.0 and 1.3.1:

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
np.random.seed(0)
random.seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.enabled = False

I also set the dataloader’s num_workers = 0.

Do you have any suggestions?

I would recommend checking the reproducibility docs in addition to the posts here.

Hi,

I did the following:

torch.manual_seed(56)
random.seed(56)
np.random.seed(56)

Then I initialized a linear layer and inspected nn.Linear(3, 8).weight.
Re-running nn.Linear(3, 8).weight gives me different weight values each time.
I think this is why you guys are having fluctuations in your results.

I am using PyTorch 1.8.1.

Any help from anybody…

Thanks.

Could you explain your use case a bit more?
If you rerun nn.Linear(3, 8).weight, you create a new layer with freshly initialized random parameters each time, so different values are expected.
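To illustrate (a small sketch): each nn.Linear(...) call draws new random weights from the global RNG, so two constructions differ; seeding immediately before each construction makes them identical.

```python
import torch

# Two fresh layers built from the same seed get identical weights.
torch.manual_seed(56)
w1 = torch.nn.Linear(3, 8).weight.detach().clone()

torch.manual_seed(56)  # reseed before constructing the second layer
w2 = torch.nn.Linear(3, 8).weight.detach().clone()

print(torch.equal(w1, w2))  # True: same seed, same initialization
```

Without the second torch.manual_seed call, w2 would come from a different point in the RNG stream and the comparison would be False.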

You are right.

I just want a way that initializes the same weight matrices of a layer in order to produce the same results after re-runs. This is my use case.

Thanks.

To get reproducible, deterministic results for the entire script, please take a look at the reproducibility docs linked in my previous post.
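For recent PyTorch versions, the reproducibility docs boil down to roughly this preamble (a sketch; torch.use_deterministic_algorithms requires PyTorch 1.8+, and the CUBLAS_WORKSPACE_CONFIG variable only matters for certain CUDA ops):

```python
import os
import random
import numpy as np
import torch

# Required by some deterministic CUDA kernels (no effect on CPU).
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

random.seed(0)
np.random.seed(0)
torch.manual_seed(0)  # also seeds CUDA RNGs on recent versions

torch.backends.cudnn.benchmark = False
# Raise an error whenever a nondeterministic op is used, instead of
# silently producing run-to-run differences.
torch.use_deterministic_algorithms(True)
```

The advantage over flipping individual cudnn flags is that use_deterministic_algorithms fails loudly, pointing you at the exact op that is still nondeterministic.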

Oh! I got it.

Thanks.