Random seed initialization

I have a problem with the large variation in results I get when running my model multiple times. The exact same architecture and training gives anywhere from 91.5% to 93.4% accuracy on image classification (CIFAR-10).

The problem is that I don’t know how to use the torch random seed to get the better results rather than the worse ones. I tried various values for the random seed with:

torch.manual_seed(7)

and I get the lower bound of the results. Any ideas?


If you are using the GPU, you might also need to set torch.cuda.manual_seed_all.
http://pytorch.org/docs/master/cuda.html#random-number-generator
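
For example (a minimal sketch; 999 is just an arbitrary seed):

import torch

torch.manual_seed(999)                   # seeds the CPU RNG
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(999)      # seeds the RNG on every visible GPU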


@smth do we need to set both torch.manual_seed() and torch.cuda.manual_seed_all(), or is the second one enough? Thanks.


With the latest PyTorch 0.3 release you only need to set torch.manual_seed, which will seed all devices.


Is the random number generator platform independent?

The CPU RNG is platform-independent. I am not sure about the CUDA RNG and what guarantees NVIDIA gives across GPU models, CUDA versions, and platforms.
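
One way to convince yourself for the CPU side (a quick check, assuming the same PyTorch version on both machines):

import torch

torch.manual_seed(0)
print(torch.randn(3))  # should print the same three values on any platform running the same PyTorch version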


Funny, even though I have included both:

torch.manual_seed(999)

and

if torch.cuda.is_available():
    torch.cuda.manual_seed_all(999)

I am still getting inconsistent results, fluctuating by 1-2% when re-running the model. I wonder why that could be?


Could you try adding torch.backends.cudnn.deterministic = True to your code?
cuDNN has some non-deterministic algorithms, so small fluctuations might come from this.
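
So the full preamble would look something like this (a sketch; 999 is just an example seed):

import torch

torch.manual_seed(999)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(999)
torch.backends.cudnn.deterministic = True  # force cuDNN to choose deterministic algorithms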


I added:

torch.backends.cudnn.deterministic = True

in addition to:

torch.manual_seed(999)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(999)

but accuracy for the same model/same data still varies considerably across runs. I’ve even tried duplicating the above in the code and tried switching to the latest version of PyTorch (0.3.1), but I’m still getting the same variability in accuracy across runs for the same model/same data. Weird.

Hi,

I’m having the same issue. Did you figure out a way to make the results consistent across runs?

Thanks,
Amir

Hi,

Have you figured out how to make the results reproducible now?

Thanks,
Darren

Same problem here, running on PyTorch 0.4. I am using RReLU, though; even though I’ve set all the flags mentioned above, results differ by a margin of +/- 0.5% from run to run.
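
As I understand it, RReLU samples its negative slope from the torch RNG during training and falls back to a fixed slope in eval mode, so it is one more consumer of the random state. A minimal sketch of what I mean (default layer settings, arbitrary seed):

import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.RReLU()          # samples slopes from U(1/8, 1/3) in training mode
x = torch.randn(1, 5)

m.train()
print(m(x))             # negative entries scaled by randomly sampled slopes
m.eval()
print(m(x))             # negative entries scaled by the fixed (lower + upper) / 2 slope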

What’s the number of workers for your dataloader? The following post might be helpful for deterministic results.
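
Each worker runs in its own process, so if your transforms use NumPy or Python random you may want to seed them per worker. A rough sketch (seed_worker and the dummy dataset are placeholder names, not from this thread):

import random
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_worker(worker_id):
    # derive a reproducible, distinct seed for each worker from the torch base seed
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

torch.manual_seed(999)
dataset = TensorDataset(torch.randn(100, 3))   # dummy data for illustration
loader = DataLoader(dataset, batch_size=32, num_workers=4,
                    worker_init_fn=seed_worker)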


I was following this post because I ran into the same issues training an autoencoder. I don’t know if the OP has solved the problem, but I did a test last night on an AWS GPU with CUDA, and the parameters below gave me consistent results:
torch.backends.cudnn.deterministic = True
torch.manual_seed(999)

Further, I explicitly call model.eval() after training when running the decoders and encoders.

Alternatively, when I used the settings below instead, the results were inconsistent:
torch.backends.cudnn.deterministic = True
torch.cuda.manual_seed_all(999)

As a poster above mentioned, it seems that torch.manual_seed() seeds both the CUDA and CPU devices in the latest version. So if you’re not getting consistent results with torch.cuda.manual_seed_all, try just torch.manual_seed. This may depend on the PyTorch version you have installed… Hope this helps.
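
A quick way to sanity-check what your installed version does (just a sketch):

import torch

torch.manual_seed(999)
print(torch.initial_seed())           # 999
if torch.cuda.is_available():
    print(torch.cuda.initial_seed())  # should also be 999 if manual_seed seeds the GPU RNG on your version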


Good info.

The docs also suggest setting torch.backends.cudnn.benchmark = False, and remember that NumPy should be seeded as well.

–> Randomness [Docs]


Did anyone solve this yet?

Sounds like there is another related question here.

Anyway, I think this can be a solution:

import random
import numpy as np
import torch

manualSeed = 1

np.random.seed(manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
# if you are using a GPU
torch.cuda.manual_seed(manualSeed)
torch.cuda.manual_seed_all(manualSeed)

torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

Also, in the dataloader I set num_workers = 0.

Based on here, you also need to set worker_init_fn as:

def _init_fn(worker_id):
    # seed NumPy in each dataloader worker process
    np.random.seed(manualSeed)


DataLoading = data.DataLoader(..., batch_size = ...,
                              collate_fn = ...,
                              num_workers = ...,
                              shuffle = ...,
                              pin_memory = ...,
                              worker_init_fn = _init_fn)


I noticed that if we don’t set torch.backends.cudnn.enabled = False, the results are very close but sometimes don’t match :hushed:
P.S. I’m using PyTorch 1.0.1.


Thanks!

num_workers = 0 and torch.backends.cudnn.enabled = False are what really made it work! I also noticed that if you train one step 10 times using only num_workers = 0, you get exactly the same output 8 times and a different output 2 times.

import random
import numpy as np
import torch

np.random.seed(0)
random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

and setting dataloader like the following:

torch.utils.data.DataLoader(training, shuffle=True, batch_size=BATCH_SIZE,
                            worker_init_fn=lambda worker_id: np.random.seed(0),  # worker_init_fn must be a callable taking the worker id
                            num_workers=0)

WORKED FOR ME!

I am using PyTorch version 1.0.0.


I tried exactly the same settings, even with torch.backends.cudnn.enabled = False, but the results are still not the same… Do you have any idea?
