What is manual_seed?

You just need to call torch.manual_seed(seed), and it will set the seed of the random number generator to a fixed value, so that a call such as torch.rand(2) will give reproducible results.
An example

import torch

torch.manual_seed(2)
print(torch.rand(2))

gives you

 0.4360
 0.1851
[torch.FloatTensor of size 2]

Now try it without torch.manual_seed, and you'll see that the values change on every run.

Hey @fmassa, in the above code, what's the difference between random.seed(opt.manualSeed) and torch.manual_seed(opt.manualSeed)? It seems both of them set seeds. Are they redundant? Thanks!

@Mylinda the first one is for the Python RNG, the second is for the PyTorch RNG

Yes, I got that point. Can the code use torch.manual_seed(opt.manualSeed) alone? What is the significance of random.seed(opt.manualSeed)? Is it redundant? @smth

It depends on the code. If you are using Python's random package and any of its functions, you also want to call random.seed(...).
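
A quick way to see this (a minimal sketch; it assumes both RNGs are used in the same script):

import random
import torch

# torch.manual_seed does not touch Python's own RNG:
torch.manual_seed(0)
print(random.random())   # still changes on every run

# seeding both makes both reproducible:
random.seed(0)
torch.manual_seed(0)
print(random.random())   # same value on every run
print(torch.rand(2))     # same tensor on every run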

Got it. Thanks @smth

Where is opt.manualSeed coming from? How does it get its number for seeding purposes?

It’s probably defined in the DCGAN example. In this line the arguments from the argparser are assigned to opt.

However, this is just example code. You can use whatever number you like.
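
For illustration, the pattern in that example looks roughly like this (a sketch; the flag name and random fallback are taken from memory of the DCGAN script):

import argparse
import random

parser = argparse.ArgumentParser()
parser.add_argument('--manualSeed', type=int, help='manual seed')
opt = parser.parse_args()

if opt.manualSeed is None:
    opt.manualSeed = random.randint(1, 10000)  # pick a random seed if none was given
print('Random Seed:', opt.manualSeed)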

Is it possible to use os.urandom(n) to seed PyTorch?


Do:

ord(os.urandom(1))
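
Note that ord(os.urandom(1)) only yields 256 distinct seeds. If you want more entropy, something like this should also work (a sketch; int.from_bytes is standard Python, and torch.manual_seed accepts 64-bit integers in recent versions):

import os
import torch

# turn 8 OS-supplied random bytes into an integer seed
seed = int.from_bytes(os.urandom(8), byteorder='big')
torch.manual_seed(seed)
print(seed, torch.rand(2))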

For the record, manual_seed seeds all of pytorch:

http://pytorch.org/docs/master/torch.html?highlight=manual%20seed#torch.manual_seed

Hi,
Does that mean that once we fix manual_seed, the parameters in RandomResizedCrop and RandomRotation will be fixed in every iteration?

No.
The random values will still be “random”, but in a defined order.
I.e. if you restart your script, the same random numbers will be created.
Have a look at PRNG for more information.
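
A minimal sketch of what that means in practice:

import torch

torch.manual_seed(2)
print(torch.rand(2))  # first draw
print(torch.rand(2))  # second draw differs from the first...

torch.manual_seed(2)
print(torch.rand(2))  # ...but re-seeding replays the exact same sequence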

Thus, to seed everything, assuming one is using PyTorch and NumPy:

import numpy
import torch

# use_cuda = torch.cuda.is_available()
# ...
def random_seeding(seed_value, use_cuda):
    numpy.random.seed(seed_value)  # NumPy CPU RNG
    torch.manual_seed(seed_value)  # PyTorch CPU RNG
    if use_cuda:
        torch.cuda.manual_seed_all(seed_value)  # PyTorch GPU RNGs

Is anything else missing?

You should also seed Python's random module:

import random
random.seed(seed)

Thank you,
I got it, and I have tried it.

Seeding is related to the reproducibility of results.
Consider https://pytorch.org/docs/stable/notes/randomness.html
or any other up-to-date documentation on how to avoid non-determinism (up to a point), or to ensure it.
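
For example, beyond seeding, that notes page discusses cuDNN settings; a sketch of the usual combination (flag names as in recent PyTorch releases):

import torch

torch.manual_seed(0)
torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning
# newer releases also offer a global switch:
# torch.use_deterministic_algorithms(True)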

Question about this: I have about 10 files, all of which include import torch. Do I need to call torch.manual_seed(0) in all of them, or only in the first?

Thanks,
Jeremy

Did you ever find the answer to this? @JeremySMorgan

No, I didn’t. I ended up just adding torch.manual_seed(0) to all my files with torch imported. I guess it would be pretty easy to check, but I’m lazy.
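
For what it's worth, PyTorch's default generator state is global to the process, so seeding once in the entry script should be enough; a minimal sketch with hypothetical file names:

# helpers.py (hypothetical module)
import torch

def make_noise():
    return torch.rand(2)  # draws from the global default generator

# main.py (hypothetical entry point)
import torch
import helpers

torch.manual_seed(0)         # seeding once here...
print(helpers.make_noise())  # ...makes this call reproducible too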

Hi @smth! Just a final clarification question; hopefully others will benefit too.

So torch.manual_seed(seed) should fix both the GPU and CPU PyTorch seeds, making the call to torch.cuda.manual_seed redundant. If I use multiple devices, either in data parallel mode or the new sharding feature, do I still have to call torch.cuda.manual_seed_all, or is this implicitly done by the torch.manual_seed call?

Many thanks