Hi, I saw this in a DCGAN code example; it seems related to CUDA. Here are parts of the code:
```python
if opt.manualSeed is None:
    opt.manualSeed = random.randint(1, 10000)
print("Random Seed: ", opt.manualSeed)
```
`torch.manual_seed` sets the seed for PyTorch's random number generators, as explained in the docs.
But I still cannot understand why such code should exist. Where are these generated random numbers used? It seems that `opt.manualSeed` is never used later on.
You just need to call `torch.manual_seed(seed)`, and it will set the seed of the random number generator to a fixed value, so that when you call, for example, `torch.rand(2)`, the results will be reproducible.
Try now without `torch.manual_seed`, and you'll see that the output changes from run to run.
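For concreteness, a minimal sketch of that behavior (assuming PyTorch is installed; the seed value 42 is arbitrary):

```python
import torch

torch.manual_seed(42)     # fix the state of PyTorch's RNG
a = torch.rand(2)         # first draw after seeding

torch.manual_seed(42)     # reset the RNG to the same state
b = torch.rand(2)         # repeats the draw exactly

assert torch.equal(a, b)  # same seed, same numbers
```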
Hey @fmassa, in the above code, what's the difference between `random.seed(opt.manualSeed)` and `torch.manual_seed(opt.manualSeed)`? It seems that both of them set seeds. Are they redundant? Thanks!
@Mylinda the first one is for the Python RNG, the second is for the PyTorch RNG
Yes, I got this point. Can the code use `torch.manual_seed(opt.manualSeed)` directly and exclusively? What is the significance of `random.seed(opt.manualSeed)`? Is it redundant? @smth
Depends on the code. If you are using Python's `random` package and any of its functions, you want to also call `random.seed(opt.manualSeed)`.
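To make the separation between the two RNGs concrete, a small sketch (assuming PyTorch is installed; seed 0 is arbitrary) showing that each seeding call only pins down its own generator, so reproducing both streams requires seeding both:

```python
import random

import torch

# Seeding Python's RNG alone does not reset PyTorch's RNG:
random.seed(0)
a = torch.rand(1)
random.seed(0)
b = torch.rand(1)
# a and b will almost certainly differ, since torch's generator
# kept advancing and was never reseeded in between.

# Seeding both makes both streams reproducible:
random.seed(0); torch.manual_seed(0)
run1 = (random.random(), torch.rand(1))
random.seed(0); torch.manual_seed(0)
run2 = (random.random(), torch.rand(1))
assert run1[0] == run2[0]             # Python stream repeats
assert torch.equal(run1[1], run2[1])  # PyTorch stream repeats
```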
Where is `opt.manualSeed` coming from? How is it getting its number for seeding purposes?
It's probably defined in the DCGAN example. In this line the arguments from the argparser are assigned to `opt`.
However, this is just example code. You can use whatever number you like.
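For illustration, a minimal sketch of how such an option is typically wired up with `argparse` (the `--manualSeed` flag name follows the DCGAN example; the rest is a hypothetical reduction):

```python
import argparse
import random

parser = argparse.ArgumentParser()
parser.add_argument('--manualSeed', type=int, help='manual seed')
opt = parser.parse_args([])  # empty list simulates running without flags

# If the user did not pass --manualSeed, pick one at random,
# then print it so the run can be reproduced later.
if opt.manualSeed is None:
    opt.manualSeed = random.randint(1, 10000)
print("Random Seed: ", opt.manualSeed)
```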
Is it possible to use `os.urandom(n)` to seed PyTorch?
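It should work if you first turn the bytes into an integer; a stdlib-only sketch (the 8-byte width is an arbitrary choice, giving a 64-bit seed):

```python
import os
import random

# Draw 8 bytes from the OS entropy pool and pack them into an integer.
seed = int.from_bytes(os.urandom(8), "big")

random.seed(seed)          # Python RNG accepts it
# torch.manual_seed(seed)  # PyTorch takes the same integer (not run here)
```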
Does that mean that once we set `manual_seed` to a fixed value, the parameters in `RandomResizedCrop` and `RandomRotation` will be fixed in every iteration?
The random values will still be "random", but in a defined order.
I.e., if you restart your script, the same random numbers will be created.
Have a look at PRNG for more information.
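In other words, the values still vary within a run, but the whole sequence repeats once you reseed; a sketch with Python's stdlib `random` (seed 0 is arbitrary):

```python
import random

random.seed(0)
first_run = [random.random() for _ in range(3)]

random.seed(0)  # "restarting the script"
second_run = [random.random() for _ in range(3)]

assert first_run == second_run   # same order, same values
assert len(set(first_run)) == 3  # yet distinct within the run
```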
Thus, to seed everything, assuming one is using PyTorch and NumPy:

```python
import numpy
import torch

# use_cuda = torch.cuda.is_available()
def random_seeding(seed_value, use_cuda):
    numpy.random.seed(seed_value)  # cpu vars
    torch.manual_seed(seed_value)  # cpu vars
    if use_cuda:
        torch.cuda.manual_seed_all(seed_value)  # gpu vars
```
Is anything else missing?
You should also seed the Python `random` module: `random.seed(seed_value)`.
Seeding is related to reproducibility of results. See the PyTorch Reproducibility notes, or any other up-to-date documentation, on how to avoid non-determinism (up to a point) or to ensure it.
Question about this - I have about 10 files, all of which include `import torch`. Do I need to call `torch.manual_seed(0)` in all of them, or only in the first?