I often use torch.manual_seed in my code, and I also set the same seed for numpy and native Python’s random. But I noticed that there is also torch.cuda.manual_seed. I only use a single GPU. So what happens if I do not set torch.cuda.manual_seed? For example, torch.randn returns the same values even without torch.cuda.manual_seed. So I want to know in which situations I should use CUDA’s manual_seed.
So would the following code be better?
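Roughly this (a sketch; the seed value 42 is an arbitrary placeholder):

import random
import numpy as np
import torch

seed = 42  # arbitrary fixed value
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)  # the call in question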
The CUDA manual seed should be set if you want to have reproducible results when using random generation on the GPU, for example if you sample tensors directly on a CUDA device, as in the sketch below.
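A minimal sketch of what that means (assumes a CUDA device is available; the shapes and seed value are placeholders):

import torch

torch.cuda.manual_seed(42)           # seeds the current GPU's generator
x = torch.randn(100, device="cuda")  # sampled with the CUDA RNG, not the CPU one
y = torch.rand(100, device="cuda")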
I usually do not write such code; still, I should call the seed function.
Hey, but shouldn’t torch.manual_seed take care of both, as written in https://pytorch.org/docs/stable/notes/randomness.html?
“You can use torch.manual_seed() to seed the RNG for all devices (both CPU and CUDA).”
Yes, the behavior was changed some time ago and was most likely different when @albanD answered in this thread.
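A quick way to check this on a recent PyTorch build (a sketch; assumes a CUDA device is available):

import torch

torch.manual_seed(0)
a = torch.randn(3, device="cuda")
torch.manual_seed(0)
b = torch.randn(3, device="cuda")
print(torch.equal(a, b))  # True: manual_seed now seeds the CUDA RNG as well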
I was just wondering about best practice for seeding. I’m using seed lines like the ones earlier in this thread for running experiments on a new loss function, once with the changed loss and once with the standard loss. Is it best to keep using a specific seed value, or to vary the seed? I’m thinking some seeds may affect initialisation and therefore reach a better solution, thinking of all you need is a good init…
I’m training two models simultaneously in the same script, so should I have the above seed lines prior to instantiating each model individually, to ensure the same initialisation (something like the sketch below)? That would give a fair comparison of one loss over the other. Is it wise to use a seed for this type of research in general?
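Something like this is what I have in mind (a sketch; make_model and the seed value are placeholders):

import torch
import torch.nn as nn

def make_model():
    # stand-in architecture, just to illustrate the point
    return nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))

torch.manual_seed(1234)
model_a = make_model()  # to be trained with the new loss

torch.manual_seed(1234)  # re-seed so the second model starts from identical weights
model_b = make_model()  # to be trained with the standard loss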
To be reproducible you may try all this:
import os
import random
import numpy as np
import torch

seed_value = 42  # any fixed value works

# 1. Set `PYTHONHASHSEED` environment variable at a fixed value
os.environ['PYTHONHASHSEED'] = str(seed_value)

# 2. Set `python` built-in pseudo-random generator at a fixed value
random.seed(seed_value)

# 3. Set `numpy` pseudo-random generator at a fixed value
np.random.seed(seed_value)

# 4. Set `pytorch` pseudo-random generator at a fixed value
torch.manual_seed(seed_value)
torch.cuda.manual_seed_all(seed_value)
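On top of that (an addition, not part of the list above), cuDNN has two flags that also affect run-to-run determinism on the GPU; whether you need them depends on your model:

# (optional) make cuDNN behave deterministically too, at some speed cost
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False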