I often use torch.manual_seed in my code, and I also set the same seed for NumPy and native Python's random.
But I noticed that there is also torch.cuda.manual_seed. I only ever use a single GPU.
So what happens if I do not set torch.cuda.manual_seed? For example, torch.randn returns the same values even without torch.cuda.manual_seed. So I want to know in which situations I should use CUDA's manual_seed.
The CUDA manual seed should be set if you want reproducible results when using random generation on the GPU, for example if you do torch.cuda.FloatTensor(100).uniform_().
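A minimal sketch of what that looks like in practice, assuming a CUDA device is available: re-seeding before each draw makes the GPU samples repeat exactly. (Note that in recent PyTorch releases torch.manual_seed also seeds the CUDA generators, but the explicit call makes the intent clear.)

import torch

# Seed the CUDA generator, draw on the GPU, then repeat with the same seed.
torch.cuda.manual_seed(0)
a = torch.cuda.FloatTensor(100).uniform_()

torch.cuda.manual_seed(0)  # re-seed before the second draw
b = torch.cuda.FloatTensor(100).uniform_()

print(torch.equal(a, b))  # True: same seed, same GPU samples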
For running experiments on a new loss function, comparing the changed loss against the standard loss: is it best to keep using a specific seed value, or to vary the seed? I'm thinking some seeds may affect initialisation and therefore land in a better solution, in the spirit of "all you need is a good init"…
I'm training two models simultaneously in the same script, so should I put the seed lines below prior to instantiating each model individually, to ensure the same initialisation? That would give a fair comparison of one loss against the other.
Is it wise to use a seed for this type of research in general?
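One way to guarantee identical initialisation is indeed to re-seed immediately before instantiating each model, so both draw the same initial weights from the generator. A minimal sketch, assuming a hypothetical Net module (not from the original thread):

import torch
import torch.nn as nn

# Hypothetical model, for illustration only.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

seed_value = 0

# Re-seed right before each constructor so both models draw
# identical initial weights.
torch.manual_seed(seed_value)
model_a = Net()  # to be trained with the standard loss

torch.manual_seed(seed_value)
model_b = Net()  # to be trained with the new loss

# Sanity check: the initialisations match parameter-for-parameter.
for p_a, p_b in zip(model_a.parameters(), model_b.parameters()):
    assert torch.equal(p_a, p_b)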
seed_value = 0

# 1. Set `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED'] = str(seed_value)

# 2. Set `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)

# 3. Set `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)

# 4. Set `pytorch` pseudo-random generator at a fixed value
import torch
torch.manual_seed(seed_value)
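If training happens on the GPU, the checklist above can be extended with the CUDA seed discussed earlier, plus the cuDNN flags that remove run-to-run variation in kernel selection. A sketch continuing the snippet above (torch and seed_value are already defined there):

# 5. Seed the CUDA pseudo-random generators (all GPUs, even if only one is used)
torch.cuda.manual_seed_all(seed_value)

# 6. Make cuDNN deterministic and disable its auto-tuner,
#    which can otherwise pick different kernels between runs
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False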