I was just wondering about best practice for seeding. I'm using:

```python
import random

import numpy as np
import torch

torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
np.random.seed(args.seed)
random.seed(args.seed)
```
for running experiments on a new loss function, comparing the changed loss against the standard loss. Is it better to keep a single fixed seed or to vary the seed across runs? I'm thinking some seeds may give a luckier initialisation and therefore land in a better solution, in the spirit of "all you need is a good init"…
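If varying the seed is the right approach, I imagine the protocol would look roughly like the sketch below: run each loss over several seeds and compare the aggregate statistics rather than a single run. The `run_experiment` function and the numbers it returns are simulated placeholders standing in for a real training run, not actual results.

```python
import random
import statistics


def run_experiment(loss_name: str, seed: int) -> float:
    # Placeholder for a real seeded training run that returns a final
    # metric; simulated here with a seeded RNG so the sketch executes.
    rng = random.Random(seed)
    base = 0.90 if loss_name == "new_loss" else 0.88  # made-up baselines
    return base + rng.uniform(-0.02, 0.02)


seeds = [0, 1, 2, 3, 4]
for loss_name in ("standard_loss", "new_loss"):
    scores = [run_experiment(loss_name, s) for s in seeds]
    print(f"{loss_name}: mean={statistics.mean(scores):.3f} "
          f"sd={statistics.stdev(scores):.3f}")
```

Reporting mean and standard deviation over seeds would, I assume, separate a genuinely better loss from one that merely got a lucky initialisation.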
I'm training two models simultaneously in the same script, so should I place the seeding lines above immediately before instantiating each model, to ensure both get the same initialisation and a fair comparison of one loss against the other?
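Concretely, what I have in mind is something like the following, with a made-up `make_model` standing in for my real architecture:

```python
import random

import numpy as np
import torch
import torch.nn as nn


def seed_everything(seed: int) -> None:
    # Seed every RNG a typical PyTorch script touches.
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # safe no-op on CPU-only machines
    np.random.seed(seed)
    random.seed(seed)


def make_model() -> nn.Module:
    # Hypothetical stand-in for the real architecture.
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))


seed_everything(0)
model_standard = make_model()  # to be trained with the standard loss

seed_everything(0)  # re-seed so the second model starts from identical weights
model_new = make_model()       # to be trained with the new loss

# Both models now start from bit-identical initial parameters.
for p_std, p_new in zip(model_standard.parameters(), model_new.parameters()):
    assert torch.equal(p_std, p_new)
```

Re-seeding before each instantiation is the part I'm unsure about: without the second `seed_everything(0)` call, the second model would consume later values from the RNG stream and get a different initialisation.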
Is it wise to use a fixed seed for this type of research in general?