Hello PyTorch devs and users,
My training runs in a distributed environment, and I need every process to initialize each parameter to the same values, so I call the following function before spawning my processes:
import random

import numpy as np
import torch

def setRandomSeeds(randomSeed=0):
    # seed every RNG involved so all processes start from identical weights
    torch.manual_seed(randomSeed)
    torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False     # disable cuDNN autotuning, which is nondeterministic
    np.random.seed(randomSeed)
    random.seed(randomSeed)
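For context, this is roughly how the function gets used; the worker function and process count below are placeholders for my real ones:

import torch.multiprocessing as mp

def trainWorker(rank, worldSize):
    # placeholder for my real per-process training entry point
    setRandomSeeds(0)  # reseed inside each worker, since spawned processes start with fresh interpreters
    # ... init_process_group, build the model, wrap it in DDP, train ...

if __name__ == "__main__":
    setRandomSeeds(0)                           # seed once before spawning
    mp.spawn(trainWorker, args=(2,), nprocs=2)  # two workers, arbitrary for this sketch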
It works perfectly. However, with these settings in place, DistributedSampler no longer reshuffles the samples at every epoch: each epoch iterates over them in the same order. What should I turn off or on after weight initialization in every process so that the sampler reshuffles properly before each epoch starts?
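To make the symptom concrete, here is a minimal single-process sketch that reproduces it; the toy dataset and sizes stand in for my real ones:

import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def showEpochOrdering(numEpochs=3):
    dataset = TensorDataset(torch.arange(16).float())  # toy stand-in for my real dataset
    # num_replicas/rank are passed explicitly so this runs without init_process_group
    sampler = DistributedSampler(dataset, num_replicas=1, rank=0, shuffle=True)
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)
    for epoch in range(numEpochs):
        order = [int(x) for batch in loader for x in batch[0]]
        print(f"epoch {epoch}: {order}")  # prints the identical order every epoch

showEpochOrdering()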
Thanks in advance.