Hi, I am trying to reproduce the results of my PyTorch code, but it gives different values every time I run it. I have set the seeds as follows:

import os
import random

import numpy
import torch
import tensorflow as tf

seed = 42
numpy.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
tf.random.set_seed(seed)
tf.experimental.numpy.random.seed(seed)
tf.compat.v1.set_random_seed(seed)
# When running on the cuDNN backend, two further options must be set
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = True
# For TensorFlow
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
os.environ['TF_DETERMINISTIC_OPS'] = '1'
# Set a fixed value for the hash seed
os.environ["PYTHONHASHSEED"] = str(seed)

Have you narrowed it down to a particular operation or operations that exhibit non-determinism? Note that the docs also recommend torch.backends.cudnn.benchmark = False for determinism: Reproducibility — PyTorch 1.13 documentation
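For reference, a minimal PyTorch-only seeding helper along the lines of the reproducibility docs might look like this (the function name seed_everything is my own, not from the docs; note benchmark is set to False here, unlike in the snippet above):

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Seed all RNGs relevant to a PyTorch run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds the CPU and all CUDA devices
    os.environ["PYTHONHASHSEED"] = str(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False  # False, not True, for determinism
    # Optionally make PyTorch raise an error on known non-deterministic ops:
    # torch.use_deterministic_algorithms(True)


seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
print(torch.equal(a, b))  # True when seeding is the only source of randomness
```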

@eqy I tried setting torch.backends.cudnn.benchmark = False, but it is still not working.
I am actually training a GAN, and the loss values and the discriminator's probability outputs for real/fake samples differ on every run.

@ptrblck Actually, I am getting non-deterministic values for multiple outputs. While searching for solutions, I found a post here saying that setting num_workers = 0 for the DataLoader works. I tried that, but it still doesn't work for me. However, after making a few changes, I think it has something to do with the DataLoader part of my code. Here is the snippet:

Here, number_of_styles is 19.
And the InfiniteSamplerWrapper class is defined in a separate file as follows:

import numpy as np
from torch.utils import data


def InfiniteSampler(n):
    # i = 0
    i = n - 1
    order = np.random.permutation(n)
    while True:
        yield order[i]
        i += 1
        if i >= n:
            np.random.seed(1)
            order = np.random.permutation(n)
            i = 0


class InfiniteSamplerWrapper(data.sampler.Sampler):
    def __init__(self, data_source):
        self.num_samples = len(data_source)
        # print("Num samples:", self.num_samples)

    def __iter__(self):
        return iter(InfiniteSampler(self.num_samples))

    def __len__(self):
        return 2 ** 31

If I leave np.random.seed() as is, without passing any number, I get entirely different values from the beginning. Whereas with np.random.seed(1) I get the exact same values for the first iteration, but they start to change from the second iteration onward.
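That pattern fits the sampler: the very first permutation is drawn from the global NumPy state before any reseed happens, and reseeding the global state mid-stream also interacts with every other np.random call in the program. A sketch of a self-seeded variant (my own rewrite, not the original code) that keeps a private RNG and therefore yields the same index stream on every run:

```python
import numpy as np


def infinite_sampler(n: int, seed: int = 1):
    """Yield indices 0..n-1 forever, reshuffling with a private, seeded RNG."""
    # A dedicated RandomState is unaffected by global np.random.seed calls
    # elsewhere in the program, so the stream is reproducible end to end.
    rng = np.random.RandomState(seed)
    while True:
        for idx in rng.permutation(n):
            yield int(idx)


gen_a = infinite_sampler(5)
gen_b = infinite_sampler(5)
first_pass = [next(gen_a) for _ in range(5)]
assert [next(gen_b) for _ in range(5)] == first_pass  # identical streams
```

The same idea carries over to the wrapper class: pass the seed through __init__ and hand it to the generator.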

Your code is unfortunately not executable, so I won't be able to debug it. If you are using multiple workers, you would have to use the worker_init_fn to seed third-party libs such as numpy.
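The worker_init_fn pattern from the PyTorch reproducibility docs looks roughly like this (the TensorDataset here is a toy placeholder, since the original dataset isn't shown):

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset


def seed_worker(worker_id: int) -> None:
    # Each worker re-seeds third-party RNGs from the seed PyTorch assigned it,
    # so numpy/random calls inside the Dataset are reproducible per worker.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)


# Toy dataset standing in for the real one.
dataset = TensorDataset(torch.arange(16))

g = torch.Generator()
g.manual_seed(42)  # fixes the shuffling order across runs

loader = DataLoader(
    dataset,
    batch_size=8,
    shuffle=True,
    num_workers=2,
    worker_init_fn=seed_worker,
    generator=g,
)
```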