Getting different results running the same code in Colab and Jupyter Notebook

I am training and testing an autoencoder. I run exactly the same code in a Jupyter notebook and in Google Colab. I have these settings:

import random
import numpy as np
import torch

np.random.seed(0)
random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.enabled = False
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

and

# worker_init_fn must be a callable taking the worker id;
# passing np.random.seed(0) would call it immediately and hand None to the loader.
def seed_worker(worker_id):
    np.random.seed(0)

train_loader = torch.utils.data.DataLoader(
    dataset=train_set,
    batch_size=batch_size,
    shuffle=False,
    num_workers=2,
    worker_init_fn=seed_worker)

test_loader = torch.utils.data.DataLoader(
    dataset=test_set,
    batch_size=batch_size,
    shuffle=False,
    num_workers=2,
    worker_init_fn=seed_worker)
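For fully reproducible data loading with multiple workers, the usual pattern is a `worker_init_fn` callable that reseeds each worker, plus a seeded `torch.Generator` passed to the loader to control shuffling. A minimal sketch (the `TensorDataset` here is a hypothetical stand-in for the real dataset):

```python
import random

import numpy as np
import torch

# Derive each worker's seed from the loader's base seed so that NumPy and
# Python RNGs inside the workers are reproducible across runs.
def seed_worker(worker_id):
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

# Seeded generator that drives the sampler (shuffling order).
g = torch.Generator()
g.manual_seed(0)

# Hypothetical toy dataset, just for illustration.
train_set = torch.utils.data.TensorDataset(torch.arange(10).float())

loader = torch.utils.data.DataLoader(
    dataset=train_set,
    batch_size=2,
    shuffle=True,
    num_workers=2,
    worker_init_fn=seed_worker,
    generator=g,
)
```

This makes each run reproducible on a given machine, but it does not guarantee identical numbers across different hardware or library versions.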

Also, I am using a GPU in both environments.

The results in each environment are deterministic, but they don’t produce exactly the same accuracy: Colab = 0.7068, Jupyter Notebook = 0.6689.
Why do they behave like this? Shouldn’t they produce exactly the same values?

I appreciate any help and guidance.

Not necessarily, if different hardware and potentially different software versions (CUDA, PyTorch) are used.
You could run some tests to see whether you are getting the same random numbers on both platforms in the first place.
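A quick sanity check along these lines: seed the RNGs, print the library versions and the first few random numbers in both environments, and compare the outputs. If even the CPU numbers differ, the divergence starts before training; if only the GPU numbers differ, it points at the CUDA/cuDNN stack.

```python
import numpy as np
import torch

torch.manual_seed(0)
np.random.seed(0)

# Run this in both Colab and the local notebook and diff the printed values.
print("torch version:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
print("cpu rand:", torch.rand(3).tolist())
print("numpy rand:", np.random.rand(3).tolist())

if torch.cuda.is_available():
    torch.cuda.manual_seed_all(0)
    print("gpu rand:", torch.rand(3, device="cuda").tolist())
```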
