Random reproducibility: numpy vs. torch

Hello there,
I am trying to compare initialization between torch and another backend that uses numpy as its random number generator.

I am testing code as simple as:

import numpy as np
import torch
import random

def random_seeding(seed):
    # Seed the Python, PyTorch and numpy RNGs from the same value
    random.seed(seed)
    torch.manual_seed(seed)
    np.random.seed(seed)

# Make cuDNN pick deterministic algorithms
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

random_seeding(1)
print(np.random.randn(10))

random_seeding(1)
print(torch.randn(10))

where I tried to make the PyTorch output equal to the numpy one. However, the values are completely different from the start. I also tried to copy the state array (from torch.get_rng_state) into the numpy random state, but that breaks, so this option is not possible.
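For reference, the two states don't even share a layout, which is presumably why the copy breaks. A quick check, just printing the shapes:

# The two RNG state formats are not compatible:
torch_state = torch.get_rng_state()            # flat uint8 ByteTensor; the layout is an implementation detail
name, keys, pos, _, _ = np.random.get_state()  # ('MT19937', array of 624 uint32 keys, position, ...)
print(torch_state.dtype, torch_state.shape)
print(name, keys.dtype, keys.shape)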

Is there a way to use numpy as the random generator backend in PyTorch (of course, avoiding the use of torch.from_numpy)?

Regards

Hi,

The random generators used by numpy, torch CPU and torch GPU are all different, so setting the same seed won't give you the same values.
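There is no supported way to plug numpy in as PyTorch's RNG backend. If you need torch tensors that follow numpy's random stream exactly, the usual workaround (I know you'd rather avoid converting) is to sample in numpy and wrap the result. A minimal sketch:

import numpy as np
import torch

np.random.seed(1)
# Sample from numpy's generator, then wrap the array in a torch tensor.
# torch.as_tensor shares memory with the numpy array where possible,
# so the conversion is cheap (it uses the same mechanism as torch.from_numpy).
x = torch.as_tensor(np.random.randn(10), dtype=torch.float32)
print(x)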

Also, exactly reproducible experiments are hard to achieve; check the reproducibility page of the docs for more info: https://pytorch.org/docs/stable/notes/randomness.html
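For completeness, here is the kind of seeding boilerplate that page describes. This is a sketch, assuming a reasonably recent PyTorch (>= 1.8, for torch.use_deterministic_algorithms); the helper name seed_everything is just illustrative:

import random
import numpy as np
import torch

def seed_everything(seed):
    # Seed every RNG that PyTorch code commonly touches
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # seeds the CPU generator (and CUDA generators too)
    torch.cuda.manual_seed_all(seed)  # explicit, in case of multiple GPUs

seed_everything(1)
# Ask PyTorch to raise an error on known non-deterministic ops
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False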