Following up on this closed issue: np.random generates the same random numbers for each data batch · Issue #5059 · pytorch/pytorch (github.com)
We know that on Linux (unlike on Windows) DataLoader workers are created via fork, so every worker inherits the parent process's NumPy RNG state and produces identical random numbers unless it is manually re-seeded. The issue was closed by adding per-worker NumPy re-seeding to the DataLoader
(see [DataLoader] Add Numpy seeding to worker of DataLoader by ejguan · Pull Request #56488 · pytorch/pytorch (github.com))
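For reference, before that PR landed, the commonly suggested workaround was to re-seed NumPy in each worker via a `worker_init_fn`, deriving the seed from the per-worker torch seed. A minimal sketch (the `NoisyDataset` class is just a toy example to make the effect visible):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class NoisyDataset(Dataset):
    """Toy dataset whose samples come from NumPy's global RNG."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # Without re-seeding, forked workers share the parent's RNG
        # state and all draw the same "random" values.
        return np.random.rand(3)

def worker_init_fn(worker_id):
    # Re-seed NumPy in each worker from the per-worker torch seed,
    # so the forked workers no longer share one NumPy RNG state.
    np.random.seed(torch.initial_seed() % 2**32)

if __name__ == "__main__":
    loader = DataLoader(NoisyDataset(), batch_size=2, num_workers=2,
                        worker_init_fn=worker_init_fn)
    for batch in loader:
        print(batch)
```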
However, as far as I can tell, the same issue still affects weight initialization, both when a model is first created and when reset_parameters() is called to re-initialize a module's weights.
Would it be possible to implement a similar re-seeding in the reset_parameters() methods?
Otherwise, to ensure randomness across repeats on Linux, I think a manual re-seed has to be forced, e.g. along the lines of the sketch below.
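A minimal sketch of that manual workaround, assuming the goal is different weights on every repeat; the `reseed_all` helper is hypothetical, not an existing API:

```python
import os
import numpy as np
import torch
import torch.nn as nn

def reseed_all():
    # Hypothetical helper: draw a fresh seed from OS entropy so each
    # repeat starts from a different RNG state in both NumPy and torch.
    seed = int.from_bytes(os.urandom(4), "little")
    np.random.seed(seed)
    torch.manual_seed(seed)

model = nn.Linear(10, 5)
for repeat in range(3):
    reseed_all()
    model.reset_parameters()  # weights should now differ per repeat
    print(repeat, model.weight.flatten()[:3])
```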