Default initialization behavior

The initialization is layer-dependent. How does PyTorch seed the RNGs by default?

If I have to train a model N times to see its average performance, do I need special code to ensure the initializations are different, or can I assume a different initialization on each training run?

I don’t think PyTorch seeds the RNG with a fixed value by default, as that would mean you would get deterministic results on every run, which is not the case.
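
If you do want reproducibility, you can set the seed yourself. A minimal sketch using `torch.manual_seed` (the seed value 42 here is arbitrary):

```python
import torch

# With an explicit seed, the same random values are produced
# every time this script runs.
torch.manual_seed(42)
print(torch.randn(3))

# Re-seeding with the same value resets the generator,
# so this tensor is identical to the one above.
torch.manual_seed(42)
print(torch.randn(3))
```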

You can execute this code repeatedly in your terminal and will get different values each time:

python -c "import torch; print(torch.randn(10))"
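
The same applies to model parameters: two freshly constructed modules get independent random weights, so repeated training runs start from different initializations without any extra code. A small sketch to confirm this (using `nn.Linear` as an example layer):

```python
import torch
import torch.nn as nn

# Each construction draws new random weights from the
# (unseeded) default generator.
a = nn.Linear(4, 4)
b = nn.Linear(4, 4)

# Almost surely False: the two layers were initialized differently.
print(torch.equal(a.weight, b.weight))
```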