What is the source code for Tensor.normal_(), or what formula does it use under the hood?

Is it the standardization formula, which is basically (x_i - x_mean) / std_dev?

If it is, then how come it is not deterministic? For example:

>>> x = t.Tensor([1,-1,0,2])
>>> x.normal_()
tensor([-0.3429, -0.7214,  0.0883, -0.2900])
>>> x = t.Tensor([1,-1,0,2])
>>> x.normal_()
tensor([0.7380, 1.9640, 0.3068, 0.2396])

whereas if it were doing standardization, the result should be

>>> x = t.Tensor([1,-1,0,2])
>>> x = x.numpy()
>>> (x - x.mean()) / x.std()
array([ 0.4472136, -1.3416407, -0.4472136,  1.3416407], dtype=float32)
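
For comparison, here's the same standardization done directly in torch (just a sketch; note that numpy's std() is the population standard deviation, so torch needs unbiased=False to reproduce the numbers above):

import torch as t

x = t.Tensor([1, -1, 0, 2])
# numpy's std() uses the population formula (ddof=0); torch defaults to
# the unbiased estimator, so pass unbiased=False to match numpy's output
standardized = (x - x.mean()) / x.std(unbiased=False)
print(standardized)  # tensor([ 0.4472, -1.3416, -0.4472,  1.3416])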

So how can we see what’s really going on here?

Here it says “standardization transforms data to have a mean of zero and a standard deviation of 1.” - https://www.statisticshowto.datasciencecentral.com/normalized/

In the torch docs it says

normal_(mean=0, std=1, *, generator=None) → Tensor

- https://pytorch.org/docs/stable/tensors.html#torch.Tensor.normal_

so I assumed it was doing standardization but I guess not.

Literally the line after your quote from the doc:

Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
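
You can see that parameterization directly — a quick sketch (the mean/std values here are arbitrary, just for illustration):

import torch

x = torch.empty(100000)
x.normal_(mean=5.0, std=2.0)  # fill in place with draws from N(5, 2^2)
print(x.mean())  # ≈ 5.0
print(x.std())   # ≈ 2.0

Whatever was in x before the call is gone; only the shape matters.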


This might be asking for a lot, but do you think you could stub out the source code for me? Or link me to an article that explains how this is done?

If I had to guess, it might be something like

torch.randn(len(x)) * std_dev + x_mean

Since torch.randn samples from the standard normal distribution under the hood. However, doesn't that imply that the elements of x are not relevant?
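
For what it's worth, that guess is right at the distribution level: if z ~ N(0, 1), then std * z + mean ~ N(mean, std^2). A minimal sketch of that idea (the real normal_() lives in PyTorch's C++ backend, so this is only a conceptual stand-in, not the actual source):

import torch

mean, std = 5.0, 2.0       # arbitrary example parameters
z = torch.randn(100000)    # z ~ N(0, 1)
a = z * std + mean         # affine transform: a ~ N(mean, std^2)
print(a.mean(), a.std())   # ≈ 5.0 and ≈ 2.0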

Yes, tensor.normal_() will fill the tensor with values sampled from the normal distribution.
The old values will be overwritten.
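
A quick way to convince yourself, as a sketch with a fixed seed — given the same RNG state, two tensors with different starting values get the exact same fill:

import torch

torch.manual_seed(0)
a = torch.Tensor([1, -1, 0, 2]).normal_()

torch.manual_seed(0)
b = torch.Tensor([100, 200, 300, 400]).normal_()

# identical results despite different starting values: normal_()
# never reads the old contents, it only overwrites them
print(torch.equal(a, b))  # True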


Thanks a bunch! I was curious as to how the elements of x played a part, but they don't.