Is it the standardization formula, i.e. basically `(x_i - x_mean) / std_dev`?

If it is, then how come it is not deterministic? For example:

```
>>> x = t.Tensor([1,-1,0,2])
>>> x.normal_()
tensor([-0.3429, -0.7214, 0.0883, -0.2900])
>>> x = t.Tensor([1,-1,0,2])
>>> x.normal_()
tensor([0.7380, 1.9640, 0.3068, 0.2396])
```

whereas if it were doing that standardization, the result should be:

```
>>> x = t.Tensor([1,-1,0,2])
>>> x = x.numpy()
>>> (x - x.mean()) / x.std()
array([ 0.4472136, -1.3416407, -0.4472136, 1.3416407], dtype=float32)
```

So how can we see what's really going on here?
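One way to probe it (a quick sketch, not from the original post): fix the RNG seed before each call. If the output depends only on the seed and not on the tensor's contents, then `normal_()` is sampling, not standardizing.

```python
import torch as t

# Fix the seed and call normal_() twice -- if the results match,
# the values come from the random number generator, not from the data.
t.manual_seed(0)
a = t.Tensor([1, -1, 0, 2]).normal_()
t.manual_seed(0)
b = t.Tensor([1, -1, 0, 2]).normal_()
print(t.equal(a, b))  # same seed, same draws

# Changing the input values does not change the output at all,
# which standardization could never do:
t.manual_seed(0)
c = t.Tensor([5, 5, 5, 5]).normal_()
print(t.equal(a, c))
```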

Here it says "standardization transforms data to have a mean of zero and a standard deviation of 1." - https://www.statisticshowto.datasciencecentral.com/normalized/

In the torch docs it says

```
normal_(mean=0, std=1, *, generator=None) → Tensor
```

- https://pytorch.org/docs/stable/tensors.html#torch.Tensor.normal_

so I assumed it was doing standardization, but I guess not.
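Reading that signature as "fill with i.i.d. samples from N(mean, std²)" seems consistent with a quick sanity check (a sketch, assuming a large draw averages out sampling noise):

```python
import torch as t

# If normal_(mean, std) samples from N(mean, std^2), a large draw
# should have sample mean ~= 10 and sample std ~= 2, regardless of
# what the tensor held before the call.
x = t.zeros(100_000)
x.normal_(mean=10, std=2)
print(x.mean().item(), x.std().item())  # close to 10 and 2
```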