What does torch.Tensor.normal_() actually do?

Hi!

Could any of you please help me understand what torch.Tensor.normal_() is actually doing? I'm running some tests and I don't understand why I'm getting two different outputs when I think they should be the same.

import torch
a = torch.rand(2, 2)
print(a)
# tensor([[0.8657, 0.5614],
#         [0.7639, 0.0196]])

print(a.normal_())
# tensor([[-0.3112,  0.2799],
#         [-1.8803,  0.0472]])

print(a.normal_(0, 1))
# tensor([[-1.4572, -0.1000],
#         [-1.6295,  2.0425]])

From the documentation:
“Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.”

Every time you run a.normal_() it will fill your tensor in place with new values drawn from a normal distribution with the mean and std that you specified (defaults are mean=0, std=1).
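Here's a quick sketch you can run to convince yourself (the mean/std values are just arbitrary numbers picked for the example): fill a large tensor with normal_() and check that the sample statistics match the parameters you passed in.

import torch

# Fill a large tensor in place and check that the sample statistics
# roughly match the mean/std we asked for.
x = torch.empty(1_000_000)
x.normal_(mean=3.0, std=0.5)  # same as x.normal_(3.0, 0.5)

print(x.mean())  # close to 3.0
print(x.std())   # close to 0.5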

And the actual values from the original tensor are not used at all, right?

It seems like normal_() is overwriting the old values with the new ones.
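One way to see that the old values are never read is to seed the RNG and fill two tensors that start with completely different contents; a rough sketch:

import torch

# Two tensors with very different starting values.
a = torch.zeros(2, 2)
b = torch.full((2, 2), 100.0)

torch.manual_seed(0)
a.normal_()

torch.manual_seed(0)
b.normal_()

# With the same seed, both end up identical, so the previous
# contents are only replaced, never used.
print(torch.equal(a, b))  # True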