Could anyone help me understand what torch.Tensor.normal_() is actually doing? I'm running some tests and I don't understand why I'm getting two different outputs when I think they should be the same.
import torch

a = torch.rand(2, 2)
print(a)
# tensor([[0.8657, 0.5614],
print(a.normal_())
# tensor([[-0.3112, 0.2799],
print(a.normal_())
# tensor([[-1.4572, -0.1000],
March 6, 2023, 11:56am
From the documentation:

Fills the self tensor with elements sampled from the normal distribution parameterized by mean and std.
Every time you run a.normal_() it will fill your tensor with new values drawn from a normal distribution with the mean and std that you specified.
And the actual values of the original tensor are not used at all, right?
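Right. One way to check this: seed the generator, fill a tensor of zeros, then reseed and fill a tensor of ones. A minimal sketch (the mean/std arguments shown are just the defaults made explicit):

```python
import torch

# With the same seed, normal_ produces identical samples
# whether the tensor started as zeros or as ones:
torch.manual_seed(0)
x = torch.zeros(2, 2).normal_(mean=0.0, std=1.0)

torch.manual_seed(0)
y = torch.ones(2, 2).normal_(mean=0.0, std=1.0)

print(torch.equal(x, y))  # True: the old contents are simply discarded
```

If the original values influenced the result, x and y would have to differ; they don't.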
March 6, 2023, 12:19pm
Yes, normal_ simply overwrites the old values with the newly sampled ones. The trailing underscore is PyTorch's convention for an in-place operation: it mutates the tensor and returns it.