Difference between torch.randn() and normal_()

I would like to know if there is any difference in the end results between

self.register_buffer('my_layer', torch.randn(m, n))

and

self.register_buffer('my_layer', torch.Tensor(m, n))
self.my_layer.data.normal_()

Thank you in advance for your help.

The second approach (as originally posted) will use uninitialized memory for self.my_layer, while a different tensor is filled with values from a Gaussian distribution. Maybe you wanted to call self.my_layer.normal_()?
In that case, both approaches would sample values from a standard normal distribution.
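To make the distinction concrete, here is a minimal sketch (MyModule, the buffer names, and the dimensions are made up for illustration). torch.Tensor(m, n) allocates uninitialized memory, so the buffer only holds valid Gaussian samples after the in-place normal_() call:

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self, m, n):
        super().__init__()
        # Approach 1: the buffer is created already filled with N(0, 1) samples.
        self.register_buffer('buf_randn', torch.randn(m, n))
        # Approach 2: torch.Tensor(m, n) allocates uninitialized memory;
        # the values are only meaningful after the in-place normal_() call.
        self.register_buffer('buf_normal', torch.Tensor(m, n))
        self.buf_normal.normal_()

module = MyModule(1000, 1000)
# Both buffers now hold standard-normal samples (mean ~0, std ~1).
print(module.buf_randn.mean().item(), module.buf_randn.std().item())
print(module.buf_normal.mean().item(), module.buf_normal.std().item())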


Thanks @ptrblck!
Sorry I forgot to change the variable name. I meant self.my_layer.data.normal_() in the second (I’ve updated the post).
Now that you mention it, is there a difference between self.my_layer.data.normal_() and self.my_layer.normal_()?

Using .data bypasses certain correctness checks for in-place modifications to a tensor, so you should avoid it if possible.
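A small sketch of the difference, using a hypothetical parameter p (the shapes are just for illustration):

import torch

p = torch.nn.Parameter(torch.empty(3, 3))

# Discouraged: .data sidesteps autograd's version tracking, so invalid
# in-place updates to tensors needed for backward can go undetected.
p.data.normal_()

# Preferred: run the in-place init under no_grad, which tells autograd
# not to track the op without bypassing its safety checks.
with torch.no_grad():
    p.normal_()

# Buffers don't require gradients, so for self.my_layer calling
# normal_() directly (as suggested above) is safe.
buf = torch.empty(3, 3)
buf.normal_()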


Thanks for your reply.