How are biases initialized in pytorch in the linear layer?

I was wondering: how are biases initialized in PyTorch for the linear layer? I inspected it out of curiosity, and my guess would be that they come from randn(0, 1):

>>> l = torch.nn.Linear(3,2)
>>> l.bias
Parameter containing:
[torch.FloatTensor of size 2]
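For what it's worth, the default does not appear to be a standard normal. A quick empirical check (a sketch against a recent PyTorch version; the exact scheme has changed across releases) shows the bias values stay within ±1/sqrt(in_features), consistent with the uniform initialization done in nn.Linear.reset_parameters:

```python
import math
import torch

torch.manual_seed(0)
l = torch.nn.Linear(3, 2)

# In recent PyTorch versions, nn.Linear.reset_parameters draws the bias
# from U(-1/sqrt(in_features), 1/sqrt(in_features)) -- a bounded uniform
# distribution, not randn(0, 1).
bound = 1.0 / math.sqrt(l.in_features)
print(bool(l.bias.abs().max() <= bound))
```

If it were randn(0, 1), values outside that bound would show up routinely; resampling the layer many times and never seeing one is decent evidence for the uniform scheme.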
>>> torch.nn.init.constant(l.bias, 1)

How is that different from using:

>>> l.bias.data.fill_(1)

Also, how did you find out about methods like fill_ and normal_? They don't seem to be super well documented to me (it's sort of mysterious how people even know how to do these things).
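Those underscore-suffixed methods are the in-place variants listed in the torch.Tensor API reference (the trailing underscore is PyTorch's convention for "mutates the tensor"). A minimal sketch of using them to reinitialize a layer, wrapped in no_grad so autograd doesn't track the writes:

```python
import torch

l = torch.nn.Linear(3, 2)
with torch.no_grad():
    l.bias.fill_(0.0)       # in-place: set every bias entry to 0
    l.weight.normal_(0, 1)  # in-place: resample weights from N(0, 1)
print(l.bias)
```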

Also, is there anything wrong with doing:

x = torch.FloatTensor(3, 2)
l.bias.data = x

obviously assuming x holds the initialization values that we might want.

What's the difference? You can check it in the docs; there is probably no difference from using torch.nn.init.
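A sketch comparing the two routes (the zero-valued x here is a hypothetical stand-in for whatever initialization you want). The main practical point is that nn.init helpers and copy_ both write in place, keeping the same Parameter object, so anything already holding a reference to it (e.g. an optimizer) still sees the update, whereas rebinding .data or the attribute swaps the underlying tensor:

```python
import torch
from torch import nn

l = nn.Linear(3, 2)
x = torch.zeros(2)  # hypothetical tensor holding the desired bias values

# Route 1: a torch.nn.init helper (operates in place on the Parameter)
nn.init.constant_(l.bias, 0.0)

# Route 2: copy your own values in; copy_ also writes in place, so the
# Parameter object itself is unchanged and optimizers keep working.
with torch.no_grad():
    l.bias.copy_(x)

print(torch.equal(l.bias.detach(), x))
```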