How are biases initialized in PyTorch's linear layer?

I was wondering how biases are initialized in PyTorch for the linear layer. I inspected one out of curiosity, and randn(0, 1) would be my guess:

>>> l = torch.nn.Linear(3,2)
>>> l.bias
Parameter containing:
0.2137
0.0904
[torch.FloatTensor of size 2]
You can re-initialize it yourself to whatever you want:

import torch.nn as nn

l = nn.Linear(3, 2)
l.bias.data.normal_(0, 1)  # resample in place from N(0, 1)
l.bias.data.fill_(0)       # or set every element to 0 in place
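For reference, neither of those is the default. If I remember the source right, nn.Linear.reset_parameters draws both the weight and the bias from a uniform distribution, roughly like this (a sketch from memory, so check the source of the version you're running):

import math

# rough sketch of nn.Linear.reset_parameters (verify against your version):
# both weight and bias come from U(-stdv, stdv), not from randn(0, 1)
def reset_parameters(self):
    stdv = 1.0 / math.sqrt(self.weight.size(1))  # 1 / sqrt(in_features)
    self.weight.data.uniform_(-stdv, stdv)
    if self.bias is not None:
        self.bias.data.uniform_(-stdv, stdv)

For in_features = 3 that is uniform on roughly (-0.577, 0.577), which is consistent with the 0.2137 and 0.0904 you saw.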

How is that different from using torch.nn.init.constant?
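For concreteness, I mean something like this (assuming I'm reading the init docs right; in newer versions the function is spelled nn.init.constant_):

import torch.nn as nn

l = nn.Linear(3, 2)
nn.init.constant(l.bias, 0)  # same end result as l.bias.data.fill_(0)?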

Also, how did you find out about methods like fill_ and normal_? They don't seem particularly well documented (it's sort of mysterious how people even know these exist).
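Is the expectation that you just poke around the tensor object? e.g. something like this to list them, assuming the trailing-underscore convention for in-place ops holds:

import torch

t = torch.zeros(3)
# in-place methods conventionally end with an underscore
print([m for m in dir(t) if m.endswith('_') and not m.startswith('_')])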

Also, is there anything wrong with doing:

x = torch.FloatTensor(2)  # bias of Linear(3, 2) has 2 elements
l.bias.data = x

obviously assuming x holds the initialization we might want. What's the difference?

You can check it in the docs: http://pytorch.org/docs/master/tensors.html. Using torch.nn.init is probably no different.
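If I remember the source right, the init functions are just thin wrappers over those in-place tensor methods, roughly something like this (a sketch, check torch/nn/init.py yourself):

import torch
from torch.autograd import Variable

# rough sketch of torch.nn.init.constant (verify against torch/nn/init.py):
def constant(tensor, val):
    if isinstance(tensor, Variable):
        constant(tensor.data, val)
        return tensor
    return tensor.fill_(val)

So nn.init.constant(l.bias, 0) and l.bias.data.fill_(0) should end up doing the same thing.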