Hello,
I was mostly just wondering whether this is intended behaviour and I am using it wrong, or whether it is a bug. I have noticed that since upgrading to 0.4, using the weight initialization methods nn.init.eye or nn.init.eye_ from torch.nn.init sets requires_grad to False on the tensor being initialized.
For example, try running:
import torch.nn as nn

L = nn.Linear(5, 5)
print(L.weight.requires_grad)  # prints True
nn.init.eye_(L.weight)
print(L.weight.requires_grad)  # now prints False, which is the behaviour in question
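
In case it is useful to anyone hitting the same thing, here is a minimal sketch of the workaround I am using for now, assuming the observation above is correct: just turn gradients back on after the init call with requires_grad_.

# Workaround sketch: re-enable gradients after eye_ resets requires_grad
nn.init.eye_(L.weight)
L.weight.requires_grad_(True)
print(L.weight.requires_grad)  # True again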