I have to make a network where the initial hidden layer bias is constant. How can I do that with nn.Linear(num_in_features, num_out_features)?
If you don’t want to update the bias parameter, you could set the requires_grad attribute of the bias to False and avoid passing it to the optimizer:
import torch
import torch.nn as nn

lin = nn.Linear(1, 1)
lin.bias.requires_grad = False  # freeze the bias

# pass only the weight to the optimizer
optimizer = torch.optim.Adam([lin.weight], lr=1.)

output = lin(torch.randn(1, 1))
output.backward()
lin.bias.grad
> None
lin.weight.grad
> tensor([[-0.0095]])
Thanks, but how can I set the bias to a constant tensor array?
These methods should work:
lin = nn.Linear(1, 1)
with torch.no_grad():
    lin.bias.fill_(1.)
# or
lin.bias = nn.Parameter(torch.randn(1))
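Putting the two answers together, here is a minimal sketch (using a toy nn.Linear(1, 1), not any specific model) showing that a bias filled with a constant and frozen via requires_grad stays unchanged after an optimizer step:

```python
import torch
import torch.nn as nn

# fix the bias to a constant, then freeze it
lin = nn.Linear(1, 1)
with torch.no_grad():
    lin.bias.fill_(1.)
lin.bias.requires_grad = False

# optimize only the weight
optimizer = torch.optim.Adam([lin.weight], lr=1.)

out = lin(torch.randn(4, 1))
out.mean().backward()
optimizer.step()

print(lin.bias)  # still tensor([1.])
```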
How can I set the bias of the classifier of MobileNetV2 to 55? Can I use this method? Something like this:
model = torchvision.models.mobilenet_v2(pretrained=pretrained)
model.classifier[1] = nn.Linear(model.last_channel, 1)
model.classifier[1].bias = nn.Parameter(55)
No, your code snippet won’t work, since nn.Parameter expects a tensor input, not an int (see my code snippet). Besides that, model.classifier[1].bias in the original model contains 1000 values, but I would assume that a single bias value would also work, as internally broadcasting might be used.
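A fixed version of that snippet could look as follows. To keep it self-contained, a plain nn.Linear(1280, 1) stands in for the replaced model.classifier[1] (1280 is MobileNetV2's last_channel); the same pattern applies to the real model:

```python
import torch
import torch.nn as nn

# stand-in for model.classifier[1] = nn.Linear(model.last_channel, 1)
classifier = nn.Linear(1280, 1)

# nn.Parameter(55) fails because 55 is an int, not a tensor;
# wrap the constant in a tensor instead
classifier.bias = nn.Parameter(torch.tensor([55.]))

# or fill the existing bias parameter in place
with torch.no_grad():
    classifier.bias.fill_(55.)

print(classifier.bias)  # tensor([55.], requires_grad=True)
```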