I want to set and fix the weights of an nn.Conv1d layer so that it has fixed parameters and is NOT learnable.
Does
self.conv1.weight = torch.nn.Parameter(torch.ones_like(self.conv1.weight))
make the weights fixed?
I also want to set initial weights so that the layer starts learning from those initial weights. Is it correct to use the script below?
self.conv1.weight = torch.nn.Parameter(torch.ones_like(self.conv1.weight), requires_grad=True)
No, that’s not possible, as the requires_grad attribute can only be changed for an entire tensor, not for individual elements of it. Note also that rewrapping the weight in a new nn.Parameter, as in your first snippet, does not freeze anything, since parameters have requires_grad=True by default.
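To make the entire layer non-learnable, you can therefore disable gradients for the whole weight tensor. A minimal sketch, assuming a standalone nn.Conv1d (the channel and kernel sizes are made-up values):

```python
import torch
import torch.nn as nn

conv1 = nn.Conv1d(in_channels=3, out_channels=8, kernel_size=5)

# Set the initial values without recording the operation in autograd.
with torch.no_grad():
    conv1.weight.fill_(1.0)

# Freeze the whole tensor: no gradients will be computed for it.
conv1.weight.requires_grad_(False)
```

Alternatively, you could simply pass only the parameters you want to train to the optimizer.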
An alternative approach would be to either zero out the gradients of the desired elements after the backward() call and before the step() call, or to recreate the parameter from separate tensors (which use different requires_grad attributes) via torch.cat or torch.stack; both are sketched below. The latter approach might be cumbersome, however, depending on how complicated the tensor creation is.
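A minimal sketch of the first approach; the mask and the layer sizes are made-up values, and here the weights of the first output channel are kept fixed:

```python
import torch
import torch.nn as nn

conv1 = nn.Conv1d(in_channels=3, out_channels=8, kernel_size=5)
optimizer = torch.optim.SGD(conv1.parameters(), lr=0.1)

# Hypothetical mask: True marks the elements that should stay fixed.
freeze_mask = torch.zeros_like(conv1.weight, dtype=torch.bool)
freeze_mask[0] = True  # freeze all weights of the first output channel

x = torch.randn(4, 3, 32)  # dummy input: (batch, channels, length)
conv1(x).mean().backward()

# Zero out the gradients of the frozen elements before the optimizer step.
conv1.weight.grad[freeze_mask] = 0.0
optimizer.step()
```

Keep in mind that this only zeroes the gradient; an optimizer using weight decay could still move the frozen elements slightly, since the decay term is applied inside step() regardless of the gradient values.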
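And a sketch of the second approach: the full weight is rebuilt with torch.cat in every forward pass from a fixed part (stored here as a buffer, so it never receives gradients) and a trainable nn.Parameter, and the functional conv1d is applied manually. The module name and the split into first channel vs. the rest are made up for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartiallyFrozenConv1d(nn.Module):
    # Hypothetical module: the first output channel is fixed, the rest learn.
    def __init__(self, in_channels=3, out_channels=8, kernel_size=5):
        super().__init__()
        # Fixed part: a buffer is saved in the state_dict but gets no gradients.
        self.register_buffer("weight_fixed", torch.ones(1, in_channels, kernel_size))
        # Trainable part: a regular parameter (requires_grad=True by default).
        self.weight_train = nn.Parameter(
            torch.randn(out_channels - 1, in_channels, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # Recreate the full weight tensor from both parts on every forward pass.
        weight = torch.cat([self.weight_fixed, self.weight_train], dim=0)
        return F.conv1d(x, weight, self.bias)

module = PartiallyFrozenConv1d()
out = module(torch.randn(4, 3, 32))  # dummy input: (batch, channels, length)
```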