jpj (jpj) May 18, 2021, 9:56am #1
def forward(self, input):
    H1 = self.hidden_layer_1(input)
    H1 = self.relu(H1)
    final_inputs = self.output(H1)
    return final_inputs

optimizer = torch.optim.SGD(self.parameters(), lr=lr, momentum=momentum)

# load pretrained weights and bias for the hidden layer
weights = np.load('./w0.npy')
bias = np.load('./b0.npy')
self.hidden_layer_1.weight.data = torch.from_numpy(weights).float()
self.hidden_layer_1.bias.data = torch.from_numpy(bias).float()

# freeze the hidden layer
for param in self.hidden_layer_1.parameters():
    param.requires_grad = False
I am trying to freeze the layer, but the loss still changes after each epoch.
Try freezing the parameters before passing them to the optimizer.
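A minimal sketch of that ordering (layer sizes and learning rate are assumptions, not from the original code): set `requires_grad = False` first, then build the optimizer from only the still-trainable parameters, and verify the frozen weights do not move after a step.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy stand-in for the original model: one hidden layer plus an output layer
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
hidden = model[0]

for param in hidden.parameters():
    param.requires_grad = False  # freeze BEFORE creating the optimizer

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),  # skip frozen params
    lr=0.1,
    momentum=0.9,
)

before = hidden.weight.detach().clone()
loss = model(torch.randn(16, 4)).pow(2).mean()
loss.backward()
optimizer.step()
# hidden.weight is identical to `before`: the frozen layer never moved
```

Filtering with `p.requires_grad` also avoids handing the optimizer parameters it can never update.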
jpj (jpj) May 18, 2021, 10:18am #3
I tried that as well, but the training and validation loss are still changing.
Is hidden_layer_1 the only layer in your model? Show more complete code if possible.
jpj (jpj) May 18, 2021, 12:17pm #5
Yes. I have included the forward function in the edited post.
But I saw that you call self.output in this forward function. Doesn't self.output have trainable parameters?
Because if your model is still learning, it means that something is changing somewhere. If you only have one layer and you want to freeze it, what is the point of training the model here then?
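To see which parameters would still be updated, you can list the ones with `requires_grad=True`. This is a toy reconstruction of the model (layer sizes are assumptions), not the original code:

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden_layer_1 = nn.Linear(4, 8)
        self.relu = nn.ReLU()
        self.output = nn.Linear(8, 1)

model = Net()

# freeze only the hidden layer, as in the original snippet
for param in model.hidden_layer_1.parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
# output.weight and output.bias remain trainable, so the loss can still change
```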
jpj (jpj) May 18, 2021, 1:05pm #7
I was checking whether the freezing works for the first layer; then I would add a second hidden layer, which I would not freeze.
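That plan can be sketched as follows (layer sizes, learning rate, and the dummy loss are all assumptions): freeze only the first hidden layer, train, and check that the second hidden layer updates while the first stays fixed.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden_layer_1 = nn.Linear(4, 8)  # will be frozen
        self.hidden_layer_2 = nn.Linear(8, 8)  # stays trainable
        self.relu = nn.ReLU()
        self.output = nn.Linear(8, 1)

    def forward(self, x):
        h1 = self.relu(self.hidden_layer_1(x))
        h2 = self.relu(self.hidden_layer_2(h1))
        return self.output(h2)

model = Net()
for param in model.hidden_layer_1.parameters():
    param.requires_grad = False  # freeze before building the optimizer

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1, momentum=0.9
)

w1_before = model.hidden_layer_1.weight.detach().clone()
w2_before = model.hidden_layer_2.weight.detach().clone()

loss = model(torch.randn(16, 4)).pow(2).mean()
loss.backward()
optimizer.step()
# hidden_layer_1.weight is unchanged; hidden_layer_2.weight has moved
```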