I want to freeze all parameters of the network ks, which is used as a submodule of Net2_kc, but when I print the parameters they still change every time. Can someone tell me what the problem is?
#transfer learning
net2_kc.ks.load_state_dict(torch.load(path+"heat_conductivity_from_fluent_solution_one_layer_200"+".pt"))
class Net2_kc(nn.Module):
    # The __init__ function stacks the layers of the
    # network sequentially
    def __init__(self):
        super(Net2_kc, self).__init__()
        self.ks = ks()
        self.final = nn.Sequential(
            nn.Linear(1, 1),  # using nn.Linear(2, 1) here gives very bad results
        )

    def forward(self, x):
        ks = self.ks(x)
        output2 = self.final(ks)
        return output2
for parameter in Net2_kc().ks.main.parameters():
    parameter.requires_grad = False
    print("param", parameter)

optimizer_kc = optim.Adam(filter(lambda p: p.requires_grad, net2_kc.parameters()), lr=learning_rate, betas=(0.9, 0.99), eps=10**-15)
It is a bit hard to see what is going on without the full model definition, but in the line for parameter in Net2_kc().ks.main.parameters(): the call Net2_kc() creates a separate, new network, so you may not actually be changing the requires_grad field of the model you are training.
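For example, a minimal sketch of the fix, assuming net2_kc, learning_rate, and optim are the same objects as in your snippet and that ks really does expose its layers through a main attribute, would be to freeze the parameters on the instance the optimizer will actually see:

# freeze the ks submodule of the instance that is being trained
for parameter in net2_kc.ks.main.parameters():
    parameter.requires_grad = False
    print("param", parameter)

# only the still-trainable parameters are handed to the optimizer
optimizer_kc = optim.Adam(
    filter(lambda p: p.requires_grad, net2_kc.parameters()),
    lr=learning_rate, betas=(0.9, 0.99), eps=10**-15,
)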
Additionally, to simplify things, you could also try wrapping the relevant part of your model in torch.no_grad (see no_grad — PyTorch 1.11.0 documentation).
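A minimal sketch of that idea, keeping the same class layout as in your post, would be to run the frozen submodule inside a torch.no_grad() block in forward:

def forward(self, x):
    # ks runs without building an autograd graph, so its weights
    # receive no gradients and stay fixed; final is still trained
    with torch.no_grad():
        ks_out = self.ks(x)
    output2 = self.final(ks_out)
    return output2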
Yes, something like that, but it depends on which part of the model you want to freeze; if it is part of the sequential module then your original approach might be cleaner.
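If the goal is simply "freeze everything inside ks", note that nn.Module.requires_grad_(False) does the same thing as the loop in one call (again assuming net2_kc is the instance you train):

net2_kc.ks.requires_grad_(False)  # sets requires_grad = False on every parameter of ks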