def get_optim_policy(self):
    # Collect each sub-module's parameters into its own group.
    params = [
        {'params': self.backbone.parameters()},
        {'params': self.res_part.parameters()},
        {'params': self.global_reduction.parameters()},
        {'params': self.global_softmax.parameters()},
        {'params': self.res_part2.parameters()},
        {'params': self.reduction.parameters()},
        {'params': self.softmax.parameters()},
    ]
    return params
optim_policy = model.get_optim_policy()
optimizer = torch.optim.SGD(optim_policy, lr=learning_rate, momentum=0.9, weight_decay=5e-4)
I saw this kind of code in an open-source project and noticed that an optim policy is passed to torch.optim.SGD instead of model.parameters(). But I don't understand why this is done. The code below is what I'm familiar with. Can someone tell me the difference between the two? Is there a case where a more fine-grained optim_policy is needed?
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9, weight_decay=5e-4)
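For reference, from the PyTorch docs it looks like each dict in such a list can also carry its own options (e.g. a per-group learning rate) that override the top-level defaults. Here is a minimal sketch of what I mean, reusing the model attributes from above (the 0.1 factor is just made up):

import torch

# Sketch: per-group options override the defaults passed to the optimizer.
optimizer = torch.optim.SGD(
    [
        # smaller lr for the (presumably pretrained) backbone -- factor is made up
        {'params': model.backbone.parameters(), 'lr': 0.1 * learning_rate},
        # no 'lr' key here, so this group falls back to the default lr below
        {'params': model.reduction.parameters()},
    ],
    lr=learning_rate,
    momentum=0.9,
    weight_decay=5e-4,
)

Is that the reason people split the parameters into groups like this?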