I’m learning Double DQN. I want to compute a Variable by running it through a model, then backward a loss calculated from that Variable, but I do not want to optimize that model’s parameters. How can I do that?
Do you just want to run one iteration?
Here are two different solutions you can try:
- You can tell autograd not to compute gradients for a Variable with:
variable.requires_grad = False
Then build your optimizer over only the parameters that still require gradients (a sketch of this in the Double DQN setting follows after this list):
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=learning_rate)
- Compute gradients on all your Variables and choose which ones the optimizer updates, using per-parameter-group options:
optimizer = torch.optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
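Putting the first solution in the Double DQN context of the original question, here is a minimal sketch, assuming a small online network and a frozen target network. All the names and the fake batch below are illustrative, not from the thread, and it uses the tensor API rather than the old Variable wrapper; backward() leaves the target network untouched and the optimizer only ever updates the online network.

import torch
import torch.nn as nn

# Hypothetical Q-networks for a 4-dim state, 2-action problem.
online_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net.load_state_dict(online_net.state_dict())

# Solution 1: freeze the target network's parameters.
for p in target_net.parameters():
    p.requires_grad = False

# The optimizer only receives parameters that still require gradients.
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, online_net.parameters()), lr=1e-3)

# Fake transition batch, just to make the sketch runnable.
state = torch.randn(8, 4)
action = torch.randint(0, 2, (8, 1))
reward = torch.randn(8)
next_state = torch.randn(8, 4)
gamma = 0.99

# Double DQN target: the online net selects the next action,
# the frozen target net evaluates it.
next_action = online_net(next_state).argmax(dim=1, keepdim=True)
next_q = target_net(next_state).gather(1, next_action).squeeze(1)
target = reward + gamma * next_q  # requires_grad is False all the way through

q = online_net(state).gather(1, action).squeeze(1)
loss = nn.functional.mse_loss(q, target)

optimizer.zero_grad()
loss.backward()   # gradients flow into online_net only
optimizer.step()  # target_net's parameters are never updated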
Hi Cadene,
May I know what lambda p: p.requires_grad means here and how it works? Shouldn't it be lambda variable: variable.requires_grad?
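For reference, p is just the lambda's argument name, so lambda variable: variable.requires_grad is the exact same function; each parameter tensor yielded by model.parameters() is passed in as p, and only those whose requires_grad is still True are kept. A tiny illustration, using a hypothetical two-layer model:

import torch.nn as nn

# Hypothetical two-layer model.
model = nn.Sequential(nn.Linear(10, 5), nn.Linear(5, 2))

# Freeze the first layer only.
for p in model[0].parameters():
    p.requires_grad = False

# filter(lambda p: p.requires_grad, ...) keeps exactly the unfrozen
# parameters; the name 'p' is arbitrary and 'variable' would behave the same.
trainable = list(filter(lambda p: p.requires_grad, model.parameters()))
print(len(list(model.parameters())), len(trainable))  # prints: 4 2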