I'm writing a new loss in PyTorch, and I define an nn.Parameter that should be updated with an expression of my own.
But I can't get the right grad when setting it through self.parameter.grad or self.parameter._grad.
class Loss(nn.Module):
    def __init__(self, ...):
        super().__init__()
        self.parameter = nn.Parameter(...)  # parameter that needs to be updated manually

    def forward(self, ...):
        ...
        print(self.parameter.grad)              # still the autograd result, not zero
        self.parameter.grad = torch.zeros(...)  # for example, set it to zero
The question is where and how I can set the grad manually.
Sorry for the confusion. I just want to set the grad manually, for example to a zero tensor, so that after every backpropagation it stays unchanged instead of holding whatever autograd computed. Also, I don't want the updates of the earlier layers to be affected.
I guess you want to define your own backward method. If that's the case, you should implement an autograd Function rather than an nn.Module. Something like this:
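A minimal sketch of what such a Function could look like; the mean-squared-error forward is only a placeholder, and the zeroed grad_input in backward is what the follow-up below reacts to:

import torch
from torch.autograd import Function

class MyLoss(Function):
    @staticmethod
    def forward(ctx, input, target):
        ctx.save_for_backward(input, target)
        # placeholder computation; replace with the real loss
        return ((input - target) ** 2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        input, target = ctx.saved_tensors
        # returning zeros here means no gradient flows back to earlier layers
        grad_input = torch.zeros_like(input)
        return grad_input, None

# usage: loss = MyLoss.apply(out, target)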
Thank you for your reply! I think your code changes the updates of the earlier layers, because you set grad_input to zero, which means none of the earlier layers' parameters will change. But I have a parameter in my loss, and I need that parameter to be updated manually without changing the earlier layers. I've tried your code and get the following:
optimizer = optim.SGD(lossobject.parameters(), lr=0.01)
AttributeError: 'Loss' object has no attribute 'parameters'
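For context on that error: a bare autograd Function has no parameters() of its own, so one common pattern is to keep an nn.Module that owns the Parameter and calls the Function inside its forward. A minimal sketch under that assumption (the names and shape are placeholders, and MyLoss is the Function sketched above):

import torch
import torch.nn as nn

class Loss(nn.Module):
    def __init__(self, param_shape):
        super().__init__()
        # the Parameter lives on the Module, so Loss(...).parameters()
        # works for the optimizer; its grad is set by hand later on
        self.parameter = nn.Parameter(torch.zeros(param_shape))

    def forward(self, input, target):
        # delegate the custom backward to the autograd Function
        return MyLoss.apply(input, target)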
# Compute your loss as usual
out = model(input)
loss = loss_module(out, target)
# zero all gradients and backward
optimizer.zero_grad()
loss.backward()
# reset the gradients for the loss parameter
loss_module.zero_grad()
# compute the gradients for your loss parameters and set them
my_param_grad = get_params_grad(...)
loss_module.parameter.backward(my_param_grad)  # note: `parameter` (the tensor itself), not `parameters`
# now update all the parameters
optimizer.step()
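To make the moving parts concrete, here is a small self-contained sketch of the same workflow; the model, the shapes, and the hand-written gradient standing in for get_params_grad(...) are all made up for illustration:

import torch
import torch.nn as nn
import torch.optim as optim

class Loss(nn.Module):
    def __init__(self):
        super().__init__()
        # parameter whose gradient is set by hand
        self.parameter = nn.Parameter(torch.zeros(3))

    def forward(self, out, target):
        # the parameter enters the loss, so autograd would normally give it a grad
        return ((out - target) ** 2).mean() + 0.1 * self.parameter.sum()

model = nn.Linear(5, 3)
loss_module = Loss()
# the optimizer sees both the model's and the loss module's parameters
optimizer = optim.SGD(list(model.parameters()) + list(loss_module.parameters()), lr=0.01)

input = torch.randn(4, 5)
target = torch.randn(4, 3)

out = model(input)
loss = loss_module(out, target)

optimizer.zero_grad()
loss.backward()                                 # autograd grads for everything

loss_module.zero_grad()                         # discard the autograd grad on self.parameter
my_param_grad = torch.ones(3)                   # stand-in for get_params_grad(...)
loss_module.parameter.backward(my_param_grad)   # accumulate the manual grad into .grad

optimizer.step()                                # one update for model and loss parameters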