How to manually set grad in a new layer's definition

I am writing a new loss in PyTorch, and I define an nn.Parameter that should be updated according to an expression I compute myself.
But I can't get the right grad when I set it through self.parameter.grad or self.parameter._grad.

class Loss(nn.Module):
    def __init__(...):
        super().__init__()
        self.parameter = nn.Parameter(...)  # init parameters that need to be updated manually
    def forward(...):
        ...
        print(self.parameter.grad)  # still the autograd result, not zero
        self.parameter._grad = Variable(torch.zeros(...))  # for example, set it to zero

The question is: where and how can I set the grad manually?

I am not sure I understand what you're doing. Why would you set gradients during the forward pass?

If you want to accumulate gradients in a Variable, I guess the cleanest way to do it is:

your_var.backward(your_grad)
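
For instance (a small made-up example; in recent PyTorch versions a plain tensor with requires_grad=True plays the role of a Variable):

import torch

# leaf tensor standing in for your_var
your_var = torch.randn(3, requires_grad=True)
your_grad = torch.ones(3)

your_var.backward(your_grad)   # accumulates your_grad into your_var.grad
print(your_var.grad)           # tensor([1., 1., 1.])

your_var.backward(your_grad)   # a second call keeps accumulating
print(your_var.grad)           # tensor([2., 2., 2.])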

Sorry for the confusion. I just want to set the grad manually, for example to a zero tensor, so that on every backpropagation it stays unchanged instead of being computed by autograd. Also, I don't want the updates of the earlier layers to change.

I guess you want to define your own backward method. If that's the case, you should implement an autograd Function rather than an nn.Module. Something like this:

from torch.autograd import Function
import torch

class Loss(Function):

    @staticmethod
    def forward(ctx, input, target):
        ctx.save_for_backward(input, target)
        output = torch.sum((input - target).pow(2), 1)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        input, target = ctx.saved_tensors
        # should be:
        # grad_input = torch.diag(grad_output).mm(2 * (input - target))
        # but you want:
        grad_input = torch.zeros(input.size())
        grad_target = None
        return grad_input, grad_target
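
A usage sketch, assuming the staticmethod version above and some made-up tensors: the Function is called through .apply so that autograd picks up the custom backward.

input = torch.randn(5, 3, requires_grad=True)
target = torch.randn(5, 3)

output = Loss.apply(input, target)         # shape (5,), one value per sample
output.backward(torch.ones_like(output))   # runs the custom backward

print(input.grad)  # all zeros, because backward returns a zero grad_input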

Thank you for your reply! I think your code changes the updates of the earlier layers, because setting grad_input to zero means none of the earlier layers' parameters will change. But I have a parameter inside my loss, and I need that parameter to be updated manually without affecting the earlier layers. I've tried your code and I get the following:

optimizer = optim.SGD(lossobject.parameters(), lr=0.01)
AttributeError: 'Loss' object has no attribute 'parameters'

I guess what you want is:

# Compute your loss as usual
out = model(input)
loss = loss_module(out, target)

# zero all gradients and backward
optimizer.zero_grad()
loss.backward()

# reset the gradients for the loss parameter
loss_module.zero_grad()

# compute the gradient for your loss parameter and set it
my_param_grad = get_params_grad(...)
loss_module.parameter.backward(my_param_grad)  # EDITED: used to be `parameters`, which is wrong

# now update all the parameters
optimizer.step()
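
Putting it together, a minimal self-contained sketch of this recipe (the ManualGradLoss module, the toy linear model and the zero gradient are made up for illustration; get_params_grad is replaced by an explicit zero tensor since that is what you asked for):

import torch
import torch.nn as nn
import torch.optim as optim

class ManualGradLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # parameter whose gradient we set by hand
        self.parameter = nn.Parameter(torch.tensor(1.0))

    def forward(self, out, target):
        # the parameter takes part in the loss, so autograd would
        # normally give it a gradient
        return ((out - target) ** 2).mean() * self.parameter

model = nn.Linear(4, 1)  # toy model
loss_module = ManualGradLoss()
optimizer = optim.SGD(
    list(model.parameters()) + list(loss_module.parameters()), lr=0.01
)

input = torch.randn(8, 4)
target = torch.randn(8, 1)

# compute the loss and backprop as usual (fills every .grad)
out = model(input)
loss = loss_module(out, target)
optimizer.zero_grad()
loss.backward()

# throw away the autograd gradient of the loss parameter only
loss_module.zero_grad()

# set the gradient you actually want, e.g. a zero tensor
loss_module.parameter.backward(torch.zeros_like(loss_module.parameter))

# earlier layers keep their autograd gradients,
# the loss parameter gets the manual one
optimizer.step()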

Thank you! I think your answer is exactly what I want. But I get an error:

criterion[1].parameters.backward(params_grad)
AttributeError: 'function' object has no attribute 'backward'

Is backward the right way to assign params_grad to the parameters in loss_module?

I think that's my fault. You have to implement an nn.Module (and not a Function as I suggested) in order to try @albanD's solution.

My solution works with your original nn.Module, and I made a typo: it should be loss_module.parameter, not parameters, sorry (I edited my comment above).

Thank you! That solved my problem!