How to get gradients wrt inputs for intermediate layers?

Hi there,

I’d like to compute the gradient wrt inputs for several layers inside a network. So far, I’ve built several intermediate models to compute the gradients of the network output wrt input using autograd.grad.

Can it be done more efficiently using hooks?

Thanks.

Hi,

Using autograd.grad sounds like the best solution in general.

Not sure how you would use hooks. If the final result you want is a linear combination of all these gradients, then you might be able to use the linearity of the gradient to do that. But I’m not sure. You will need to share more information about what you’re trying to do here.
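
(For illustration, a sketch of what that linearity trick would look like if the final quantity really were a weighted sum of the per-layer gradients; the layers and weights below are placeholders.)

  import torch

  x = torch.randn(8, 16, requires_grad=True)              # placeholder input
  layers = [torch.nn.Linear(16, 16) for _ in range(3)]    # placeholder layers
  a = [1.0, 0.5, 2.0]                                     # placeholder weights of the linear combination

  # d/dx of sum_l a_l * y_l equals sum_l a_l * d(y_l)/dx (linearity of the gradient),
  # so a single autograd.grad call on the combined scalar replaces one call per layer.
  combined = sum(w * layers[l](x).sum() for l, w in enumerate(a))
  grad_x = torch.autograd.grad(combined, x)[0]            # shape (8, 16)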

Hi,

Thanks for the reply.

I’d like to compute the gradient of each layer, so basically I loop over the layers and call autograd.grad (the layers in the code below are stored in a list):

  import torch

  # x, layers, n and norm_gradient_velocity are defined elsewhere
  var_x = x.clone().detach().requires_grad_(True)  # Variable is deprecated; set requires_grad on the tensor

  for l in range(n):
      mx = layers[l](var_x)

      # gradient of this layer's output w.r.t. the input
      # (grad_outputs of ones is equivalent to differentiating mx.sum())
      gradients = torch.autograd.grad(outputs=mx, inputs=var_x,
                                      grad_outputs=torch.ones_like(mx),
                                      create_graph=True, retain_graph=True,
                                      only_inputs=True)[0]

      # max over the batch of each sample's infinity-norm of the gradient
      norm_gradient_velocity[l] = torch.max(
          torch.norm(gradients.view(gradients.shape[0], -1), p=float('inf'), dim=1))

Because such a loop is slow during training, I was wondering if it could be done using hooks.

Ok, so not linear at all, so you won’t be able to do that :confused:
In that case, I don’t think there is any other way to do this.
Hooks would only allow you to call a function during the backward pass, but you would still have to do all of these autograd.grad calls.

Sorry to bother you, but have you managed to get the gradients of intermediate layers w.r.t. the input data with hooks in PyTorch?
I have now run into the same issue, and I want to calculate the Frobenius norm of the Jacobian matrix w.r.t. every layer and the input data of a network.
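
For concreteness, for a single layer the quantity I mean is something like this sketch (the layer and shapes are just placeholders), using torch.autograd.functional.jacobian:

  import torch

  layer = torch.nn.Linear(16, 32)    # placeholder layer
  x = torch.randn(16)                # placeholder input (single sample, no batch dimension)

  # Jacobian of the layer output w.r.t. x, shape (32, 16)
  J = torch.autograd.functional.jacobian(layer, x)

  # Frobenius norm of the Jacobian (matrix_norm defaults to the Frobenius norm)
  fro = torch.linalg.matrix_norm(J)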
Thanks for your reply!

Hi,
I also face this challenge and have tried several solutions that came to mind.
If you find a solution, I would greatly appreciate it if you could please share it with us.

I have found a solution.
My problem is to start the gradient computation from a specific filter in a specific layer with respect to the input image.
First, I assigned a forward hook to all conv layers in order to keep the output of each filter (the activation map). Second, I assigned a backward hook to all layers to keep their gradients during the backward pass.
Third, I fed the model an input image to obtain all activation maps. Then I called the backward function on the norm of one of the activation maps (note that each filter’s activation map plays the same role as the loss output here). Finally, I have all gradients in a list, where the last element of the list is the gradient of the input image, which should have the size (1, 3, 224, 224), for instance.
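
Roughly, the flow looks like the sketch below (a placeholder two-layer CNN and a placeholder filter index rather than my actual model; here each activation’s gradient is grabbed with a tensor register_hook attached inside the forward hook, which is one way to do the backward-hook step):

  import torch
  import torch.nn as nn

  # placeholder two-layer CNN
  model = nn.Sequential(
      nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
      nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
  )

  activations, gradients = [], []

  def save_activation(module, inp, out):
      activations.append(out)                           # keep this conv's activation maps
      out.register_hook(lambda g: gradients.append(g))  # and grab their gradient during the backward pass

  for m in model.modules():
      if isinstance(m, nn.Conv2d):
          m.register_forward_hook(save_activation)

  x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder input image
  model(x)

  # start the backward pass from the norm of one filter's activation map
  # (last conv, filter 5, chosen as placeholders)
  activations[-1][0, 5].norm().backward()

  print(x.grad.shape)   # gradient of that filter norm w.r.t. the input image: torch.Size([1, 3, 224, 224])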

Did you use the same function for registering the forward_hook and the backward_hook?

After I assigned a backward hook, I get a tuple for the intermediate layer’s output.

Hello,
I also want to get the gradient of the output w.r.t. the activation of an intermediate layer.
I tried a backward_hook for that layer, but it returns a tuple of what exactly? Could you explain? Is it the gradient of the output w.r.t. the weight and bias of that layer? I want it w.r.t. the activation (output) of that layer.

Hi, were you able to find a solution to this? I’m trying to do the same but I get None as output instead.

@shubham_vats You can use .retain_grad() on the input value, but a more consistent way of getting this value is to use hooks. You can use a full backward hook (not a backward hook, as that’s deprecated and gives the wrong result). Also, check that there’s no use of in-place operations (for example with ReLU) nor of .detach(), which breaks the gradient of your computational graph.

@Avani_Gupta For an explanation of what grad_input and grad_output mean there’s an explanation on the forums here. The grad_output term is the gradient of the loss with respect to the output of that nn.Module.
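
For a concrete picture, here is a tiny sketch (placeholder layer and shapes) of what those tuples contain for a full backward hook:

  import torch
  import torch.nn as nn

  def hook(module, grad_input, grad_output):
      # grad_output: gradient of the loss w.r.t. this module's output
      # grad_input:  gradient of the loss w.r.t. this module's (positional) input
      print([g.shape for g in grad_output])   # [torch.Size([4, 8])]
      print([g.shape for g in grad_input])    # [torch.Size([4, 16])]

  layer = nn.Linear(16, 8)                    # placeholder layer
  layer.register_full_backward_hook(hook)

  x = torch.randn(4, 16, requires_grad=True)
  layer(x).sum().backward()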

@AlphaBetaGamma96 thank you. I’ll look into this. I’m fairly new to PyTorch and hooks didn’t come naturally to me. I was hoping to use create_feature_extractor to get those intermediate outputs and do something with them, but apparently the grad attribute is what I need, for which retain_grad should be helpful. I also wanted to avoid hooks because, as per my understanding, I would have to retrain the model to use them? I am using load_state_dict to load a pretrained model for now.

@shubham_vats You won’t have to retrain the model; you can just attach the hooks to the model, perform a forward pass, then call loss.backward(). That will trigger the full backward hooks and you’ll get your derivatives (although they will be for all samples, so you’ll need to take a mean over them to get the correct shape).
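
For example, something along these lines (a rough sketch; the model, batch, and loss are placeholders, and in practice the weights would come from your load_state_dict call):

  import torch
  import torch.nn as nn

  # placeholder model; in practice you would restore your own pretrained weights with
  # model.load_state_dict(torch.load(...)); no retraining is needed for the hooks
  model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Flatten(), nn.Linear(8 * 8 * 8, 10))

  grads = {}

  def make_hook(name):
      def hook(module, grad_input, grad_output):
          grads[name] = grad_output[0].detach()   # grad of the loss w.r.t. this module's output
      return hook

  for name, m in model.named_modules():
      if isinstance(m, nn.Conv2d):
          m.register_full_backward_hook(make_hook(name))

  x = torch.randn(4, 3, 8, 8)        # placeholder batch
  loss = model(x).sum()              # placeholder loss
  loss.backward()                    # triggers the full backward hooks

  for name, g in grads.items():
      print(name, g.shape)           # per-sample gradients; take g.mean(0) for a batch average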

Thank you for getting back. I was able to get it to work without the hooks using retain_grad and backward. But, it’s always good to know an alternative approach. Thank you for all the help!