Is layer_activation (register_forward_hook) the same as gradient?

I was wondering: is the intermediate layer output captured by register_forward_hook the same as the gradient of that intermediate output with respect to the image?

No, the forward activations are not the gradients, unless I'm misunderstanding your question. This tutorial could be useful, as it shows which values are used during the forward and backward passes.
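If it helps, here is a minimal, self-contained sketch of the difference (the tiny linear model and the hook name are just placeholders): the forward hook stores whatever the layer outputs during the forward pass, while a gradient only exists once you run a backward pass on some scalar.

    import torch
    import torch.nn as nn

    # toy model just for illustration
    model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))

    activation = {}
    def save_activation(module, inp, out):
        # forward hook: stores what the layer computed in the forward pass
        activation["fc1"] = out

    model[0].register_forward_hook(save_activation)

    x = torch.randn(1, 10, requires_grad=True)
    y = model(x)

    print(activation["fc1"].shape)   # torch.Size([1, 5])  -- the activation

    # a gradient only appears after differentiating some scalar
    loss = y.sum()
    grad_x, = torch.autograd.grad(loss, x)
    print(grad_x.shape)              # torch.Size([1, 10]) -- gradient w.r.t. the input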

Thanks!

My end goal is to calculate the gradient of the layer activations (intermediate layer outputs) with respect to the input image. However, because of a later step, I know the end result needs to be the same size as my layer activation array. With the code below I seem to be getting a result that is the same size as my input image instead, hence my confusion between gradients and layer activations. This is the code I am using:


    model.eval()
    tcav = {}
    for ind, (img, label) in enumerate(loader):
        # move/cast the image first, then mark it as requiring gradients,
        # so that img stays a leaf tensor in the autograd graph
        img = img.to(device, dtype=torch.float)
        img.requires_grad_(True)

        output = model(img)

        layer_activation = activation[L]  # activation of layer L, filled by the forward hook
        loss = -torch.mean(layer_activation)
        loss.backward(retain_graph=True)

        # gradient of the scalar loss w.r.t. the input image -> same shape as img
        gradients = tuple_of_tensors_to_tensor(torch.autograd.grad(loss, img))[0]
        grads = normalize(gradients.cpu().detach().numpy().ravel())
        tcav[ind] = layer_activation  # grads

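For reference, this self-contained sketch (with a throwaway two-layer model standing in for mine) reproduces what I am seeing: because the loss is a scalar, the gradient with respect to the image takes the image's shape, while the gradient with respect to the activation itself takes the activation's shape.

    import torch
    import torch.nn as nn

    # placeholder model and hook, standing in for my real setup
    model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
    model.eval()

    activation = {}
    def save_activation(module, inp, out):
        activation["L"] = out

    model[0].register_forward_hook(save_activation)

    img = torch.randn(1, 10, requires_grad=True)
    output = model(img)

    layer_activation = activation["L"]
    loss = -torch.mean(layer_activation)  # scalar

    # gradients of the same scalar loss w.r.t. the image and w.r.t. the activation
    grad_img, grad_act = torch.autograd.grad(loss, (img, layer_activation))

    print(layer_activation.shape)  # torch.Size([1, 5])  -- the size I need
    print(grad_img.shape)          # torch.Size([1, 10]) -- the size I actually get
    print(grad_act.shape)          # torch.Size([1, 5])  -- same shape as the activation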
Do you perhaps know what I need to do?