Calculating the gradient of one hidden unit w.r.t. another hidden unit

I want to calculate the gradient of the penultimate layer in a classifier w.r.t. one of the hidden layers. Is there a provision for that?

I'm not sure I understand your use case correctly.
Do you want to compute the gradients of the parameters of the penultimate layer w.r.t. the forward activation of a hidden layer?

Yes. In essence, what I wanted to do was understand how a particular hidden unit contributes to the value of a hidden unit in the penultimate layer. This could be gauged by the gradient of the penultimate layer parameters w.r.t. the hidden unit parameters.

Something like this might work:

import torch
import torchvision.models as models

model = models.vgg16()

activation = {}
def get_activation(name):
    def hook(model, args, output):
        activation[name] = output
    return hook

# register the hook on the penultimate linear layer of the classifier
model.classifier[3].register_forward_hook(get_activation("fc"))

x = torch.randn(1, 3, 224, 224)
out = model(x)

grad = torch.autograd.grad(
    outputs=activation["fc"],
    inputs=model.features[0].weight,
    grad_outputs=torch.ones_like(activation["fc"]),
)

which calculates the gradients in model.features[0].weight (the kernel of the first conv layer) w.r.t. the activations created in the penultimate linear layer.
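If you want the unit-to-unit view from the original question (how one hidden *activation* influences one penultimate *activation*, rather than parameter gradients), something along these lines might work. This is a minimal sketch on a hypothetical toy MLP, not the VGG16 above; the layer names and sizes are made up for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy network: hidden layer -> penultimate layer -> output layer
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),   # hidden layer
    nn.Linear(16, 4), nn.ReLU(),   # penultimate layer
    nn.Linear(4, 3),               # output layer
)

activation = {}
def get_activation(name):
    def hook(module, args, output):
        activation[name] = output
    return hook

# capture both intermediate activations with forward hooks
model[0].register_forward_hook(get_activation("hidden"))
model[2].register_forward_hook(get_activation("penultimate"))

x = torch.randn(1, 8)
out = model(x)

# gradient of a single penultimate unit (index 0) w.r.t. the
# entire hidden activation; autograd.grad accepts intermediate
# tensors as inputs as long as they are part of the graph
grad, = torch.autograd.grad(
    outputs=activation["penultimate"][0, 0],
    inputs=activation["hidden"],
    retain_graph=True,
)
print(grad.shape)  # one gradient value per hidden unit
```

Each entry of `grad` then tells you how strongly the corresponding hidden unit drives that one penultimate unit at this input.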


Thanks a lot, Peter. This is really helpful!