RuntimeError when using `register_full_backward_hook`

Hello, I encountered an error (or possibly a bug) when using a hook on the pretrained vgg16 model (from torchvision).

I’m trying to get the gradients of an intermediate layer (specifically, model.features[15]), but the forward pass just raises a RuntimeError. I googled for a long time and found a temporary workaround: register a forward hook and, inside it, hook the output tensor to get its gradient, but that just returns None.

I don’t know whether I’m doing something wrong or whether this is expected behavior.

Here is my code demo:

import numpy as np
import torch
from torchvision import models

def get_layers_with_logits(x, model):
    operations = []
    # the workaround I tried first (forward hook + tensor hook), which only gave None:
    '''
    def hook(module, input, output):
        def h(grad):
            operations.append(grad)
        output.register_hook(h)
    handle = model.features[15].register_forward_hook(hook)
    '''
    # a full backward hook receives (module, grad_input, grad_output), not the forward output
    def hook(module, grad_input, grad_output):
        print(grad_output)
        operations.append(grad_output)
    handle = model.features[15].register_full_backward_hook(hook)
    
    logits = model(x)
    handle.remove()
    return logits, operations

def main():
    test = torch.rand((1, 3, 224, 224))
    test.requires_grad = True
    with torch.no_grad():
        model = models.vgg16(pretrained=False, num_classes=1000)
        model.load_state_dict(torch.load('./models_torch/vgg16-397923af.pth'))
        model.eval()
        
        logits, operations = get_layers_with_logits(test, model)
        feature = operations[0]

main()

and I am using:

PyTorch: 1.10.0
torchvision: 0.11.1
Python: 3.7.9
OS: Windows 10

When running your code I’m getting:

RuntimeError: Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.

which seems to point to the inplace nn.ReLU. Replacing the inplace version with an out-of-place version via:

model.features[15] = nn.ReLU()

seems to solve this issue.
However, I don’t quite understand your use case: you are registering a backward hook while wrapping the code in a no_grad() context (which would disallow a backward call), and you are also removing the hook before backward is ever called.
Are you looking for forward hooks to get the output of features[15]?
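
(For reference, here is a minimal sketch of how these suggestions fit together: replace the inplace ReLU with an out-of-place one, keep autograd enabled, and call backward() before removing the hooks, using a forward hook for the activation and a full backward hook for the gradient. The helper name get_activation_and_grad and the dummy loss logits.sum() are only illustrative and not taken from the posts above.)

import torch
import torch.nn as nn
from torchvision import models

def get_activation_and_grad(x, model):
    activations, grads = [], []

    def forward_hook(module, inp, out):
        # the forward hook sees the layer's output (the activation)
        activations.append(out.detach())

    def backward_hook(module, grad_input, grad_output):
        # the full backward hook sees tuples of gradients, not the forward output
        grads.append(grad_output[0].detach())

    # replace the inplace ReLU so the full backward hook does not trigger the
    # view/inplace RuntimeError quoted above
    model.features[15] = nn.ReLU(inplace=False)

    fh = model.features[15].register_forward_hook(forward_hook)
    bh = model.features[15].register_full_backward_hook(backward_hook)

    logits = model(x)
    # gradients only exist if backward() is actually called, so no torch.no_grad()
    # here, and the hooks are removed only after the backward pass
    logits.sum().backward()

    fh.remove()
    bh.remove()
    return logits, activations, grads

model = models.vgg16(pretrained=False, num_classes=1000)
model.eval()
x = torch.rand((1, 3, 224, 224), requires_grad=True)
logits, activations, grads = get_activation_and_grad(x, model)
print(activations[0].shape, grads[0].shape)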

Thank you for your reply! Now I understand the mistakes I made due to my lack of knowledge of PyTorch…