Modifying intermediate layer output using Hooks

Hi everyone, I would like to discuss a few things regarding the register_forward_hook method.
I am trying to modify the output of an intermediate layer of a pre-trained model (e.g. ResNet50's layer4[2].conv2) and pass the modified output forward through the rest of the network to get the final, modified result. There is very little documentation on this and I have been stuck on it for a long time now. Please help me out.

I am trying to assign a different variable to the output of this layer and then pass it on through the remaining layers. Could someone please help with this?

You can directly manipulate the output and return it:

import torch
from torchvision import models

def hook(module, input, output):
    # whatever a forward hook returns (if not None) replaces the layer's output
    print("manipulating output")
    output = output * 1000000
    return output

model = models.resnet50().eval()
x = torch.randn(1, 3, 224, 224)

# two reference runs confirm the unhooked model is deterministic
ref = model(x)
print(ref.abs().sum())
# tensor(16421.0430, grad_fn=<SumBackward0>)
ref = model(x)
print(ref.abs().sum())
# tensor(16421.0430, grad_fn=<SumBackward0>)

# once the hook is registered, the scaled activation propagates to the final output
model.layer4[2].conv2.register_forward_hook(hook)
ref = model(x)
# manipulating output
print(ref.abs().sum())
# tensor(1.0414e+10, grad_fn=<SumBackward0>)
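As a side note, register_forward_hook returns a RemovableHandle, so you can detach the hook again once you are done. A small sketch, assuming a fresh model and the hook defined above:

handle = model.layer4[2].conv2.register_forward_hook(hook)
_ = model(x)      # hook fires here
handle.remove()   # detach the hook
_ = model(x)      # original behavior is restored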

@ptrblck Hi

I have tried this and it works.
The problem arises when I try to assign a different custom variable of the same shape. The custom variable is the output from another pre-trained model, and I want to see its effect on the main model that I am placing the hooks on. Can you please check if this is possible to achieve, and how?

def hook_1(module, input, output):
    # custom_operation returns an activation of matching shape from a second model
    modified_output = custom_operation(model_2, target_layer_name)
    output[0] = modified_output
    return output

hook_handle = model.layer4[2].conv2.register_forward_hook(hook_1)

output_modified = model(input_tensor)
# the output doesn't change for the same input tensor, even with the hook registered

I can't figure out why it isn't modifying the intermediate layer.

It still works for me using a custom operation:

def custom_op(shape):
    # stand-in for any external computation producing a tensor of the right shape
    x = torch.randn(*shape)
    x = x * 100000000
    return x

def hook(module, input, output):
    print("manipulating output")
    # returning a new tensor from a forward hook replaces the layer's output
    output = custom_op(output.shape)
    return output

model = models.resnet50().eval()
x = torch.randn(1, 3, 224, 224)

ref = model(x)
print(ref.abs().sum())
# tensor(25028.9023, grad_fn=<SumBackward0>)

ref = model(x)
print(ref.abs().sum())
# tensor(25028.9023, grad_fn=<SumBackward0>)

# with the hook in place, the replaced activation changes the final output
model.layer4[2].conv2.register_forward_hook(hook)
ref = model(x)
# manipulating output
print(ref.abs().sum())
# tensor(1.1515e+10, grad_fn=<SumBackward0>)
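If the replacement activation comes from another model, one way to wire it up is to capture the activation with a forward hook on the second model and return it from the hook on the first. A minimal sketch, assuming both models are ResNet50s so the shapes at layer4[2].conv2 match (model_a/model_b are just placeholder names):

model_a = models.resnet50().eval()  # model whose forward pass we modify
model_b = models.resnet50().eval()  # model providing the replacement activation

captured = {}

def capture_hook(module, input, output):
    # store model_b's activation at this layer
    captured["act"] = output.detach()

def inject_hook(module, input, output):
    # returning a tensor from a forward hook replaces the layer's output
    return captured["act"]

model_b.layer4[2].conv2.register_forward_hook(capture_hook)
model_a.layer4[2].conv2.register_forward_hook(inject_hook)

x = torch.randn(1, 3, 224, 224)
_ = model_b(x)    # run model_b first so its activation is captured
out = model_a(x)  # model_a's remaining layers now see model_b's activation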

Could you provide a code snippet reproducing the issue?

Hi @ptrblck ,
This is the code to reproduce the issue:

import torch
from torchvision.models import resnet50

model1 = resnet50(pretrained=True).eval()
model2 = resnet50(pretrained=True).eval()
model3 = resnet50(pretrained=False).eval()

modified_output = []
def hook_get_output(module, input, output):
    # collect model3's activation at this layer
    modified_output.append(output)

def hook_modify(module, input, output):
    # subtract the collected activation from model1's activation
    output[0] = output[0] - modified_output[0]
    return output

hook_1 = model3.layer4[2].conv2.register_forward_hook(hook_get_output)
hook_2 = model1.layer4[2].conv2.register_backward_hook(hook_modify)

input_tensor = torch.randn(1, 3, 224, 224)
actual_output = model1(input_tensor)
modified_output = model2(input_tensor)

flag = True
for i in range(modified_output.shape[1]):  # compare every logit, not just the batch dim
    if actual_output[0][i] != modified_output[0][i]:
        flag = False
print(flag)
# True

Please try this code snippet.

Hi @pyschia, have you been able to solve the problem yet?

I think the code snippet I wrote doesn't call the forward function on model3, so the hook never fires and hence there is no change in the output of the pre-trained models 1 and 2.

I believe this is an issue on my end.
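For reference, a minimal sketch of what I believe the corrected flow looks like (note the forward hook on model1 instead of a backward hook, and the explicit model3 forward pass before model1 runs):

import torch
from torchvision.models import resnet50

model1 = resnet50(pretrained=True).eval()
model3 = resnet50(pretrained=False).eval()

stored = []

def hook_get_output(module, input, output):
    stored.append(output)

def hook_modify(module, input, output):
    # subtract model3's activation captured just before this pass
    return output - stored[-1]

model3.layer4[2].conv2.register_forward_hook(hook_get_output)
model1.layer4[2].conv2.register_forward_hook(hook_modify)

input_tensor = torch.randn(1, 3, 224, 224)
_ = model3(input_tensor)         # populates stored via hook_get_output
modified = model1(input_tensor)  # hook_modify now changes model1's output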

Please share your code if you are facing similar issues.