Forward hooks on sequential nodes giving same output

I’m working on visualizing the convolutions in my model. I use the following approach to add a forward hook to each module; the hooks record each module’s input and output in two dicts.

node_out = {}
node_in = {}

# factory that builds a hook which records activations under the module's name
def get_node_out(name):
    def hook(model, input, output):
        # input is a tuple of positional args; [0] is the input tensor, the second [0] selects the first sample
        node_in[name] = input[0][0].detach()
        # output is a tensor; [0] selects the first sample in the batch
        node_out[name] = output[0].detach()
    return hook

hook_handles = {}
for name, module in model.named_modules():
    hook_handles[name] = module.register_forward_hook(get_node_out(name))
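
The handles are kept so the hooks can be removed again once the activations have been captured:

# after the forward pass, detach all hooks again
for handle in hook_handles.values():
    handle.remove()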

The model was already trained and put in eval mode, and a prediction was made on a single sample. It starts with the following double convolution block. I noticed that the recorded output of the first BatchNorm1d layer was all non-negative, and then confirmed that it was actually identical to the recorded output of the subsequent ReLU layer. I pulled all the necessary parameters (running mean, running std, etc.) to compute the BatchNorm1d output manually and confirmed that it should contain negative values given the node_in values for that layer. Is there something invalid about the way I’m generating/registering the hooks that would cause this? Is there another recommended approach?

nn.Sequential(
    nn.Conv1d(in_c, out_c, padding=padding, kernel_size=kernel_size, bias=False),
    nn.BatchNorm1d(out_c),
    nn.ReLU(inplace=True),
    nn.Conv1d(out_c, out_c, padding=padding, kernel_size=kernel_size, bias=False),
    nn.BatchNorm1d(out_c),
    nn.ReLU(inplace=True))
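
The manual check was along these lines (just a sketch; bn_name is a placeholder for the actual name of the first BatchNorm1d module in my model):

import torch

bn_name = 'conv_block.1'  # placeholder: real name of the first BatchNorm1d in my model
bn = dict(model.named_modules())[bn_name]
x = node_in[bn_name]  # captured input to that layer, shape (channels, length)

with torch.no_grad():
    # eval-mode BatchNorm1d normalizes with the running statistics, then applies the affine transform
    manual = (x - bn.running_mean[:, None]) / torch.sqrt(bn.running_var[:, None] + bn.eps)
    manual = manual * bn.weight[:, None] + bn.bias[:, None]

print(manual.min())             # negative values, as expected
print(node_out[bn_name].min())  # 0 here, i.e. the ReLU has already been applied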

I tried to follow the examples below in my implementation:

Thanks!

No, your hooks look correct.

Yes, you could disable the inplace layers, since they change the activations in place.
In your case, the output of the batchnorm layer is modified directly by the subsequent ReLU(inplace=True) layer, so the output of the batchnorm and the input/output of the ReLU all refer to the same tensor after the ReLU has been applied to it.
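
A minimal sketch of disabling them after the fact, assuming every ReLU in your model can safely be switched to out-of-place execution:

import torch.nn as nn

# switch every ReLU to out-of-place so each layer keeps its own output tensor
for module in model.modules():
    if isinstance(module, nn.ReLU):
        module.inplace = False

Alternatively, construct the block with nn.ReLU() (i.e. inplace=False, the default) in the first place. Afterwards, the batchnorm entries in node_out should keep their negative values.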
