I think it depends on your use case and probably also on your coding style.
E.g., if I were working on a new model architecture where different features should now be returned, I would override forward. This makes sure I can initialize the model using its new definition without any manipulation of the model itself.
On the other hand, if I just want to check some intermediates, e.g. for debugging, I would use hooks, since I can add them directly to the model without any changes to it.
Also, I believe that hooks are not scriptable right now.
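For reference, a minimal sketch of the hook approach (the toy model, the layer names, and the get_activation helper are made up for illustration):

import torch
import torch.nn as nn

# Toy model used only for illustration.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

activation = {}

def get_activation(name):
    # Returns a hook that stores the module's output under `name`.
    def hook(module, input, output):
        activation[name] = output.detach()
    return hook

# Register the hook on the first linear layer without changing the model code.
model[0].register_forward_hook(get_activation('fc1'))

out = model(torch.randn(1, 10))
print(activation['fc1'].shape)  # torch.Size([1, 20])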
@ptrblck Thanks for the prompt reply.
I'm still not sure I understand that correctly.
In my case, the DepthwiseSeparableConv is defined in another nn.Module and doesn't appear in the forward() definition of the main network (i.e., Unet_Netzero).
If I want to access pretrained.layer1.3.0.bn1, is the following correct?
model.pretrained.layer1.3.0.bn1.register_forward_hook(get_activation('bn1'))
I’m not sure I understand this question completely. If the module isn’t used in the forward method, there won’t be any forward activations and the hook won’t capture anything.
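As a side note, attribute access such as model.pretrained.layer1.3.0.bn1 is not valid Python syntax, since numeric child names cannot follow a dot. Assuming the model actually contains a submodule at that (hypothetical) path, you could fetch it via its string path instead:

# Fetch the nested module by its dotted name, then register the hook.
bn1 = model.get_submodule('pretrained.layer1.3.0.bn1')
bn1.register_forward_hook(get_activation('bn1'))

# Indexing also works if the containers are nn.Sequential or nn.ModuleList:
# model.pretrained.layer1[3][0].bn1.register_forward_hook(get_activation('bn1'))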
I see that you have printed the values via print(activation['fc2']), and I can see that F.relu() is applied afterwards.
How can I get the values of F.relu(self.fc2(x))?
If you want to use forward hooks as well, you could replace the functional F.relu with the nn.ReLU module and register the hook to it.
If not, you can store the activation output of F.relu directly in e.g. a dict inside the forward method.
Do you have an example of “store the activation output of F.relu directly in e.g. a dict inside the forward method.” I guess you have already explained it somewhere?
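A minimal sketch of both suggestions (the model and its names are made up; only the relevant parts are shown):

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc2 = nn.Linear(20, 10)
        self.act2 = nn.ReLU()  # option 1: module version of F.relu, hookable
        self.activation = {}   # option 2: store intermediates manually

    def forward(self, x):
        x = self.act2(self.fc2(x))
        # option 2: stash the post-relu activation inside forward
        self.activation['relu_fc2'] = x.detach()
        return x

model = MyModel()
# option 1: register a forward hook on the nn.ReLU module
model.act2.register_forward_hook(
    lambda module, inp, out: print('relu output shape:', out.shape))
out = model(torch.randn(1, 20))
print(model.activation['relu_fc2'].shape)  # torch.Size([1, 10])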
Instead of getting the output of an intermediate layer, is there a way to get the input of an intermediate layer? For example, if my forward function looks like this, where fc1 and fc2 are just linear layers:
def forward(self, x, x1):
    x = F.relu(self.fc1(x))
    x_cat = torch.cat((x, x1))
    x = self.fc2(x_cat)
    return x
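One option is the hook's input argument: the forward hook signature is hook(module, input, output), where input is a tuple of the positional arguments passed to the module, so a plain forward hook already exposes the layer's input. A minimal sketch (fc2 here just stands in for self.fc2 from the snippet above):

import torch
import torch.nn as nn

features = {}

def save_input(name):
    # Stores the module's first positional input under `name`.
    def hook(module, input, output):
        features[name] = input[0].detach()
    return hook

fc2 = nn.Linear(8, 4)
fc2.register_forward_hook(save_input('fc2_in'))

# Alternatively, register_forward_pre_hook fires before forward and only
# receives (module, input):
# fc2.register_forward_pre_hook(lambda module, input: print(input[0].shape))

out = fc2(torch.randn(2, 8))
print(features['fc2_in'].shape)  # torch.Size([2, 8])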
Hi! Is there any possibility to do the same with the YOLOv7 architecture? I'd like to get the feature maps of each layer using a YOLOv7 model (below is an example of the model layers).
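A generic sketch that should also apply to a YOLOv7 model: iterate over named_modules() and register a hook on every leaf module (how you instantiate the model depends on your setup and is left out here):

import torch

feature_maps = {}

def save_output(name):
    # Stores each layer's output under its qualified module name.
    def hook(module, input, output):
        feature_maps[name] = output
    return hook

# model = ...  # load your YOLOv7 model here
for name, module in model.named_modules():
    if len(list(module.children())) == 0:  # leaf modules only
        module.register_forward_hook(save_output(name))

# After a forward pass, feature_maps maps layer names to their outputs.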
What about loading the optimizer?
Since there is a new fc layer (because of the feature extraction), we cannot load the original optimizer state, right?
So what can we do?
Yes, this might be the case and you could create an optimizer missing the new fc layer, load its state_dict, and create a separate optimizer for the newly added fc layer.
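A sketch of this two-optimizer approach; model, checkpoint, and the attribute name fc are assumptions about the setup:

import torch

# Optimizer over the pretrained parameters only (this set must match the
# parameters the original optimizer was created with for load_state_dict
# to work).
pretrained_params = [p for name, p in model.named_parameters()
                     if not name.startswith('fc')]
optimizer = torch.optim.SGD(pretrained_params, lr=1e-3)
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])

# Fresh, separate optimizer for the newly added fc layer.
optimizer_fc = torch.optim.SGD(model.fc.parameters(), lr=1e-3)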