Hi,
I am interested in obtaining features from the intermediate layers of my model without modifying its forward() method, as the model is already trained. I also don't want to split the model, because I want the final prediction and the features from the upper layers in the same forward pass.
I have read about register_forward_hook, but I haven't found any example of how to use it.
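Something like this minimal sketch is what I have in mind; the toy model and the layer I hook are made up, just to show the pattern:

```python
import torch
import torch.nn as nn

# Toy stand-in for an already-trained model (hypothetical architecture).
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 5),
)

features = {}

def save_output(name):
    # The forward hook receives (module, input, output) on every forward pass.
    def hook(module, input, output):
        features[name] = output  # add .detach() here if you only need the values
    return hook

# Attach the hook to the layer whose output you want; forward() stays untouched.
handle = model[1].register_forward_hook(save_output("relu1"))

x = torch.randn(2, 10)
prediction = model(x)             # one forward pass yields both...
intermediate = features["relu1"]  # ...the prediction and the intermediate features

handle.remove()  # detach the hook once you no longer need it
```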
I am trying to extract the feature outputs of intermediate layers of the pre-trained VGG16 architecture and concatenate them. The built-in VGG models in PyTorch don't have names for all of their layers, so I am unable to use register_forward_hook by name. Is there any other way?
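Would something like the following, indexing into vgg.features directly, be the right approach? The indices here are arbitrary guesses on my part; you would pick the real ones by printing the model:

```python
import torch
from torchvision import models

vgg = models.vgg16(pretrained=True).eval()

# The conv layers live in vgg.features, an nn.Sequential whose children are
# addressable by integer index even though they lack descriptive names.
# print(vgg.features) to choose the indices you actually want.
capture_ids = [3, 8, 15]  # assumed indices, for illustration only
outputs = {}

def make_hook(idx):
    def hook(module, input, output):
        outputs[idx] = output
    return hook

for idx in capture_ids:
    vgg.features[idx].register_forward_hook(make_hook(idx))

x = torch.randn(1, 3, 224, 224)
_ = vgg(x)

# The feature maps have different spatial sizes, so flatten before concatenating.
flat = torch.cat([outputs[i].flatten(start_dim=1) for i in capture_ids], dim=1)
```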
I am trying to use something like the code below, but I am not sure whether gradients will be accumulated at the intermediate layers when I do backpropagation, since it builds up two computation graphs during instantiation of my model.
Thank you. This would be the least error-prone of all. Also, when I use register_forward_hook, do I need to worry about a backward hook, or will it be taken care of automatically? Some more examples of register_hook would be appreciated. Thank you.
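For instance, is something like the sketch below enough on its own, with gradients flowing back automatically? The toy model and shapes are made up; as far as I understand, autograd handles the backward pass through a hooked layer as long as the captured output is not detached, and tensor.register_hook can be used to inspect the gradient:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))
feats = {}

def hook(module, input, output):
    feats["mid"] = output  # no .detach(): the tensor stays in the autograd graph
    # tensor.register_hook fires during backward with the gradient w.r.t. output
    output.register_hook(lambda grad: print("grad at hooked layer:", grad.shape))

model[1].register_forward_hook(hook)

y = model(torch.randn(3, 4))
y.sum().backward()  # gradients flow through the hooked layer automatically
```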
Hey @smth, I have looked around a lot but couldn't find an example of this method. Could you please point me to one? I am just starting with PyTorch and couldn't find many tutorials on how to use this function for transfer learning.
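Something along these lines is what I'm after, i.e., freezing a pre-trained backbone and training a new head on features grabbed by a forward hook. The layer index and feature shape below are my guesses for VGG16 with a 224x224 input:

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze the pre-trained backbone so only the new head is trained.
backbone = models.vgg16(pretrained=True)
for p in backbone.parameters():
    p.requires_grad = False

feats = {}
backbone.features[28].register_forward_hook(
    lambda m, i, o: feats.update(out=o)  # index 28 is an assumption; check print(backbone.features)
)

head = nn.Linear(512 * 14 * 14, 10)  # shape assumes a 224x224 input

x = torch.randn(2, 3, 224, 224)
_ = backbone(x)
logits = head(feats["out"].flatten(start_dim=1))
logits.sum().backward()  # only the head's parameters receive gradients
```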