How can I get the output of the 2nd block of a ResNet, i.e. the layer that outputs a tensor with 128 channels?
The code below converts the model into a list and then wraps it in torch.nn.Sequential, which I suspect destroys the residual connections in the ResNet.
self.encoder = models.resnet18(pretrained=True)
# children: conv1, bn1, relu, maxpool, layer1..layer4, avgpool, fc
modules = list(self.encoder.children())
# drop the last four children (layer3, layer4, avgpool, fc), keeping up to layer2
self.encoder = torch.nn.Sequential(*modules[:-4])
The residual connections won’t be destroyed, as they are implemented inside the BasicBlock modules (Bottleneck for the deeper ResNets), which nn.Sequential keeps intact. However, while wrapping the model in an nn.Sequential container might work for your use case, I would recommend using forward hooks to return the desired activation.
Here is an example of how to use forward hooks.
Thank you. But that would mean memory is still being used for the later layers during the forward pass, right? I am training a model that uses the output of the ResNet's 2nd block and don't want to waste memory.
Yes, that’s right.
If you don’t need the following layers, your approach should work.
As a quick test, you could run your nn.Sequential approach and compare its output to the activation from the forward hook of the original model.
Thank you! I have verified that the outputs are equal.