Accessing intermediate layers of a pretrained network forward?

Sorry for playing the necromancer here, but I have the same issue.
I am still new to PyTorch, but if I understand correctly, your proposed solution computes only a single forward pass, right?
That is, conv1_1 is executed exactly once, and its result is reused to compute conv1_2?
This should be fine as long as I don’t need pre-ReLU activations. For conv outputs before the ReLU I’d need to save `x.clone()` I guess, because torchvision’s VGG applies ReLU in-place?
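To check my own understanding, here is a rough sketch of that idea: one loop over the submodules, cloning the conv output *before* the following in-place ReLU overwrites it. (The tiny `Sequential` below is just a stand-in for `vgg.features`; the layer sizes and the `wanted` indices are my own toy assumptions.)

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a conv block like vgg.features:
# conv -> ReLU(inplace) -> conv -> ReLU(inplace)
features = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(8, 8, 3, padding=1),
    nn.ReLU(inplace=True),
)

def forward_with_prerelu(features, x, wanted=(0, 2)):
    """Single forward pass; clone conv outputs before the
    subsequent in-place ReLU zeroes their negative entries."""
    saved = {}
    for idx, layer in enumerate(features):
        x = layer(x)
        if idx in wanted:
            saved[idx] = x.clone()  # clone BEFORE the in-place ReLU runs
    return x, saved

x = torch.randn(1, 3, 16, 16)
out, acts = forward_with_prerelu(features, x)
# acts[0] is conv1_1's pre-ReLU output and can still contain
# negative values, even though the in-place ReLU later zeroed
# them in the tensor that flowed on through the network.
```

Without the `clone()`, `saved[idx]` and `x` would reference the same storage, so the in-place ReLU would silently turn the saved tensor into the post-ReLU activation.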

EDIT:
Also, this only works for the convolutional features, right? If I wanted to extract e.g. the fc6 features of a vgg19 network (that is, the first fc layer after the conv blocks), I’d have to pull the submodules out of `net.classifier` and build my own `nn.Sequential` from them?