Hi, I want to get outputs from multiple layers of a pretrained VGG-19 network. I have already done that with this approach, which I found on this board:
```python
import torch.nn as nn
import torchvision.models as models

original_model = models.alexnet(pretrained=True)

class AlexNetConv4(nn.Module):
    def __init__(self):
        super(AlexNetConv4, self).__init__()
        self.features = nn.Sequential(
            # stop at conv4
            *list(original_model.features.children())[:-3]
        )

    def forward(self, x):
        x = self.features(x)
        return x
```
Initializing a new net for every output I am interested in takes up a lot of space on my GPU, so I would rather follow this approach, via the forward method:
The problem is that I don't know how to get the names of the convolutions in a pretrained VGG net that I got from the torchvision models.
Hope someone can help me out with that!
I have searched a lot, but I can't find a way to get the names of the different layers of the pretrained PyTorch vgg19 model.
Thanks for your help!
I am training a network for image synthesis, based on this paper https://arxiv.org/abs/1707.09405, and get my loss from a vgg19 net. The layers I am currently interested in are 3, 7, 8, 15, 22 and 32. The idea is to get a more complex loss function by comparing the detected features of a reference and a synthesised image.
Sorry for playing the necromancer here, but I have the same issue.
I am still new to PyTorch, but if I understand it correctly, your proposed solution here:
computes only exactly one forward pass, right?
That is, conv1_1 will be executed exactly once, and the result will be reused to compute conv1_2?
This should be fine iff I don't want to compute pre-relu activations. For conv outputs before ReLU I'd need to save x.clone() I guess, because ReLU is computed in-place?
EDIT:
Also, this only works with the convolutional features, right? If I wanted to extract e.g. the fc6 features of a vgg19 network (that is, the first fc layer after the conv blocks), I'd have to extract all submodules from net.classifier and build my own nn.Sequential with those?
Hello. What I want to do is feed some input to AlexNet, get the output of a specific middle layer (feature maps), then save those outputs and feed them to another CNN as training data; for example, the second network is an autoencoder. Now my question is: how do I feed the middle-layer outputs to this new network as training data?