ResNet50 training using additional intermediate layers

Hello!

I’m trying to implement a custom network by utilizing the pre-trained ResNet50 network from torchvision.

What I would like to achieve is to take the features after the first two stages (essentially the output of self.layer1, if I understood the architecture correctly) and use that output alongside the output of the swapped-out self.fc layer, so I can concatenate them or even route them through two separate paths.

It would look something like this in the forward method (ideally):

def forward(self, x):
    fc_out, layer1_out = (unknown)(x)
    return fc_out, layer1_out

It’s not exact, but you get the idea: both outputs should take part in the forward pass and in the backward pass as well.
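For illustration, here is a minimal sketch of one way this could look: a thin wrapper that re-runs torchvision's ResNet50 submodules explicitly (the class name and num_classes are made up; the submodule names conv1 through fc are the real ones from torchvision's ResNet, and pretrained=True is the older weights API):

import torch
import torch.nn as nn
from torchvision import models

class ResNet50TwoOutputs(nn.Module):
    """Hypothetical wrapper exposing both layer1 features and the fc output."""
    def __init__(self, num_classes=256):
        super().__init__()
        backbone = models.resnet50(pretrained=True)
        # swap out the final fc layer for the custom head
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, x):
        m = self.backbone
        x = m.maxpool(m.relu(m.bn1(m.conv1(x))))
        layer1_out = m.layer1(x)            # intermediate features to keep
        x = m.layer4(m.layer3(m.layer2(layer1_out)))
        x = torch.flatten(m.avgpool(x), 1)
        fc_out = m.fc(x)                    # output of the swapped-out fc
        return fc_out, layer1_out

Since both returned tensors come out of the same forward pass, autograd backpropagates through both of them automatically.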

To give a little more context: I'm running the convolutional network over image sequences (4D inputs) and then using the output features in a many-to-many RNN. I would like to bring the aforementioned intermediate layer into training as well, either as a skip connection concatenated with the last fc layer, or fed into a second RNN layer; see the sketch below.
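Purely to illustrate that two-route idea (batch size, sequence length, hidden sizes, and the GRU choice are all made-up assumptions; layer1 of ResNet50 outputs 256 channels):

import torch
import torch.nn as nn

cnn = ResNet50TwoOutputs(num_classes=256)       # wrapper sketched above
rnn_fc = nn.GRU(input_size=256, hidden_size=128, batch_first=True)
rnn_mid = nn.GRU(input_size=256, hidden_size=128, batch_first=True)

frames = torch.randn(2, 5, 3, 224, 224)         # (B, T, C, H, W) image sequence
B, T = frames.shape[:2]
fc_out, layer1_out = cnn(frames.flatten(0, 1))  # run CNN on all frames at once
mid = layer1_out.mean(dim=(2, 3))               # global-average-pool to (B*T, 256)
out_fc, _ = rnn_fc(fc_out.view(B, T, -1))       # route 1: fc features
out_mid, _ = rnn_mid(mid.view(B, T, -1))        # route 2: intermediate features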

So you see, it seems a bit complicated, but I hope there is a simple and elegant solution to this problem.

Thanks in advance!

Does anyone have any ideas? Might this be achievable with forward or backward hooks?
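A forward hook can indeed capture the intermediate activation while keeping it in the autograd graph, so gradients still flow through it. A minimal sketch (the feats dict and hook function are illustrative):

import torch
from torchvision import models

model = models.resnet50(pretrained=True)
feats = {}

def save_layer1(module, inputs, output):
    feats["layer1"] = output    # stays in the graph, so backward() reaches it

hook = model.layer1.register_forward_hook(save_layer1)
fc_out = model(torch.randn(1, 3, 224, 224))
layer1_out = feats["layer1"]    # both tensors can now be used in the loss
hook.remove()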

I know that additional layers can be added at the end of the built-in models, but I'm not sure you can tap intermediate layers using built-in functions. You could create your own ResNet50 model instead. Please see the following link on ResNet50:

https://dhruvs.space/posts/understanding-resnets/
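As an aside, newer torchvision releases also ship a built-in utility for exactly this: torchvision.models.feature_extraction.create_feature_extractor. A minimal sketch (the output key names are arbitrary; the node names follow the module names):

import torch
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

backbone = models.resnet50(pretrained=True)
extractor = create_feature_extractor(
    backbone, return_nodes={"layer1": "layer1_out", "fc": "fc_out"}
)
outputs = extractor(torch.randn(1, 3, 224, 224))
fc_out, layer1_out = outputs["fc_out"], outputs["layer1_out"]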