How to forward input through some layers of a pretrained model but not others?

Hi,

So, I am wondering how I can pass input through select layers of a model and not the others. For example, I have a pretrained UNet, but I am only interested in the output of each encoder layer before downsampling. To get the intermediate layer outputs, I use hooks, but how do I make sure that after the input has passed through all the encoder layers, the network stops and does not waste time decoding the encoded output? The solution I have right now is to create a separate model called UNet_Encoder with the same layer names as the original UNet and then load the weights in. I wonder if there is a less hackish way to overcome this issue.
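For reference, a rough sketch of that workaround; UNet_Encoder is the hypothetical encoder-only class described above, and the checkpoint path is made up:

```python
import torch

# Hypothetical encoder-only model whose layer names match the full UNet,
# so the relevant keys in the state dict line up.
encoder = UNet_Encoder()
state_dict = torch.load("unet_weights.pth")  # full UNet weights (assumed path)

# strict=False skips the decoder weights that have no counterpart here
encoder.load_state_dict(state_dict, strict=False)
```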

I think deriving another class from your original UNet and changing the forward method accordingly is a valid approach.
If you know about this use case before writing the initial model, you could also separate the encoder and decoder into submodules and use a flag passed to the forward method to select between the branches. Both ideas are sketched below.
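A minimal sketch of both ideas, with assumptions: UNet is your original class, and the enc1..enc4 and pool attributes are placeholders for whatever your actual model defines:

```python
import torch.nn as nn

# Option 1: derive from the original UNet and override forward so only
# the encoder runs, collecting each stage's output before downsampling.
class UNetEncoderOnly(UNet):  # UNet is your original pretrained class
    def forward(self, x):
        features = []
        for block in (self.enc1, self.enc2, self.enc3, self.enc4):
            x = block(x)
            features.append(x)  # pre-downsampling output of this stage
            x = self.pool(x)
        return features

# Option 2: if the model is still being written, keep the encoder and
# decoder as separate submodules and gate the decoder with a flag.
class UNetSplit(nn.Module):
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, x, encode_only=False):
        x = self.encoder(x)
        return x if encode_only else self.decoder(x)
```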


Thanks for such a quick response! It's clear now; I just wanted to make sure there isn't a more efficient way of doing this.

There are many ways to do it; ptrblck kindly suggested one of them.
You can also use your network's children() method, get the modules you want, add them to an nn.ModuleList(), and then pass them to nn.Sequential(). That's it:
you have your model. See the sketch below.
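A rough sketch of that approach; pretrained_unet and x are the assumed pretrained model and input tensor, and the slice index 4 is an assumption about how many children make up the encoder:

```python
import torch.nn as nn

# Grab the encoder submodules from the pretrained model (children() yields
# them in the order they were defined in __init__) and wrap them.
blocks = nn.ModuleList(list(pretrained_unet.children())[:4])
encoder = nn.Sequential(*blocks)

output = encoder(x)  # the forward pass now ends after the encoder
```

Note that nn.Sequential chains the blocks linearly, so this only works if the encoder stages feed directly into one another without extra arguments.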
