How can I get the middle layer’s output of the trained model?

I have a trained model; its architecture is shown below:

def forward(self, x):
      x = F.relu(F.max_pool2d(self.conv1(x), 2))
      x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
      x = F.relu(F.max_pool2d(self.conv3(x), 2))
      x = F.relu(F.max_pool2d(self.conv4_drop(self.conv4(x)), 2))

      x = x.view(-1, 160)

      return x

How can I get a middle layer’s output during testing?

Can anyone give me some suggestions?
Thank you so much.

You don’t have to assign every layer’s output to the same variable x. Try something like the following:

def forward(self, x):
      x1 = F.relu(F.max_pool2d(self.conv1(x), 2))
      x2 = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x1)), 2))
      x3 = F.relu(F.max_pool2d(self.conv3(x2), 2))
      x4 = F.relu(F.max_pool2d(self.conv4_drop(self.conv4(x3)), 2))

      x5 = x4.view(-1, 160)

      return x5

This way you can get at the outputs of the middle layers (x1 ... x4).


This is very helpful, thank you.

A variation on richard’s approach: simply return the xn tensors you want, like:

return x4, x3, x2, x1
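
At test time the caller then simply unpacks everything that forward returns. A minimal sketch, assuming model is an instance of the network above and test_input is a suitably shaped batch (both names are just placeholders):

import torch

model.eval()                      # put the dropout layers into eval mode
with torch.no_grad():             # no gradients needed for inspection
      x4, x3, x2, x1 = model(test_input)

print(x2.shape)                   # e.g. look at the conv2 activation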

There is no rule that says a network module must return one and only one tensor.

(You can also return a list or a dictionary; in fact, you can return anything you like. The same goes for the input parameters: forward can take as many or as few arguments as you want, of whatever type works well for you.)
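
For example, here is a minimal sketch of a forward that collects the activations in a dictionary (the key names are just illustrative):

def forward(self, x):
      x1 = F.relu(F.max_pool2d(self.conv1(x), 2))
      x2 = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x1)), 2))
      x3 = F.relu(F.max_pool2d(self.conv3(x2), 2))
      x4 = F.relu(F.max_pool2d(self.conv4_drop(self.conv4(x3)), 2))
      x5 = x4.view(-1, 160)

      # pick whatever keys you like; the caller looks activations up by name
      return {'conv1': x1, 'conv2': x2, 'conv3': x3, 'conv4': x4, 'features': x5}

At test time, outputs = model(batch) then gives you outputs['conv3'], and so on.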