Feature extraction from trained neural network for my new test data

I trained the following architecture and need to extract the intermediate output of the label predictor layer, i.e. the 64-dimensional activation before the final fully connected output layer. I do not want to retrain the model using a forward hook. Is it possible to get this intermediate layer output for new test samples?

class DANN(nn.Module):
    def __init__(self, num_features):
        super().__init__()
        self.num_features = num_features

        self.feature_extractor = nn.Sequential(
            nn.Linear(self.num_features, 32), nn.ReLU(True),
            nn.Linear(32, 64), nn.ReLU(True),
            nn.Linear(64, 128), nn.ReLU(True),
            nn.Linear(128, 128), nn.ReLU(True))

        self.label_predictor = nn.Sequential(
            nn.Linear(128, 128), nn.ReLU(True),
            nn.Linear(128, 64), nn.ReLU(True),
            nn.Linear(64, 3))

        self.domain_classifier = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(True),
            nn.Linear(256, 64), nn.ReLU(True),
            nn.Linear(64, 2),
            nn.LogSoftmax(dim=1))

    def forward(self, x, grl_lambda=1.0):
        x = x.expand(x.data.shape[0], self.num_features)

        features = self.feature_extractor(x)
        features_grl = GradientReversalFn.apply(features, grl_lambda)
        label_pred = self.label_predictor(features)
        domain_pred = self.domain_classifier(features_grl)

        return label_pred, domain_pred

I’m not sure how “retrain” relates to forward hooks (registering a hook on a trained model does not require any retraining), but in case you don’t want to use hooks you could write a custom nn.Module and override the forward method with your logic (i.e. return the desired intermediate tensors in addition to the model output).
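A minimal sketch of that idea, assuming the DANN definition from the question (a small stand-in `GradientReversalFn` is included here only so the snippet runs on its own; use your real implementation). Since `label_predictor` is an `nn.Sequential`, slicing it at `[:4]` runs it up to and including the second `nn.ReLU`, which yields exactly the 64-dimensional activation before the final `nn.Linear(64, 3)`:

```python
import torch
import torch.nn as nn

# Stand-in gradient reversal (identity forward, negated gradient) so the
# sketch is self-contained -- replace with your own GradientReversalFn.
class GradientReversalFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

class DANNWithFeatures(nn.Module):
    """Same layers and attribute names as DANN, so a trained state_dict
    loads directly; forward also returns the 64-dim activation."""
    def __init__(self, num_features):
        super().__init__()
        self.feature_extractor = nn.Sequential(
            nn.Linear(num_features, 32), nn.ReLU(True),
            nn.Linear(32, 64), nn.ReLU(True),
            nn.Linear(64, 128), nn.ReLU(True),
            nn.Linear(128, 128), nn.ReLU(True))
        self.label_predictor = nn.Sequential(
            nn.Linear(128, 128), nn.ReLU(True),
            nn.Linear(128, 64), nn.ReLU(True),
            nn.Linear(64, 3))
        self.domain_classifier = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(True),
            nn.Linear(256, 64), nn.ReLU(True),
            nn.Linear(64, 2),
            nn.LogSoftmax(dim=1))

    def forward(self, x, grl_lambda=1.0):
        features = self.feature_extractor(x)
        features_grl = GradientReversalFn.apply(features, grl_lambda)
        # Layers [0:4] of label_predictor end at the second ReLU, so
        # feat64 is the 64-dimensional intermediate activation.
        feat64 = self.label_predictor[:4](features)
        label_pred = self.label_predictor[4:](feat64)  # final Linear(64, 3)
        domain_pred = self.domain_classifier(features_grl)
        return label_pred, domain_pred, feat64

model = DANNWithFeatures(num_features=10)
# model.load_state_dict(trained_dann.state_dict())  # same names, so weights transfer
model.eval()
with torch.no_grad():
    label_pred, domain_pred, feat64 = model(torch.randn(5, 10))
print(feat64.shape)  # torch.Size([5, 64])
```

Because the submodule names (`feature_extractor`, `label_predictor`, `domain_classifier`) match the original class, the trained weights load without any retraining; at test time you just call the model on new samples and keep the extra returned tensor.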