Please see the forward method of my custom model class below:

```python
def forward(self, x):
    x = self.layer1(x)
    temp = self.layer2(x)
    x = self.layer2(x)
    x = self.out(x)
    return x, temp
```
I am trying to extract an intermediate layer's output (the layer2 output) this way. I am aware of forward hooks and that hooks are the recommended way to do this, but out of curiosity I am asking: will the above code snippet end up doing something unexpected?
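For comparison, here is a minimal sketch of the forward-hook approach mentioned above. The `MyModel` definition with `layer1`/`layer2`/`out` and the layer sizes are assumptions for illustration; only the layer names come from the post.

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical layer sizes, just for a runnable sketch
        self.layer1 = nn.Linear(4, 8)
        self.layer2 = nn.Linear(8, 8)
        self.out = nn.Linear(8, 2)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        return self.out(x)

captured = {}

def hook(module, inputs, output):
    # Stash the intermediate activation without touching forward()
    captured["layer2"] = output

model = MyModel()
handle = model.layer2.register_forward_hook(hook)
y = model(torch.randn(3, 4))
handle.remove()  # detach the hook once the activation is captured
```

With this approach the forward method stays untouched, and the intermediate output is available in `captured["layer2"]` after the call.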
The train function looks like this:

```python
def train(X, Y):
    model = MyModel()
    output, intermediate_output = model(X)
    loss = some_criterion(output, Y)
    loss.backward()
    ...
```
To be precise: will there be any change in the convergence point (assuming everything else is identical) because of the tweaked forward method?
The original forward method is given below for reference:

```python
def forward(self, x):
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.out(x)
    return x
```
As per my current understanding of PyTorch, I don't think there should be any change in the model's convergence point, i.e. the model should behave the same way it did before the forward method was tweaked. Am I missing something here?
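A quick sketch of this intuition, assuming the layers are purely deterministic (e.g. `nn.Linear`): evaluating `self.layer2` twice on the same input just repeats the same computation, so the final output matches the original forward exactly. The layer names and sizes here are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer1, layer2, out = nn.Linear(4, 8), nn.Linear(8, 8), nn.Linear(8, 2)
x = torch.randn(3, 4)

# Original forward path
y_original = out(layer2(layer1(x)))

# Tweaked forward path: layer2 is evaluated twice on the same input
h = layer1(x)
temp = layer2(h)            # returned as the intermediate output
y_tweaked = out(layer2(h))  # second, redundant evaluation

assert torch.equal(y_original, y_tweaked)
# Caveat: with nn.Dropout or nn.BatchNorm* in training mode the two layer2
# calls are NOT equivalent (independent dropout masks, running stats updated twice),
# so the redundant call could then change training behavior.
```

Since the loss depends only on `output` and not on the extra `temp` path, the gradients for deterministic layers should also be unchanged; the redundant call mainly costs extra compute.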