How to do something in the forward pass ONLY if the model is in train mode?

Hi,

I am collecting activation statistics (mean and variance) after the hidden layers.
I would like to do this only in the training forward pass, not in the validation forward pass.

These are my tracking and forward functions:

def get_activations(self, l, x):
        if self.track:
            # Record the mean and variance of layer l's activations
            self.activations[l]['mean'].append(float(x.mean()))
            self.activations[l]['var'].append(float(x.var()))

def forward(self, x):
        x = self.sig(self.fcI(x))          # input layer
        for l in range(self.n_lay):
            x = self.sig(self.fcH[l](x))   # hidden layer l
            self.get_activations(l, x)     # track activation statistics
        x = self.fcO(x)                    # output layer
        return x

How could I specify that get_activations should run only if the network is in train mode?

@timeit
def train(model, criterion, optimizer, results, EPOCHS, LR):
    for epoch in range(EPOCHS):    
        
        # Training
        model.train()
        train_epoch(model, tr_loader, criterion, optimizer, LR, results)  ### Do it here
        
        # Validation
        model.eval()
        valid_epoch(model, ts_loader, criterion, results)          ### Do not do it here

Thank you!

I usually use a flag in forward (def forward(self, x, train = True)), but maybe there is a better way to do it.

Hi @JuliousHurtado ,

Thank you for the response.
In that approach, how do you tell the model whether train should be True or False?
I mean, I never explicitly call the forward function during inference; I simply call y = model(X).

Thanks

In the same way you call the model, but you add the flag:

train = True
y = model(X, train)

or something like that
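To make that concrete, here is a minimal runnable sketch of the flag approach; the Net class, layer sizes, and the print placeholder are made up for illustration, and the train argument is a plain Python keyword argument, not part of nn.Module:

import torch
import torch.nn as nn

class Net(nn.Module):
    # Toy model just to show the flag; the real model would keep its
    # fcI/fcH/fcO layers and get_activations method.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 10)

    def forward(self, x, train=True):
        x = torch.sigmoid(self.fc(x))
        if train:
            # only track activations on the training pass
            print("tracking activations")
        return x

model = Net()
X = torch.randn(4, 10)
y = model(X, train=True)   # training pass: statistics get tracked
y = model(X, train=False)  # validation pass: tracking is skipped

Extra positional and keyword arguments given to model(...) are passed through to forward, so nothing else needs to change.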

Isn’t there an attribute of the network itself that can be checked for this?
When we call model.train() or model.eval(), I assume something in the model must change so that batchnorm and dropout layers know how to behave in the inference pass.

Is this correct?

Yes, you are correct. You could use self.training to get the current internal state.
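nn.Module sets self.training to True on model.train() and to False on model.eval(), and that flag propagates to all submodules; it is the same attribute dropout and batchnorm consult. Applied to the tracking function from the original post, the guard would look like this (a minimal sketch, reusing the names from the question):

def get_activations(self, l, x):
    # self.training is toggled by model.train() / model.eval()
    if self.track and self.training:
        self.activations[l]['mean'].append(float(x.mean()))
        self.activations[l]['var'].append(float(x.var()))

With this guard, the existing loop already does the right thing: model.train() before train_epoch enables tracking, and model.eval() before valid_epoch disables it, with no extra flag to pass around.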
