Which one to use: model.train() or model.eval()?

I am freezing a trained autoencoder’s encoder module and training a classifier on top of its extracted features. My question is: when doing so, what is the correct way to handle the batch normalization and dropout layers? Should I use autoencoder.eval() or autoencoder.train()?

########################################################
# during training of the classifier, which one is the correct way?
autoencoder.eval()  # or
autoencoder.train()
########################################################
for epoch in range(1, num_epochs + 1):
    classifier.train(True)
    for batch_idx, sample in enumerate(train_loader):
        data, target = sample['image'], sample['label']
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        features = autoencoder(data, only_encode=True)
        output = classifier(features)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

I assume you don’t want the autoencoder to be trained together with the classifier.
In that case, freeze its parameters using:

# disable gradients so the encoder is not updated during training
for p in autoencoder.parameters():
    p.requires_grad = False  # freeze each parameter

and additionally, set it to eval mode using

autoencoder.eval()

.eval() puts batch norm layers into inference mode, so they normalize with their stored running statistics instead of the current batch’s statistics, and it disables dropout.
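
To make the difference concrete, here is a small standalone snippet (not from your code, just an illustration) showing how the train/eval flag changes BatchNorm and Dropout behavior:

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
drop = nn.Dropout(p=0.5)
x = torch.randn(8, 4)

# train mode: batch norm normalizes with the current batch's statistics and
# updates its running mean/var; dropout zeroes entries at random
bn.train()
drop.train()
y_train = bn(x)                       # per-batch statistics, running stats updated
print((drop(x) == 0).float().mean())  # roughly 0.5 of the entries are zeroed

# eval mode: batch norm normalizes with the stored running statistics;
# dropout becomes the identity
bn.eval()
drop.eval()
y_eval = bn(x)                        # uses bn.running_mean / bn.running_var
print(torch.equal(drop(x), x))        # True: dropout is a no-op in eval mode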

If you leave the autoencoder in train mode, batch norm keeps normalizing with each batch’s statistics and keeps updating its running mean/variance (via its momentum) on every forward pass, and dropout stays active; usually you don’t want either of these when the model is only used as a frozen feature extractor.
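
Putting it together, here is a sketch of the full setup using the names from your question (the optimizer choice and learning rate are just placeholders, and the torch.no_grad() context is an optional extra that skips building an autograd graph for the frozen encoder, saving some memory):

import torch

# freeze the encoder: no gradients, inference-mode batch norm, no dropout
for p in autoencoder.parameters():
    p.requires_grad = False
autoencoder.eval()

# give the optimizer only the classifier's parameters, so the encoder
# cannot be updated even if a requires_grad flag were left enabled
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

for epoch in range(1, num_epochs + 1):
    classifier.train(True)
    for batch_idx, sample in enumerate(train_loader):
        data, target = sample['image'], sample['label']
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        with torch.no_grad():  # no graph needed for the frozen encoder
            features = autoencoder(data, only_encode=True)
        output = classifier(features)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()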

Igor_Susmelj, thank you so much for your reply.