I am freezing a trained autoencoder's encoder module and training a classifier on top of its extracted features. When doing so, what is the correct way to handle the batch normalization and dropout layers inside the frozen encoder: should I call autoencoder.eval() or autoencoder.train() during classifier training?
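
For context, I freeze the encoder weights before the loop below, roughly like this (a simplified sketch; the optimizer choice and learning rate are just illustrative):

import torch

# freeze every parameter of the trained autoencoder so only the classifier learns
for param in autoencoder.parameters():
    param.requires_grad = False

# the optimizer only receives the classifier's parameters (lr is illustrative)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)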
########################################################
# during training of the classifier, which one is the correct way?
autoencoder.eval()   # or
autoencoder.train()
#######################################################
for epoch in range(1, num_epochs + 1):
    classifier.train(True)  # the classifier itself always trains
    for batch_idx, sample in enumerate(train_loader):
        data, target = sample['image'], sample['label']
        data, target = data.to(device), target.to(device)

        optimizer.zero_grad()
        # frozen encoder used purely as a feature extractor
        features = autoencoder(data, only_encode=True)
        output = classifier(features)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
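
For completeness, the only_encode flag in my forward pass works roughly like this (a minimal sketch with illustrative layer sizes; the point is that the BatchNorm and Dropout layers in question live inside the encoder):

import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # illustrative layers only; my real encoder is larger
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.Dropout(0.5),
        )
        self.decoder = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x, only_encode=False):
        z = self.encoder(x)
        if only_encode:
            return z  # feature-extraction path used in the training loop above
        return self.decoder(z)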