Hi, I am confused about the BatchNorm layer's behaviour during testing:
In a previous answer (The behavior of the BN layer in train and eval mode), it is mentioned that if I set model.eval(), the running stats (mean, std) will be used for normalization.
What are the running stats? Are those values fixed from the trained model, or computed from the new test data?
For example, during testing I set the model to model.eval(), then iterate through each test sample and save the predicted value. (Example code below.)
Is this the correct way?
In this case, are the running stats (mean, std) updated after every test sample? Would this cause any problem?
Thank you!
import torch
import torch.nn.functional as F

def main():
    # Read trained model
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = MyModel()
    model = torch.nn.DataParallel(model)
    model.load_state_dict(torch.load(train_model_path, map_location=device))
    model.to(device)

    # Setting to eval mode
    model.eval()

    # Read data
    test_data = CustomDataset(image_dirs, test_csv)
    print('Number of test samples', len(test_data))

    preds = []
    for image_input, label in test_data:
        # Add a batch dimension and move to the model's device
        # (assuming the dataset returns a single unbatched tensor)
        image_input = image_input.unsqueeze(0).to(device)

        # Evaluation
        with torch.no_grad():
            output = model(image_input)

        # Prediction
        _, pred = torch.max(output, 1)

        # Compute some scores
        outputs_sm = F.softmax(output, dim=1)
        pred_score = outputs_sm[:, 1].item()
        pred_dx = pred.item()
        print(img_id, pred_score)

if __name__ == "__main__":
    main()
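
To see the behaviour in isolation, here is a small self-contained check (using a standalone BatchNorm1d layer, not my actual model) that compares running_mean before and after a forward pass in eval mode:

```python
import torch

# Sketch: do BatchNorm running stats change during eval-mode forward passes?
bn = torch.nn.BatchNorm1d(4)

# A forward pass in train mode updates the running stats
bn.train()
bn(torch.randn(8, 4))
mean_after_train = bn.running_mean.clone()

# A forward pass in eval mode uses the stored running stats for
# normalization and does not update them
bn.eval()
bn(torch.randn(8, 4))
mean_after_eval = bn.running_mean.clone()

print(torch.equal(mean_after_train, mean_after_eval))  # prints True
```

If this prints True, the eval-mode pass left the running stats untouched, i.e. the stats are fixed values saved from training, not recomputed from the test data.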