Accuracy in the test set is highly reduced using model.eval()

Hey all

I’m using an EfficientNetB0 as the backbone of my model and fine-tuning it on three classes, with a head made of nn.Linear() followed by nn.Softmax(). The model performs as expected on the training and validation sets. However, when I load the model and run it on the test set, the accuracy drops sharply.

For the validation loop, I use

    with torch.set_grad_enabled(False):
        for i, data in enumerate(val_dl, 0):

which works fine.
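For context, here is a minimal standalone example (a toy BatchNorm1d layer, not my actual model) of how train vs eval mode changes batch norm’s output, which I suspect is related to what I’m seeing:

```python
import torch
import torch.nn as nn

# A lone BatchNorm layer: in train() mode it normalizes with the current
# batch's statistics and updates its running estimates; in eval() mode it
# normalizes with the stored running mean/var instead.
torch.manual_seed(0)
bn = nn.BatchNorm1d(4)
x = torch.randn(8, 4) * 3 + 5  # batch whose stats differ from the init (0, 1)

bn.train()
out_train = bn(x)   # normalized with this batch's mean/var

bn.eval()
out_eval = bn(x)    # normalized with the running estimates

# The two modes generally disagree until the running stats have converged.
print(torch.allclose(out_train, out_eval))  # False here
```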

For the test loop, I use

    model.eval()
    for data in test_dl:

which performs really badly.
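A simplified, self-contained sketch of what my test loop does (the model and loader here are stand-ins; in my code they are the fine-tuned EfficientNet and the real test DataLoader):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and loader so the sketch runs on its own.
model = nn.Linear(16, 3)
test_dl = DataLoader(
    TensorDataset(torch.randn(32, 16), torch.randint(0, 3, (32,))),
    batch_size=8,
)

model.eval()                      # eval-mode batch norm / dropout
correct = total = 0
with torch.no_grad():             # no autograd bookkeeping at test time
    for inputs, labels in test_dl:
        preds = model(inputs).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
accuracy = correct / total
print(f"test accuracy: {accuracy:.3f}")
```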

When I try to use with torch.set_grad_enabled(False): instead, it raises an error related to batch norm.

I have no idea why this is happening; I’ve checked the results of the validation step and they look fine.

Maybe the problem is in how I save the model?

    save_model = True
    if save_model:
        print(f"--> Saving the model at 'saves/{model_name}.pth'")
        torch.save(model, os.path.join('saves', model_name + '.pth'))

Or in how I load it?

    model = torch.load('saves/' + model_name + '.pth')
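In case it matters, this is the state_dict-based save/load pattern I could switch to instead of pickling the whole model (a sketch with a toy model standing in for the EfficientNet; the point is the round trip, and I’d still call eval() before testing):

```python
import os
import torch
import torch.nn as nn

# Toy model standing in for the fine-tuned EfficientNet.
model = nn.Sequential(nn.Linear(4, 3), nn.BatchNorm1d(3))
model_name = "example"  # placeholder for my real model_name

# Save only the parameters/buffers, not the pickled module.
os.makedirs("saves", exist_ok=True)
torch.save(model.state_dict(), os.path.join("saves", model_name + ".pth"))

# Loading: rebuild the architecture, then restore the weights.
restored = nn.Sequential(nn.Linear(4, 3), nn.BatchNorm1d(3))
restored.load_state_dict(torch.load(os.path.join("saves", model_name + ".pth")))
restored.eval()  # switch to inference behavior before testing

# The restored model reproduces the original's eval-mode outputs.
x = torch.randn(2, 4)
with torch.no_grad():
    assert torch.allclose(model.eval()(x), restored(x))
```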