I’m loading a saved model and running inference. Currently I do:
model = torch.load("model_140.pt")
Or should it rather be:
model = torch.load("model_140.pt").to(device)
model.eval()
with torch.no_grad():
    model(image)
I use eval() because the model has some dropout and batchnorm layers, and no_grad() because I read elsewhere that it avoids building the intermediate tensors that autograd would otherwise keep for the backpropagation step.
Is the second approach best, or does it not matter when loading a saved model?
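For context, here is a minimal, self-contained sketch of the second pattern end to end. The tiny Sequential model, the dummy image, and the shapes are stand-ins I made up for illustration; "model_140.pt" here would be your trained network. Note that recent PyTorch versions default torch.load to weights_only=True, so loading a fully pickled module needs weights_only=False:

```python
import torch
import torch.nn as nn

# Stand-in model with dropout and batchnorm, mimicking the question's setup.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.Dropout(0.2))
torch.save(model, "model_140.pt")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the full module (weights_only=False because an nn.Module was pickled),
# move it to the device, and switch to eval mode so dropout is disabled and
# batchnorm uses its running statistics instead of batch statistics.
model = torch.load("model_140.pt", weights_only=False)
model = model.to(device)
model.eval()

image = torch.randn(1, 3, 32, 32, device=device)  # dummy input

# no_grad() tells autograd not to record operations, so no graph or
# intermediate tensors are kept and inference uses less memory.
with torch.no_grad():
    output = model(image)

print(tuple(output.shape))
```

Both calls matter but do different things: eval() changes layer *behavior* (dropout, batchnorm), while no_grad() only affects *memory and speed* by skipping graph construction. Omitting eval() can change your predictions; omitting no_grad() just wastes memory.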