Should model.eval() be used when doing inference?

I’m loading a model and running inference,

model = torch.load("")

Or should it rather be

model = torch.load("").to(device)
with torch.no_grad():

Currently I use eval() because the model has some dropout and batchnorm layers, and no_grad() because I read elsewhere that it avoids creating the intermediate tensors needed for the backpropagation step.

Is the second approach best, or does it not matter when loading a saved model?


model.eval() simply puts the model into evaluation mode. This means certain layers, like Dropout and BatchNorm, will no longer behave the way they do during training.
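A minimal sketch of that behavior change, using a standalone Dropout layer (the layer and shapes here are just for illustration):

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)

# In training mode, Dropout zeroes roughly half the values
# and scales the survivors by 1 / (1 - p) = 2.
drop.train()
out_train = drop(x)  # values are either 0.0 or 2.0

# In eval mode, Dropout becomes an identity op.
drop.eval()
out_eval = drop(x)
print(torch.equal(out_eval, x))  # True
```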

torch.no_grad() simply tells autograd not to track gradients for the layers/weights inside the block.
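You can see the difference by checking whether an operation was recorded in the autograd graph (a small sketch with a throwaway tensor):

```python
import torch

w = torch.randn(3, requires_grad=True)

y = (w * 2).sum()      # normal mode: autograd records the op
with torch.no_grad():
    z = (w * 2).sum()  # no_grad: nothing is recorded

print(y.grad_fn is not None)  # True  -> part of the graph
print(z.grad_fn is None)      # True  -> detached from the graph
print(z.requires_grad)        # False
```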

So, as far as I know, using both in combination is a good option: it is not only the correct path, but torch.no_grad() will also help reduce CPU/CUDA memory usage.
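The combined pattern can be sketched like this (using a small throwaway model instead of a loaded checkpoint, just for illustration):

```python
import torch
import torch.nn as nn

# Stand-in for your loaded checkpoint; in practice you would use
# model = torch.load(...) here.
model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Dropout(0.5))

model.eval()  # Dropout/BatchNorm switch to inference behavior

x = torch.randn(2, 4)
with torch.no_grad():  # no intermediate tensors kept for backprop
    out = model(x)

print(out.requires_grad)  # False: no graph was built, saving memory
```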
