Behavior of model.eval(). Does it disable gradient operations?

Hi there,

Consider the following scenario:

model.eval()
y = model(x)
loss = criterion(y, label)
# loss.backward() and optimizer.step() follow (abbreviated)

I want to compute gradients and update the parameters while layers like batch norm and dropout are in evaluation mode. Are there any potential issues with this?


model.eval() does not disable gradient calculation; it only changes the behavior of certain layers (such as batchnorm and dropout), so I wouldn’t expect any issues with Autograd.
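To illustrate, here is a minimal sketch (with a made-up toy model and random data) showing that a backward pass and optimizer step work as usual after calling model.eval():

```python
import torch
import torch.nn as nn

# Toy model containing the layers whose behavior eval() changes
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.BatchNorm1d(8),
    nn.Dropout(0.5),
    nn.Linear(8, 1),
)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

model.eval()  # batchnorm now uses running stats, dropout becomes a no-op

x = torch.randn(16, 4)
label = torch.randn(16, 1)

y = model(x)               # forward pass still builds the autograd graph
loss = criterion(y, label)
loss.backward()            # gradients are computed as usual
optimizer.step()           # parameters are updated

# Every parameter received a gradient despite eval mode
assert all(p.grad is not None for p in model.parameters())
```

If you actually want to skip gradient tracking (e.g. for pure inference), that is a separate mechanism: wrap the forward pass in `with torch.no_grad():`. eval() and no_grad() are independent and are often used together for validation.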