Behavior of model.eval(). Does it disable gradient operations?

Hi there,

Consider the following scenario:

model.eval()
y = model(x)
loss = criterion(y, label)
# loss.backward() and optimizer.step() omitted for brevity

I just want to compute gradients and update the parameters while layers such as BatchNorm and Dropout are in eval mode. Are there any potential issues with this?

Thanks~

model.eval() does not disable gradient calculation; it only changes the behavior of certain layers (such as BatchNorm and Dropout), so I wouldn’t expect any issues with Autograd.
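A quick sketch illustrating this (the model and data here are just illustrative placeholders): even after calling eval(), backward() still populates gradients, whereas torch.no_grad() is what actually disables gradient tracking.

```python
import torch
import torch.nn as nn

# Toy model with a Dropout layer to show eval-mode behavior.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5), nn.Linear(4, 1))
criterion = nn.MSELoss()

model.eval()  # Dropout becomes a no-op; BatchNorm would use running stats

x = torch.randn(8, 4)
label = torch.randn(8, 1)

y = model(x)
loss = criterion(y, label)
loss.backward()  # gradients are still computed in eval mode

print(model[0].weight.grad is not None)  # True: Autograd ran as usual

# To actually skip gradient tracking, wrap the forward pass instead:
with torch.no_grad():
    y_ng = model(x)
print(y_ng.requires_grad)  # False: no graph was built
```

So combining eval() with a normal backward()/step() works; eval() and no_grad() are independent switches that are often used together during evaluation, but neither implies the other.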