No, you shouldn’t use it during training, since it disables gradient calculation, as previously explained. During inference, wrap the forward pass of the entire model in the guard:
# inference
model.eval()           # switch dropout/batchnorm layers to eval mode
with torch.no_grad():  # disable gradient tracking inside this block
    out = model(x)
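For contrast, here is a minimal sketch of both phases side by side; the linear model, input tensor, target, and optimizer are hypothetical stand-ins for whatever your actual setup uses:

import torch
import torch.nn as nn

# Hypothetical stand-ins for illustration only.
model = nn.Linear(10, 2)
x = torch.randn(4, 10)
target = torch.randn(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: gradients must stay enabled, so no torch.no_grad() here.
model.train()
loss = nn.functional.mse_loss(model(x), target)
optimizer.zero_grad()
loss.backward()   # works because the forward pass built a graph
optimizer.step()

# Inference: disable gradient tracking to save memory and compute.
model.eval()
with torch.no_grad():
    out = model(x)
print(out.requires_grad)  # False: no graph was built inside the guard

Note that model.eval() and torch.no_grad() do different things: the former changes layer behavior (e.g. dropout, batchnorm), while the latter stops autograd from recording operations, so you generally want both at inference time.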