When the model is in model.eval() phase, this is what I'm doing now that I'm on 0.4:
with torch.no_grad():
    for i, input in enumerate(data_loader):
        input_var = input.requires_grad_().to(gpu)
        output = model(input_var)
It feels redundant to write with torch.no_grad(): and then call .requires_grad_().
Is this the right format? I don't need to save gradients because I'm not training.
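For what it's worth, here is a minimal sketch of the pattern without the .requires_grad_() call. Inside torch.no_grad() no graph is recorded anyway, so flipping requires_grad on the input has no effect. The tiny model, data_loader, and device here are hypothetical stand-ins for your own:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for your model, loader, and device.
model = nn.Linear(4, 2)
data_loader = [torch.randn(8, 4) for _ in range(3)]
device = torch.device("cpu")  # swap in "cuda" when a GPU is available

model.eval()                          # eval mode (dropout/batchnorm behavior)
with torch.no_grad():                 # disables autograd for the whole block
    for i, input in enumerate(data_loader):
        input = input.to(device)      # no requires_grad_() needed here
        output = model(input)

print(output.requires_grad)  # False: no gradient history was recorded
```

Note that model.eval() and torch.no_grad() do different things: eval() switches layer behavior, while no_grad() stops autograd from saving history, so for inference you typically want both.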