Question about no_grad in 0.4

In the model.eval() phase, this is what I'm doing now that I'm on 0.4:

with torch.no_grad():
    for i, input in enumerate(data_loader):
        input_var = input.requires_grad_().to(gpu)
        output = model(input_var)

It feels redundant to write with torch.no_grad(): and then call .requires_grad_().

Is this the right format? I don't need to keep gradients because I'm not training.

Hi, .requires_grad_() should be used only to make a tensor require gradients. In this case you don't want gradients, so you should remove that call.
Otherwise, this is the right usage of torch.no_grad().
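For reference, here is a minimal self-contained sketch of the corrected eval loop. The model, data_loader, and device here are hypothetical stand-ins for whatever you have in your script:

import torch
import torch.nn as nn

# Stand-ins for the model and data loader from the question
model = nn.Linear(10, 2)
data_loader = [torch.randn(4, 10) for _ in range(3)]
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

model.eval()
with torch.no_grad():                        # autograd is disabled inside this block
    for i, input in enumerate(data_loader):
        input = input.to(device)             # no .requires_grad_() needed
        output = model(input)
        assert not output.requires_grad      # confirms no graph was built

Inside the no_grad() block no computation graph is recorded, so you save memory and time during evaluation even without touching requires_grad on the inputs.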
