Yes, the reason it is not in the docs yet is that it is not in any official release. We will make sure to document such things properly before a new release.
We should definitely update it soon. Thanks for reminding us. I will ask people who are more familiar with the change to write the doc.
That said, you are compiling from github master, i.e., not a release. Although we are usually pretty good at keeping the doc in sync with master, there can sometimes be some delays.
I faced the same issue. Not only does it emit a warning, it also consumes much more memory during validation, because it computes gradients unnecessarily. However, the small code change shown in the post fixes both issues.
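For reference, the kind of change being described is wrapping the validation pass in `torch.no_grad()`. This is a minimal sketch (the model and data here are made up for illustration), showing that gradient tracking is disabled inside the context, so no autograd graph is kept alive:

```python
import torch
import torch.nn as nn

# Hypothetical tiny model and batch, just for illustration
model = nn.Linear(10, 2)
data = torch.randn(4, 10)

# Without no_grad(): the forward pass builds an autograd graph,
# which is wasted work and memory during validation.
out_with_grad = model(data)
print(out_with_grad.requires_grad)  # True

# With no_grad(): gradient tracking is disabled, so no graph is
# built and validation uses less memory.
with torch.no_grad():
    out_no_grad = model(data)
print(out_no_grad.requires_grad)  # False
```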
If I have code written by someone else (see source), is there a way to replace this with something that is not in master?
I can’t use master because it won’t run with my CUDA compute 5.0 GPU.
In fact, you can read about `torch.no_grad()` in the official documentation, in the torch.autograd section: https://pytorch.org/docs/stable/autograd.html#locally-disable-grad
In addition, the `Variable()` function now returns a torch Tensor, which is equivalent to a tensor created with `requires_grad=False` by default.
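A quick sketch of the "locally disable grad" behavior that the linked autograd docs describe: operations performed inside a `torch.no_grad()` block produce tensors that do not track gradients, even when their inputs do.

```python
import torch

# A tensor that tracks gradients
x = torch.ones(3, requires_grad=True)

# Inside torch.no_grad(), autograd is locally disabled:
# the result does not track gradients.
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False

# Outside the block, tracking resumes as usual.
z = x * 2
print(z.requires_grad)  # True
```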
The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True. Below please find a quick guide on what has changed:
Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables.
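To illustrate the quoted guide: `Variable()` still accepts a tensor and the `requires_grad` flag, but what comes back is a plain `torch.Tensor` rather than a separate Variable type.

```python
import torch
from torch.autograd import Variable

t = torch.ones(2)

# Variable(tensor, requires_grad) still works as expected...
v = Variable(t, requires_grad=True)

# ...but the result is just a Tensor, not a distinct Variable class.
print(type(v))           # <class 'torch.Tensor'>
print(v.requires_grad)   # True
```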