torch.no_grad()

In the past, we were able to separate a variable from the current state of a network by doing:

states = Variable(torch.Tensor(width, height), volatile=True)

It seems the API has changed to with torch.no_grad(), but I could not find the documentation for it on the docs pages.

It would seem that this is now the correct way to avoid calculating gradients on a variable:

        >>> x = Variable(torch.Tensor([34, 54]), requires_grad=True)
        >>> with torch.no_grad():
        ...     y = x * 2

Am I correct?


Yes. The reason it is not in the docs yet is that it is not in any official release. We will make sure to document such things properly before a new release.

It got pulled in when I built a Docker image this evening.

People can already see it. The earlier the documentation is updated, the better.


We should definitely update it soon. Thanks for reminding us. I will ask people who are more familiar with the change to write the doc.

That said, you are compiling from GitHub master, i.e., not a release. Although we are usually pretty good at keeping the docs in sync with master, there can sometimes be some delay.


I faced the same issue: not only does it give a warning, it also consumes much more memory during validation, because it computes gradients unnecessarily. However, the small code change shown in the post fixes both issues.
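For anyone hitting the same thing, here is a minimal sketch of a validation loop wrapped in torch.no_grad(); the model, loss, and loader below are just stand-ins for whatever your actual script defines:

    import torch
    import torch.nn as nn

    # Stand-in model and data; in a real script these come from the training code.
    model = nn.Linear(10, 2)
    criterion = nn.CrossEntropyLoss()
    val_loader = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(3)]

    model.eval()                   # switch off dropout / batch-norm updates
    total_loss = 0.0
    with torch.no_grad():          # no autograd graph is built, so validation memory stays low
        for data, target in val_loader:
            output = model(data)
            total_loss += criterion(output, target).item()
    print(total_loss / len(val_loader))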


For which version is this available? It seems that 0.3.1 yields an error:

AttributeError: module 'torch' has no attribute 'no_grad'

It’s only on master.

If I have code written by someone else (see source), is there a way to replace this with something that does not require master?
I can’t use master because it won’t run on my GPU with CUDA compute capability 5.0.

I’m having the same issue @Brando_Miranda

The function was added in 0.4.0. You are probably using an older version of PyTorch.
You can find the install instructions on the website.
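If you are unsure which version you have, a quick check like this (just printing the version and testing for the attribute) will tell you:

    import torch

    print(torch.__version__)           # torch.no_grad() needs 0.4.0 or newer
    print(hasattr(torch, "no_grad"))   # False on 0.3.1, True from 0.4.0 onwards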


I am getting the same warning when using:
data, target = Variable(data, volatile=True), Variable(target)

Should it be changed to:

with torch.no_grad():
    data, target = Variable(data), Variable(target)
  1. You shouldn’t use Variable wrappers anymore.
  2. Use torch.no_grad() to wrap around the entire inference code (or use torch.set_grad_enabled); a short sketch follows below.
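A rough sketch of both options, using a toy model as a placeholder:

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)           # toy stand-in model
    data = torch.randn(4, 10)          # plain tensors, no Variable wrapper needed

    # Option 1: wrap the whole inference block in torch.no_grad()
    with torch.no_grad():
        out = model(data)
    print(out.requires_grad)           # False

    # Option 2: torch.set_grad_enabled() toggles grad tracking based on a flag
    is_train = False
    with torch.set_grad_enabled(is_train):
        out = model(data)
    print(out.requires_grad)           # False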

Can you please show how to use with torch.no_grad(): and how to replace volatile?

I am testing a GitHub project, and they used volatile=False as a function argument. Do I need to replace it with requires_grad=True?

In fact, you can read the official documentation: torch.no_grad() is covered in the torch.autograd section, https://pytorch.org/docs/stable/autograd.html#locally-disable-grad
In addition, the Variable() constructor now returns a torch tensor, which is equivalent to a tensor with requires_grad=False.

The Variable API has been deprecated: Variables are no longer necessary to use autograd with tensors. Autograd automatically supports Tensors with requires_grad set to True. Below please find a quick guide on what has changed:

  • Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables (a quick check is shown below).
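As an illustrative check of that point (not taken from the guide itself):

    import torch
    from torch.autograd import Variable

    t = torch.ones(2, 3)
    v = Variable(t)                    # still accepted, but it just returns a Tensor
    print(type(v))                     # <class 'torch.Tensor'>
    print(v.requires_grad)             # False by default

    w = torch.ones(2, 3, requires_grad=True)   # modern replacement for Variable(t, requires_grad=True)
    print(w.requires_grad)             # True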