Volatile was removed and now has no effect. Use `with torch.no_grad():` instead

for batch_idx, image in enumerate(data_loader):
    image = Variable(image, volatile=True)

This is my original code, and I get this message:

UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.

I searched about this and found that there is no need to use Variable() anymore. So should I just use

for batch_idx, image in enumerate(data_loader):
    # just use image directly ...

or change it like this?

with torch.no_grad():
    image = Variable(image)

Using

with torch.no_grad():
    for batch_idx, image in enumerate(data_loader):
        # perform your operations on image without wrapping it in a Variable

should work, since Variables and tensors have been merged in v0.4.
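A complete evaluation loop could then look like this (just a sketch; model and data_loader stand in for your own objects):

model.eval()  # switch layers like dropout/batchnorm to eval mode
with torch.no_grad():  # disable gradient tracking for the whole loop
    for batch_idx, image in enumerate(data_loader):
        output = model(image)  # image is a plain tensor, no Variable needed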


I used torch.no_grad() as you mentioned, but now I am getting this error from loss.backward():

element 0 of tensors does not require grad and does not have a grad_fn

Instead of inputs = Variable(inputs, volatile=True), I used:

inputs = Variable(inputs, requires_grad=True)

It doesn't show any errors, but I am not sure that it is correct.

If you are using with torch.no_grad():, you explicitly say you don’t want to calculate gradients for the operations inside this block, so loss.backward() won’t work. Usually you use it for the validation/test dataset.
If you need to call loss.backward(), you shouldn't use it.
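Put differently, gradients stay enabled for the training loop and are only disabled for evaluation. A minimal sketch (model, criterion, optimizer, and the loaders are placeholders for your own objects):

# training: gradients are needed for loss.backward()
model.train()
for inputs, targets in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

# validation: no gradients needed, so wrap it in no_grad
model.eval()
with torch.no_grad():
    for inputs, targets in val_loader:
        val_loss = criterion(model(inputs), targets)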


I am testing a project. There is a function:

def tensor2var(self, tensor, volatile=False):
    # some code ...
    var = torch.autograd.Variable(tensor, volatile=volatile)

When I run this, I get the warning that volatile is deprecated and I should use with torch.no_grad(): instead.

Can you please elaborate on how to use torch.no_grad() in this function?

That might be a bit tricky, as the volatile argument is not an attribute of the data anymore but of the workflow. You could probably write some workaround using torch.set_grad_enabled, but I’m not sure if this will fit your use case.

The proper way now would be to wrap the complete method in a with torch.no_grad() block. Would that work for you?
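For example, the workaround could look like this (just a sketch; model and tensor are placeholders, and the important part is that the context manager has to wrap the actual operations, since the grad mode is no longer stored on the tensor itself):

volatile = True  # stands in for the old argument
with torch.set_grad_enabled(not volatile):
    output = model(tensor)  # tracked by autograd only when volatile is False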


Thank you for the quick reply.
I will try it and let you know.

I read this in the PyTorch docs:
Variable has been deprecated in the new versions of PyTorch. Tensors are the same as the old Variables.

So do I need to remove Variable from this code?

var = torch.autograd.Variable(tensor, volatile=volatile)

I didn't understand.

Yes, Variables are deprecated in newer PyTorch versions.
Instead of setting volatile=True, you would now have to wrap your code in a torch.no_grad() block:

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)
    output = model(x)

This will make sure to avoid storing the intermediate activations (same as with volatile=True).
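You can verify that no graph is created by checking the output of the snippet above:

print(output.requires_grad)  # False, since it was computed inside no_grad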


OK, I got it now.
Thank you so much.

Hello,
as you suggested, I replaced the code as follows:

def tensor2var(self, tensor, requires_grad=True):
    # some code ...
    with torch.no_grad():
        var = torch.autograd.Variable(tensor)

Where this function is called, I also replaced volatile with requires_grad=True, and it worked.

Thank you so much.

Good to hear it’s working now.
However, you don’t need to wrap your tensors in Variables anymore, so just use the tensors directly and set the requires_grad argument while creating the tensor.

Do you mean

var = (tensor, requires_grad=True)

like this?

requires_grad is an attribute of the tensor, so you should set it e.g. as:

# set it while creating the tensor
x = torch.tensor([1., 2., 3.], requires_grad=True)

x = torch.randn(1, requires_grad=True)

# or change it in-place afterwards
x = torch.randn(1)
x.requires_grad_(True)

OK.
Can we export a trained PyTorch model to Android Studio?

I also faced this issue in the following line:

return Variable(tensor(np.concatenate(datom, 0)), volatile=self.volatile), chunk_sizes

UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.

Any help, please?

Wrap your code in the torch.no_grad() guard, since Variables are deprecated:

# before
...
x = Variable(torch.randn(1), volatile=True)
return x

# now
with torch.no_grad():
    ...
    x = torch.randn(1)
return x

Hi,
I have got the same warning, but my code is different. Can you help me, please?
This is the warning from my utils.py file:

content/gdrive/My Drive/DeepFakeDetection/utils.py:21: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(x, volatile=volatile)

And this is my code in the utils.py file:

def to_var(x, volatile=False):
    if torch.cuda.is_available():
        x = x.cuda()
    return Variable(x, volatile=volatile)

Thank you

Variables are deprecated since PyTorch 0.4, and with them the volatile argument.
You can use tensors now, and if you don’t need to calculate gradients (as was specified via volatile=True), you should wrap the code in a torch.no_grad() block:

with torch.no_grad():
    x = x + 1 # Autograd won't track these operations
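Applied to your to_var, that could look like this (just a sketch; model stands in for your own model, and whether you need the no_grad block depends on the use case, e.g. validation only):

def to_var(x):
    # Variables are gone, so just move the tensor to the GPU if available
    if torch.cuda.is_available():
        x = x.cuda()
    return x

# at the call site, e.g. during validation:
with torch.no_grad():
    output = model(to_var(x))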

As you said, I changed my code to this and did not receive the warning again.
Thank you.

def to_var(x, volatile=False):
    with torch.no_grad():
        if torch.cuda.is_available():
            x = x.cuda()
    return x