If you use with torch.no_grad():, you explicitly say you don't want to calculate gradients for the operations inside this block, so loss.backward() will fail for anything computed there. You would usually use it for the validation/test dataset.
If you need to call loss.backward(), you shouldn't use it.
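As a minimal sketch (the model, criterion, and data below are stand-ins, not taken from any particular project), a typical validation pass looks like this:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)               # stand-in model
criterion = nn.CrossEntropyLoss()

model.eval()
with torch.no_grad():                  # autograd records no graph in this block
    data = torch.randn(8, 10)          # dummy validation batch
    target = torch.randint(0, 2, (8,))
    output = model(data)
    loss = criterion(output, target)
    # loss.backward() here would raise a RuntimeError, since no graph was recorded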
I am testing a project.
There is a function
def tensor2var(self, tensor, volatile=False):
    # some code…
    var = torch.autograd.Variable(tensor, volatile=volatile)
When I run this, I get the warning:
volatile is deprecated. Use with torch.no_grad(): instead.
Can you please elaborate on how to use torch.no_grad() in this function?
That might be a bit tricky, as the volatile argument is no longer an attribute of the data but of the workflow. You could probably write a workaround using torch.set_grad_enabled, but I'm not sure if this will fit your use case.
The proper way now would be to wrap the complete method in a with torch.no_grad() block, as sketched below. Would that work for you?
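Such a workaround might look like this (just a sketch; the body of tensor2var is unknown, so the old volatile flag is simply mapped onto the grad mode):

import torch

class Converter:
    def tensor2var(self, tensor, volatile=False):
        # set_grad_enabled(False) behaves like no_grad() inside this block,
        # mimicking the old volatile=True; Variables are no longer needed
        with torch.set_grad_enabled(not volatile):
            # ... rest of the original method body ...
            return tensor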
Yes, Variables are deprecated in newer PyTorch versions.
Instead of setting volatile=True, you would now have to wrap your code in a torch.no_grad() block:
with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)
    output = model(x)
This makes sure the intermediate activations are not stored (the same effect volatile=True had).
Hello,
As you suggested, I replaced the code as follows:

def tensor2var(self, tensors, requires_grad=True):
    # some code…
    with torch.no_grad():
        var = torch.autograd.Variable(tensors)

And where this function is called, I also replaced volatile with requires_grad=True.
Good to hear it's working now.
However, you don't need to wrap your tensors in Variables anymore, so just use the tensors directly and set the requires_grad argument while creating the tensor, as shown below.
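For example (a minimal sketch; the shapes are arbitrary):

import torch

x = torch.randn(2, 3, requires_grad=True)  # autograd will track operations on x
y = torch.randn(2, 3)                      # requires_grad defaults to False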
Hi,
I have got the same warning, but my code is different. Can you help me, please?
This is the warning from my utils.py file:
content/gdrive/My Drive/DeepFakeDetection/utils.py:21: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(x, volatile=volatile)
And this is my code in the utils.py file:
def to_var(x, volatile=False):
    if torch.cuda.is_available():
        x = x.cuda()
    return Variable(x, volatile=volatile)
Variables are deprecated since PyTorch 0.4, and with them the volatile argument.
You can use tensors directly now, and if you don't need to calculate gradients (which volatile=True used to indicate), you should wrap the code in a torch.no_grad() block:
with torch.no_grad():
    x = x + 1  # Autograd won't track these operations
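Applied to the to_var function above, an updated version might look like this (a sketch; the model and input are stand-ins, and the no_grad decision moves to the call site, since grad mode is now a property of the workflow rather than of the tensor):

import torch
import torch.nn as nn

def to_var(x):
    # Variables are gone in PyTorch >= 0.4; just move the tensor to the GPU if available
    if torch.cuda.is_available():
        x = x.cuda()
    return x

# Stand-in model and input, only for illustration
model = nn.Linear(4, 2)
if torch.cuda.is_available():
    model = model.cuda()

inp = torch.randn(1, 4)

# Where volatile=True was used before, disable grad tracking at the call site:
with torch.no_grad():
    out = model(to_var(inp))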