In the past, we could exclude a variable from gradient computation by doing:
states = Variable(torch.Tensor(width, height), volatile=True)
It seems this API has changed to with torch.no_grad(), but I could not find it on the docs pages.
It would seem that this is the correct way to avoid computing gradients on a variable going forward:
>>> x = Variable(torch.Tensor([34, 54]), requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
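For reference, here is a minimal sketch of what I mean (assuming PyTorch 0.4+, where Variable is merged into Tensor), checking that the result inside the context does not track gradients:

```python
import torch

# A tensor created with requires_grad=True tracks operations for autograd.
x = torch.tensor([34.0, 54.0], requires_grad=True)

# Inside torch.no_grad(), new results are excluded from gradient tracking.
with torch.no_grad():
    y = x * 2

print(x.requires_grad)  # True
print(y.requires_grad)  # False
```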
Am I correct?