This is the whole PyTorch script: https://gist.github.com/ProGamerGov/753b64404547662b9ff3d816a7f88f9f#file-test2-py-L144-L195
Specifically, I am having trouble translating these functions, written for Lua’s Torch7 library, into PyTorch: https://github.com/jcjohnson/neural-style/blob/master/neural_style.lua#L449-L564
For example, the Lua code uses:
self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8)
And I translated it into:
self.gradInput.div_(torch.norm(self.gradInput, 1) + 1e-8) # Normalize Gradients
But I can’t get any form of it to work inside the ContentLoss or StyleLoss functions with the input variable.
Trying to implement any of the other parts of the Lua ContentLoss and StyleLoss functions also results in the same error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
I am using nn.Module for the ContentLoss and StyleLoss functions.
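To make the failure easier to see outside of the full script, here is a stripped-down sketch of the kind of pattern that seems to trigger it for me (made-up tensors, not my actual ContentLoss code):

import torch
from torch.autograd import Variable

x = Variable(torch.randn(3), requires_grad=True)
y = x.sigmoid()                   # sigmoid saves its output for backward
y.div_(torch.norm(y, 1) + 1e-8)   # in-place division on a Variable in the graph
y.sum().backward()                # RuntimeError: ... modified by an inplace operation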
apsvieira (Antonio Pedro), March 1, 2018, 12:30am
Not really familiar with Lua Torch, but the error seems to be coming from using in-place operations like .div_ on Variables. You could try replacing those with their out-of-place counterparts, like .div, and checking if that fixes your code.
From the docs:
Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases.
You can read more about it here: Autograd mechanics
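For example, the line you quoted could be written out-of-place, something along these lines (untested sketch, reusing your names):

self.gradInput = self.gradInput.div(torch.norm(self.gradInput, 1) + 1e-8)  # .div returns a new Variable instead of mutating

The same goes for any .mul_ / .add_ calls: use .mul / .add and rebind the result, so nothing that autograd has saved for backward gets overwritten in place.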
Does one have to use autograd, or are there other options in PyTorch?
Is there a PyTorch equivalent of resizeAs?
AttributeError: 'Variable' object has no attribute 'resizeAs_'
The error comes from this code:
class ContentLoss(nn.Module):

    def __init__(self, target, strength):
        super(ContentLoss, self).__init__()
        self.target = target.detach() * strength
        self.strength = strength
        self.crit = nn.MSELoss()
        self.register_parameter('loss_mode', None)
        self.normalize = 'Yes'

    def forward(self, input):
        #self.output = input.clone()
        if self.loss_mode is None:
            self.target.resizeAs_(input).clone(input)
        else:
            self.loss = self.crit(self.G, self.target) * self.strength
        self.output = input
        return self.output

    def backward(self, input, gradOutput):
        if self.loss_mode is None:
            self.gradInput.resizeAs_(gradOutput).clone(gradOutput)
        else:
            if input.data.nelement() == self.target.nelement():
                self.gradInput = self.crit(input, self.target)
            if self.normalize == 'Yes':
                self.gradInput.div_(torch.norm(self.gradInput, 1) + 1e-8)  # Normalize Gradients
            self.gradInput.mul_(self.strength)
            self.gradInput.add_(gradOutput)
        #self.loss.backward(retain_graph=retain_graph)
        return self.gradInput
You have a typo. Try Variable.resize_as_.
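Also note that .clone() does not take a tensor argument; the Lua resizeAs/copy pattern would be resize_as_(...) followed by copy_(...). That said, you usually don’t need a hand-written backward for modules like this in PyTorch: compute the loss in forward, store it on the module, and let autograd produce the gradients. A rough sketch along those lines, reusing your names (untested, and the register_hook line only approximates the Lua gradient normalization, since it rescales the whole gradient flowing through this point rather than just the loss term):

import torch
import torch.nn as nn


class ContentLoss(nn.Module):

    def __init__(self, target, strength, normalize=True):
        super(ContentLoss, self).__init__()
        self.target = target.detach() * strength
        self.strength = strength
        self.crit = nn.MSELoss()
        self.normalize = normalize
        self.loss = None

    def forward(self, input):
        if self.normalize and input.requires_grad:
            # rough stand-in for the Lua gradient normalization
            input.register_hook(lambda grad: grad.div(torch.norm(grad, 1) + 1e-8))
        self.loss = self.crit(input * self.strength, self.target)
        return input

Then, after the forward pass, you call backward on the sum of the stored losses instead of implementing backward yourself.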