I'm trying to modify grad_input in my backward hook, but I'm getting the following error:
Traceback (most recent call last):
  File "st.py", line 260, in <module>
    optimizer.step(feval)
  File "/usr/local/lib/python2.7/dist-packages/torch/optim/lbfgs.py", line 101, in step
    orig_loss = closure()
  File "st.py", line 255, in feval
    loss.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 167, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: hook 'backward' has changed the size of value
def backward(self, grad_input, grad_output):
    gradInputTuple = ()
    gradOutput = grad_output[0]
    for input in grad_input:
        print(str(input.size()) + " " + str(self.target.size()))
        if input.nelement() == self.target.nelement():
            self.gradInput = mse_loss2(input, self.target)
            if self.normalize == 'True':
                self.gradInput = self.gradInput.div(torch.norm(self.gradInput, 1) + 1e-8)  # Normalize gradients
            self.gradInput = self.gradInput * self.strength
            self.gradInput = self.gradInput + gradOutput
            print("gradInput.size():")
            print(str(self.gradInput.size()))
            gradInputTuple = list(gradInputTuple)
            gradInputTuple.append(self.gradInput)
            gradInputTuple = tuple(gradInputTuple)
        else:
            self.target = gradOutput
    return gradInputTuple  # self.gradInput
I am trying to replicate this Lua/Torch code:
function ContentLoss:updateGradInput(input, gradOutput)
  if self.mode == 'loss' then
    if input:nElement() == self.target:nElement() then
      self.gradInput = self.crit:backward(input, self.target)
    end
    if self.normalize then
      self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8)
    end
    self.gradInput:mul(self.strength)
    self.gradInput:add(gradOutput)
  else
    self.gradInput:resizeAs(gradOutput):copy(gradOutput)
  end
  return self.gradInput
end
What am I doing wrong here?
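For reference, my understanding is that a module backward hook has to return a tuple with exactly as many entries as grad_input (one gradient per entry, each with an unchanged shape). Here is a minimal sketch of a hook that satisfies that contract; it is a toy hook on nn.Linear, not my ContentLoss, and it uses the newer register_full_backward_hook API (older versions use register_backward_hook):

```python
import torch
import torch.nn as nn

def scale_grads(module, grad_input, grad_output):
    # Return a tuple of the SAME length as grad_input, transforming each
    # element in place of the original; returning a tuple of a different
    # length raises "hook 'backward' has changed the size of value".
    return tuple(g * 2.0 if g is not None else None for g in grad_input)

lin = nn.Linear(3, 2)
lin.register_full_backward_hook(scale_grads)

x = torch.randn(4, 3, requires_grad=True)
lin(x).sum().backward()
# x.grad is now twice the unhooked gradient, but keeps the shape of x.
```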