Hey guys,
I'm trying to translate neural style transfer code from Torch7 to PyTorch, and I'm confused about the custom loss layer. In Torch7 I can follow `ContentLoss:updateOutput(input)`, but `ContentLoss:updateGradInput(input, gradOutput)` confuses me:
```lua
function ContentLoss:updateGradInput(input, gradOutput)
  if self.mode == 'loss' then
    if input:nElement() == self.target:nElement() then
      self.gradInput = self.crit:backward(input, self.target)
    end
    if self.normalize then
      self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8)
    end
    self.gradInput:mul(self.strength)
    self.gradInput:add(gradOutput)
  else
    self.gradInput:resizeAs(gradOutput):copy(gradOutput)
  end
  return self.gradInput
end
```
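As far as I understand, the PyTorch tutorial doesn't write a manual backward at all because autograd derives the gradient of the loss automatically. A minimal sketch of that style of pass-through content-loss module (my own reconstruction, not the tutorial's exact code; `strength` is my name for the loss weight):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentLoss(nn.Module):
    """Pass-through module that records the content loss as a side effect.

    autograd computes the backward pass automatically, so there is no
    equivalent of updateGradInput to write by hand.
    """
    def __init__(self, target, strength=1.0):
        super().__init__()
        self.target = target.detach()  # fixed target; no gradient tracked
        self.strength = strength
        self.loss = None

    def forward(self, input):
        # record the weighted MSE loss, then let the image flow onward
        self.loss = self.strength * F.mse_loss(input, self.target)
        return input
```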
In the PyTorch tutorial I don't see an equivalent of `self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8)` from the Torch version. Also, if I want to insert some code/operations into the backward pass (the equivalent of `ContentLoss:updateGradInput(input, gradOutput)`), how can I do that in PyTorch?
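One way I've seen to customize the backward pass is a custom `torch.autograd.Function`. Here is a hedged sketch of how the Torch `updateGradInput` logic (MSE gradient, L1 normalization, `strength` scaling, adding the upstream gradient) might be ported; `ContentLossFn` and its argument names are my own, and the MSE gradient `2*(input-target)/n` assumes a size-averaged criterion like `nn.MSECriterion`:

```python
import torch
from torch.autograd import Function

class ContentLossFn(Function):
    @staticmethod
    def forward(ctx, input, target, strength, normalize):
        # pass-through forward: the image keeps flowing through the net,
        # while the loss gradient is injected in backward
        ctx.save_for_backward(input, target)
        ctx.strength = strength
        ctx.normalize = normalize
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, target = ctx.saved_tensors
        # gradient of size-averaged MSE: 2 * (input - target) / n
        grad = 2.0 * (input - target) / input.numel()
        if ctx.normalize:
            # L1-normalize, mirroring gradInput:div(norm(gradInput, 1) + 1e-8)
            grad = grad / (grad.norm(1) + 1e-8)
        grad = grad * ctx.strength      # gradInput:mul(self.strength)
        grad = grad + grad_output       # gradInput:add(gradOutput)
        # no gradients for target / strength / normalize
        return grad, None, None, None
```

A lighter-weight alternative, if you only need to tweak an existing gradient rather than replace it, is `tensor.register_hook(fn)`, which lets you transform the gradient of a tensor during backward.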
thanks
ref:
neural transfer in Torch: https://github.com/jcjohnson/neural-style/blob/master/neural_style.lua#L472
neural transfer in PyTorch: http://pytorch.org/tutorials/advanced/neural_style_tutorial.html#pytorch-implementation