Autograd in Neural Transfer

Hey guys,

I’m trying to translate a neural style transfer implementation from Torch7 to PyTorch, and I’m confused about how the custom loss layer maps from Torch to PyTorch.

In Torch7, I understand ContentLoss:updateOutput(input), but ContentLoss:updateGradInput(input, gradOutput) confuses me:

function ContentLoss:updateGradInput(input, gradOutput)
  if self.mode == 'loss' then
    if input:nElement() == self.target:nElement() then
      -- gradient of the MSE criterion w.r.t. the input
      self.gradInput = self.crit:backward(input, self.target)
    end
    if self.normalize then
      -- normalize the gradient by its L1 norm
      self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8)
    end
    self.gradInput:mul(self.strength)
    -- add the gradient flowing in from the layer above
    self.gradInput:add(gradOutput)
  else
    -- pass-through mode: just forward the incoming gradient
    self.gradInput:resizeAs(gradOutput):copy(gradOutput)
  end
  return self.gradInput
end

In the PyTorch version, I don’t see the equivalent of self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8) that the Torch version applies.

Also, if I want to insert some code/operations into the equivalent of ContentLoss:updateGradInput(input, gradOutput), how can I do that in PyTorch?

Thanks!

References:
neural transfer in Torch: https://github.com/jcjohnson/neural-style/blob/master/neural_style.lua#L472
neural transfer in PyTorch: http://pytorch.org/tutorials/advanced/neural_style_tutorial.html#pytorch-implementation

Hi,

The main difference between Torch7 and PyTorch is that Torch7 does not have autograd, so you have to write the backward pass for every layer yourself. That is not the case in PyTorch: when working with nn.Modules, you only need to implement forward (the equivalent of updateOutput in Torch7), and autograd derives the backward pass for you.
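
For example, a content-loss layer can be written as a pass-through nn.Module that only records the loss in its forward pass; autograd then takes care of the gradients. Here is a minimal sketch along the lines of the linked tutorial (the strength argument mirrors self.strength from the Torch code and is my own addition, not necessarily the tutorial's exact signature):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentLoss(nn.Module):
    # Pass-through layer: records the content loss and returns the input as-is.
    def __init__(self, target, strength=1.0):
        super().__init__()
        # detach() makes the target a constant rather than a node in the graph
        self.target = target.detach()
        self.strength = strength
        self.loss = torch.tensor(0.0)

    def forward(self, input):
        # only the forward pass is written; autograd derives the backward
        self.loss = self.strength * F.mse_loss(input, self.target)
        return input

If you do need to inject extra operations into the backward pass, for instance something like the L1 normalization self.gradInput:div(torch.norm(self.gradInput, 1) + 1e-8) from the Torch version, you can write a custom torch.autograd.Function with an explicit backward, or attach a hook to a tensor with register_hook. Below is a rough sketch of the Function approach; NormalizeGrad is a hypothetical name, and note it is not identical to the Torch code, which normalizes only the criterion's gradient before adding gradOutput:

class NormalizeGrad(torch.autograd.Function):
    # Identity in the forward pass; rescales the gradient in the backward pass.
    @staticmethod
    def forward(ctx, input):
        return input

    @staticmethod
    def backward(ctx, grad_output):
        # rough analogue of gradInput:div(torch.norm(gradInput, 1) + 1e-8)
        return grad_output / (torch.norm(grad_output, p=1) + 1e-8)

# usage: insert it between two existing operations in the graph
x = torch.randn(4, requires_grad=True)
y = NormalizeGrad.apply(x) * 2.0
y.sum().backward()
print(x.grad)

An even lighter-weight option, assuming the input requires grad, is a hook registered inside forward(), e.g. input.register_hook(lambda g: g / (torch.norm(g, p=1) + 1e-8)), which lets you modify the incoming gradient without writing a Function at all.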