Hi,

I have been working with Torch7 and only recently switched to PyTorch, so I might be missing something very basic. I wanted to port some code to Python. While I could easily port most of the network and related code without any issues, I need some information regarding the criterion.

One of the pieces of code I need to port is a scale-invariant MSE criterion. The original code uses a standard MSE criterion internally, but does some extra processing in the forward and backward passes. The code is:

```
-- weighted MSE criterion, scale invariant, and with mask
local WeightedMSE, parent = torch.class('nn.WeightedMSE', 'nn.Criterion')

function WeightedMSE:__init(scale_invariant)
    parent.__init(self)
    -- we use a standard MSE criterion internally
    self.criterion = nn.MSECriterion()
    self.criterion.sizeAverage = false
    -- whether to consider scale invariance
    self.scale_invariant = scale_invariant or false
end

-- targets should contain {target, weight}
function WeightedMSE:updateOutput(pred, targets)
    local target = targets[1]
    local weight = targets[2]
    -- scale-invariant: rescale the pred to the target's scale
    if self.scale_invariant then
        -- get the dimension and size
        local dim = target:dim()
        local size = target:size()
        for i = 1, dim - 2 do
            size[i] = 1
        end
        local tensor1 = torch.cmul(pred, target)
        local tensor2 = torch.cmul(pred, pred)
        -- get the scale over the last two dimensions
        self.scale = torch.cdiv(tensor1:sum(dim):sum(dim - 1), tensor2:sum(dim):sum(dim - 1))
        -- patch NaN
        self.scale[self.scale:ne(self.scale)] = 1
        -- constrain the scale to [0.1, 10]
        self.scale:cmin(10)
        self.scale:cmax(0.1)
        -- expand the scale
        self.scale = self.scale:repeatTensor(size)
        -- re-scale the pred
        pred:cmul(self.scale)
    end
    -- sum for normalization
    self.alpha = torch.cmul(weight, weight):sum()
    if self.alpha ~= 0 then
        self.alpha = 1 / self.alpha
    end
    -- apply the weight to pred and target, and keep a record so we do not
    -- need to recompute them in the backward pass
    self.weighted_pred = torch.cmul(pred, weight)
    self.weighted_target = torch.cmul(target, weight)
    return self.criterion:forward(self.weighted_pred, self.weighted_target) * self.alpha
end

function WeightedMSE:updateGradInput(input, target)
    self.grad = self.criterion:backward(self.weighted_pred, self.weighted_target)
    if self.scale then
        self.grad:cdiv(self.scale)
        -- patch NaN
        self.grad[self.grad:ne(self.grad)] = 0
    end
    return self.grad * self.alpha
end
```
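From what I understand, in PyTorch I could write only the forward pass as an `nn.Module` and let autograd derive the backward pass. Here is my rough, untested sketch of the port (the assumption that the scale is computed over the last two dimensions, and the `detach()` to treat it as a constant, are my own reading of the Lua code):

```python
import torch
import torch.nn as nn

class WeightedMSE(nn.Module):
    """Weighted MSE with an element-wise mask, optionally scale-invariant."""

    def __init__(self, scale_invariant=False):
        super().__init__()
        self.scale_invariant = scale_invariant

    def forward(self, pred, target, weight):
        if self.scale_invariant:
            # least-squares scale aligning pred to target, computed over the
            # last two (spatial) dimensions; broadcasting replaces repeatTensor
            num = (pred * target).sum(dim=(-2, -1), keepdim=True)
            den = (pred * pred).sum(dim=(-2, -1), keepdim=True)
            scale = num / den
            # patch NaN (from 0/0) and constrain the scale to [0.1, 10]
            scale = torch.where(torch.isnan(scale), torch.ones_like(scale), scale)
            scale = scale.clamp(0.1, 10.0)
            # detach so autograd treats the scale as a constant
            pred = pred * scale.detach()
        # normalize by the total squared weight
        alpha = (weight * weight).sum()
        if alpha != 0:
            alpha = 1.0 / alpha
        return ((weight * (pred - target)) ** 2).sum() * alpha
```

One thing I am unsure about: even with the `detach()`, the gradients would not match the Lua version exactly, because `updateGradInput` there divides the gradient by the scale instead of multiplying.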

The full source code is at https://github.com/shi-jian/shapenet-intrinsics/blob/master/train/Criterion.lua
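If I need to reproduce that hand-written backward exactly (it divides the gradient by the scale, so it is not the true gradient of the forward pass), my guess is that `torch.autograd.Function` is the right tool. A minimal, untested sketch of the idea (class and variable names are my own):

```python
import torch

class ScaledMSEFunction(torch.autograd.Function):
    """Sketch: sum-of-squares loss whose backward divides by a precomputed
    scale, mimicking the Lua updateGradInput (NOT the true gradient)."""

    @staticmethod
    def forward(ctx, pred, target, scale):
        diff = pred - target
        ctx.save_for_backward(diff, scale)
        return (diff * diff).sum()

    @staticmethod
    def backward(ctx, grad_output):
        diff, scale = ctx.saved_tensors
        grad = 2.0 * diff / scale  # divide by the scale, as in the Lua code
        # patch NaN, as the Lua code does
        grad = torch.where(torch.isnan(grad), torch.zeros_like(grad), grad)
        # no gradients for target and scale
        return grad_output * grad, None, None
```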

But I couldn’t find any way to extend a criterion in PyTorch. Is it done the same way as extending `nn.Module`, with the module simply treated as a loss function? I would be really grateful if someone could help me with this. Thanks!