I'm having trouble determining whether torch.pow()'s interaction with Variable objects is a bug or a feature.
>>> import torch
>>> from torch.autograd import Variable
>>> exp = torch.Tensor([[1, 5], [1, 5]])
>>> base = 2
>>> torch.pow(base, exp)
2 32
2 32
[torch.FloatTensor of size 2x2]
As expected. But if exp is a Variable:
>>> torch.pow(base, Variable(exp))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/chase/applications/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 363, in pow
return Pow.apply(self, other)
File "/home/chase/applications/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/basic_ops.py", line 78, in forward
ctx.a_size = a.size()
AttributeError: 'int' object has no attribute 'size'
The traceback shows Pow.forward calling a.size() on the base, so a plain Python number apparently isn't handled on the Variable code path. The fix is just to call torch.pow on Variable(exp).data. Is this intended? Interestingly, torch.exp() works just fine on Variables, even though the equivalent torch.pow(np.e, Variable(exp)) fails with the same error instead of yielding the same result.
>>> torch.pow(base, Variable(exp).data)
2 32
2 32
[torch.FloatTensor of size 2x2]
>>> torch.exp(Variable(exp))
Variable containing:
2.7183 148.4132
2.7183 148.4132
[torch.FloatTensor of size 2x2]
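Since torch.exp does accept Variables, the identity base**x == exp(x * ln(base)) suggests a workaround that keeps the result inside the autograd graph. A minimal sketch, reusing exp and base from above (the math.log call is my addition, not anything from the library):
import math

# base ** x == exp(x * ln(base)); torch.exp accepts Variables, so the
# result stays attached to the computation graph
v = Variable(exp, requires_grad=True)
powed = torch.exp(v * math.log(base))  # same values as torch.pow(base, exp)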
I want to use torch.pow in a loss function (below), which means passing Variable objects in and out, and I'm not sure operating on a Variable's .data attribute is a safe hack, since .data bypasses the computation graph.
Example loss function:
def expw_mae_loss(output, target, base=10.0):
    return ((output - target).abs() * torch.pow(base, target)).sum() / output.data.nelement()
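As written, this hits the same AttributeError, since base is a plain Python float. A variant using the same exp identity as above would keep the weights inside the graph; this is an untested sketch, not something I've verified beyond the toy example:
import math

def expw_mae_loss(output, target, base=10.0):
    # rewrite base ** target as exp(target * ln(base)) so the weighting
    # stays on the Variable code path instead of erroring in Pow
    weights = torch.exp(target * math.log(base))
    return ((output - target).abs() * weights).sum() / output.data.nelement()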
Update: my current solution is to create a constant Variable, detached from the computation graph, to act as a weight:
def expw_mae_loss(output, target, base=10.0):
    weights = Variable(torch.pow(base, target.data), requires_grad=False).cuda()
    return ((output - target).abs() * weights).sum() / output.data.nelement()
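A quick sanity check of my own (assuming a CUDA device, since the loss moves the weights with .cuda()) suggests the detached weights act as constants and gradients still reach output:
output = Variable(torch.randn(2, 2).cuda(), requires_grad=True)
target = Variable(torch.Tensor([[1, 5], [1, 5]]).cuda())

loss = expw_mae_loss(output, target)
loss.backward()
print(output.grad)  # populated: the constant weights don't block gradient flow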
This appears to work, but I'd still appreciate clarity on why torch.pow and torch.exp behave differently on Variables.