Torch.pow incompatible with variable

Having some trouble determining whether torch.pow()'s interaction with Variable objects is a bug or a feature.

>>> exp = torch.Tensor([[1,5],[1,5]])
>>> base = 2
>>> torch.pow(base, exp)

  2  32
  2  32
[torch.FloatTensor of size 2x2]

As expected. If exp is a Variable:

>>> torch.pow(base, Variable(exp))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/chase/applications/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 363, in pow
    return Pow.apply(self, other)
  File "/home/chase/applications/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/basic_ops.py", line 78, in forward
    ctx.a_size = a.size()
AttributeError: 'int' object has no attribute 'size'

The fix is just to call torch.pow on Variable(exp).data. Is this intended? Interestingly, torch.exp() works just fine on Variables, even though torch.pow(np.e, exp), which should yield the same result, does not.


>>> torch.pow(base, Variable(exp).data)

  2  32
  2  32
[torch.FloatTensor of size 2x2]

>>> torch.exp(Variable(exp))
Variable containing:
   2.7183  148.4132
   2.7183  148.4132
[torch.FloatTensor of size 2x2]
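
For reference, the equivalence does hold on the plain tensor (up to float rounding); using math.e in place of np.e:

>>> import math
>>> torch.pow(math.e, exp)   # same values as torch.exp(exp) above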

I want to use torch.pow in a loss function (below), so I need to pass Variable objects in and out, and I'm not sure that operating on the Variable's .data attribute is a safe hack.

Example loss function:


def expw_mae_loss(output, target, base=10.0):
    return ((output - target).abs() * torch.pow(base, target)).sum() / output.data.nelement()
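
In plain-tensor terms this is just an MAE where each element is weighted by base**target; with made-up numbers:

>>> o = torch.Tensor([1.0, 2.0])
>>> t = torch.Tensor([0.0, 1.0])
>>> ((o - t).abs() * torch.pow(10.0, t)).sum() / o.nelement()   # (1*1 + 1*10) / 2 = 5.5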

Update: my current solution is to create a static Variable, with no computation-graph history, to act as a weight:


def expw_mae_loss(output, target, base=10.0):
    weights = Variable(torch.pow(base, target.data), requires_grad=False).cuda()
    return ((output - target).abs() * weights).sum() / output.data.nelement()
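
Usage sketch with made-up data (everything on CPU here, so the .cuda() call above would be dropped):

output = Variable(torch.randn(2, 2), requires_grad=True)
target = Variable(torch.randn(2, 2))
loss = expw_mae_loss(output, target)   # assuming the CPU variant of the function above
loss.backward()
print(output.grad)   # gradients reach output; the exponential weights are constants w.r.t. target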

It appears to work, but I would still appreciate clarity on the difference between torch.pow and torch.exp.

Hi,

I think you misread the doc for the pow function: it takes torch.pow(input, exponent) :slight_smile:
The thing is that here you're trying to use an input which is not a Tensor (or Variable), and that does not work with autograd.

I don't think so? There are two acceptable input formats, each documented individually (scroll to the bottom of the torch.pow section of the docs):

torch.pow(tensor, float) which raises each tensor element to the power of the float

and

torch.pow(float, tensor) which raises a fixed float to the power of each element in the tensor.

Both return a tensor. torch.pow(numpy.e, tensor) would be functionally identical to torch.exp(tensor), and exp works with Variables without any issues.
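
For example, on a plain tensor both forms behave as documented:

>>> t = torch.Tensor([1, 2, 3])
>>> torch.pow(t, 2)   # each element squared: 1, 4, 9
>>> torch.pow(2, t)   # 2 raised to each element: 2, 4, 8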

I actually did not know you could use it that way :slight_smile:
I confirm that this issue is fixed in master.
Does setting base = Variable(torch.Tensor([2])) fix the problem in 0.3.1?

It works inline, but using that hack inside a loss function loses the computation-graph information needed for .backward()

I added an update at the end of my original post that seems to do the trick.

If you use it like:

def expw_mae_loss(output, target, base=10.0):
    base = Variable(torch.Tensor([base])).type_as(target)
    return ((output - target).abs() * torch.pow(base, target)).sum() / output.data.nelement()

Then the gradients will flow as expected for output and target.

In your sample, I think you're going to get wrong gradients for target.
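
If it helps, here is a quick side-by-side sketch with made-up inputs (the function names are mine): the gradients w.r.t. output agree, but the gradients w.r.t. target differ, because the .data version treats the exponential weights as constants.

import torch
from torch.autograd import Variable

def loss_const_weights(output, target, base=10.0):
    # your update: weights built from target.data, so autograd sees them as constants
    weights = Variable(torch.pow(base, target.data), requires_grad=False)
    return ((output - target).abs() * weights).sum() / output.data.nelement()

def loss_variable_base(output, target, base=10.0):
    # the version above: base wrapped in a Variable so pow stays in the graph
    base = Variable(torch.Tensor([base])).type_as(target)
    return ((output - target).abs() * torch.pow(base, target)).sum() / output.data.nelement()

out_data, tgt_data = torch.randn(2, 2), torch.randn(2, 2)

o1 = Variable(out_data.clone(), requires_grad=True)
t1 = Variable(tgt_data.clone(), requires_grad=True)
loss_const_weights(o1, t1).backward()

o2 = Variable(out_data.clone(), requires_grad=True)
t2 = Variable(tgt_data.clone(), requires_grad=True)
loss_variable_base(o2, t2).backward()

print((o1.grad - o2.grad).abs().max())   # ~0: same gradient for output
print((t1.grad - t2.grad).abs().max())   # non-zero: the constant-weight version drops
                                         # the d(base**target)/d(target) term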