Tensor.sub works, Variable.sub fails with "inconsistent tensor size"

Here’s a reproduction using PyTorch 0.1.10_2. This is not an in-place operation, so I don’t understand why it fails for the Variable but not for the Tensor.

import torch
from torch.autograd import Variable
x = torch.rand(1, 2, 3, 4)
# This succeeds.
x.sub(x.max())
v = Variable(x, volatile=True)
# This fails with "RuntimeError: inconsistent tensor size at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:827"
v.sub(v.max())

The output of v.max() is a Variable of size 1, while x.max() is a Python scalar.

That raises the question of whether this is the correct behavior. Is it?
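The failure comes down to how the old (pre-broadcasting) elementwise kernels treated their arguments: a true Python scalar is applied to every element, while a tensor argument must match sizes exactly, so a 1-element result from max() is rejected. A minimal pure-Python sketch of that distinction (hypothetical illustration, not the actual TH code):

```python
def sub(a, b):
    """Elementwise a - b with no broadcasting, mimicking the old TH semantics."""
    if isinstance(b, (int, float)):
        # A true scalar is subtracted from every element.
        return [x - b for x in a]
    if len(a) != len(b):
        # A tensor argument must match sizes exactly -- a 1-element
        # "tensor" (like the Variable returned by v.max()) is rejected.
        raise RuntimeError("inconsistent tensor size")
    return [x - y for x, y in zip(a, b)]

x = [1.0, 2.0, 3.0]
print(sub(x, 3.0))    # scalar works, like x.sub(x.max())
try:
    sub(x, [3.0])     # size-1 argument fails, like v.sub(v.max())
except RuntimeError as e:
    print(e)
```

This is why the Tensor path (which gets a scalar from x.max()) succeeds while the Variable path (which gets a size-1 Variable from v.max()) hits the size check.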

You could do: v.sub(v.max().data[0]).
We are thinking about introducing an autograd.Scalar type to handle this better, rather than returning scalars in autograd as 1-dimensional Tensors with a single element.

I like the sound of that. Keeping the semantics of Tensor and Variable nearly identical would satisfy the principle of least surprise.