NumPy, addition, inconsistencies

np.float32(1.)+Tensor([1.])

works as expected (returns a Tensor)

But:

Tensor([1.])+np.float32(1.)

fails with

TypeError: add received an invalid combination of arguments - got (numpy.float32)

And:

np.float32(1.)+Variable(Tensor([1.]))

returns a very strange numpy array:

array([[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
 2
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], dtype=object)

And:

Variable(Tensor([1.]))+np.float32(1.)

fails with

TypeError: add received an invalid combination of arguments - got (numpy.float32)

I expected to get a Variable when adding a float32 to a Variable, and a Tensor when adding a float32 to a Tensor. Is this a bug?

We do not support mixed-type addition (np/torch or torch/np).
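If you need this today, one workaround is to convert the NumPy scalar to a plain Python float explicitly, so both operand orders stay inside torch's own add. A minimal sketch (using the `torch.tensor` constructor; adjust for your version):

```python
import numpy as np
import torch

t = torch.tensor([1.])  # stands in for Tensor([1.]) above
s = np.float32(1.)

# Converting the NumPy scalar to a Python float sidesteps the
# mixed np/torch dispatch, in either operand order:
left = float(s) + t
right = t + float(s)
print(left, right)
```

Both results are torch tensors holding 2.0, with no NumPy object arrays involved.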

In the 1st case, np treats the Tensor as an iterable, and it kind of magically worked out.
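A rough way to see what NumPy is doing in that 1st case: the scalar's add coerces the right-hand operand as a sequence (via `np.asarray`) and then broadcasts over it. Illustrated here with a plain Python list standing in for the Tensor:

```python
import numpy as np

# np.float32(1.) + <sequence> coerces the sequence to an array,
# then broadcasts the scalar over it element-wise:
out = np.float32(1.) + np.asarray([1.])
print(out)  # a 1-element array holding 2.0
```

So the "Tensor" in the first example never reaches torch at all; NumPy consumes it as a generic sequence, which is why the result only looks right by accident.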

In the 3rd case, np treats the Variable as an iterable, and a Variable’s x[1] = x, so you get this weird recursive indexing. There’s not much we can do to fix it on the PyTorch side, but we can introduce an autograd.Scalar (which we are planning to do), and then a proper error message will be generated.