Odd behavior multiplying a numpy float with a torch.Variable

Can anyone explain what is going on here?

import torch
from torch.autograd import Variable
import numpy as np

x = Variable(torch.Tensor([3.0]), requires_grad=True)  # 1-element Variable
b = np.float32(3)                                      # numpy scalar
b * x                                                  # numpy scalar on the left

Returns

Out[11]: 
array([[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[[Variable containing:
9
[torch.FloatTensor of size 1]
]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]], dtype=object)

Casting b to a normal float on the other hand gives the expected result.
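For the record, a minimal sketch of that workaround (same setup as the snippet above):

import torch
from torch.autograd import Variable
import numpy as np

x = Variable(torch.Tensor([3.0]), requires_grad=True)
b = np.float32(3)

# A plain Python float declines the multiplication, so Python falls
# back to the Variable's __rmul__ and PyTorch handles it:
float(b) * x  # Variable containing 9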


Unfortunately, this is a problem with how numpy handles multiplication: because you write b*x, it is numpy's implementation of * that gets used. If you try x*b instead, it should fail with a nice error message from PyTorch.

Actually, x*b works fine. Quite a trap, though.
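To confirm, with the same x and b as in the question:

# With the Variable on the left, its __mul__ runs first and knows how
# to handle the numpy scalar, so the result is a normal Variable:
x * b  # Variable containing 9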

Unfortunately there is nothing we can do here: because of the way Python handles the * operator, if the left operand is a numpy object, numpy's functions are used, and they are not aware of PyTorch tensors :confused:
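For intuition, here is a minimal sketch of Python's dispatch rule, using two toy classes invented for illustration (Greedy stands in for a numpy scalar, Polite for a Variable): Python only consults the right operand's __rmul__ when the left operand's __mul__ returns NotImplemented, and numpy's scalar __mul__ never declines here.

class Greedy:
    # Like a numpy scalar: claims it can multiply with anything, so
    # Python never falls back to the right operand's __rmul__.
    def __mul__(self, other):
        return "Greedy.__mul__ handled it"

class Polite:
    # Like a Variable: only gets a chance via __rmul__, and only when
    # the left operand returns NotImplemented.
    def __rmul__(self, other):
        return "Polite.__rmul__ handled it"

print(Greedy() * Polite())  # Greedy.__mul__ handled it -- __rmul__ never runs
print(3 * Polite())         # Polite.__rmul__ handled it -- int declines, Python falls back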

Fair enough; it may be worth a mention somewhere in the docs. Many thanks for clarifying!

The problem also only occurs when you multiply a numpy float with a Variable; plain tensors are actually fine.
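For completeness, the difference can be reproduced with the same setup as the original snippet:

import torch
from torch.autograd import Variable
import numpy as np

b = np.float32(3)

t = torch.Tensor([3.0])
b * t  # plain tensor: a FloatTensor containing 9, as expected

v = Variable(torch.Tensor([3.0]), requires_grad=True)
b * v  # Variable: the nested object array shown above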