Strange casting: multiplying a torch int64 tensor by a Python float

Basically, this code does not perform the usual type promotion.

import torch
a = torch.tensor(4)
print(a.dtype)  # torch.int64
b = 0.5
c = b * a
print(c.dtype)  # also torch.int64
print(c)        # tensor(0)

The casting is done differently from what normal compiled C code would do. In fact, if b is a torch.FloatTensor, everything works correctly. Why this behaviour?
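For comparison, ordinary Python arithmetic (like C's usual arithmetic conversions) promotes the integer operand to float rather than truncating the float:

```python
# Plain Python number semantics: the int operand is promoted
# to float, so no information is lost in the multiplication.
a = 4          # int
b = 0.5        # float
c = b * a
print(type(c).__name__)  # float
print(c)                 # 2.0
```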

The behavior is inherited from Lua Torch. In an operation involving a Python number and a Tensor, the number is cast to the Tensor's dtype (which truncates 0.5 to 0 in this case).
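A minimal sketch of the usual workaround, reusing the names from the question's snippet: cast the tensor to a floating dtype before the multiplication, so the Python float is not truncated to the tensor's integer dtype.

```python
import torch

a = torch.tensor(4)   # dtype: torch.int64
b = 0.5

# Cast the tensor to float32 first; the scalar then stays 0.5
# instead of being truncated to 0.
c = b * a.float()
print(c.dtype)        # torch.float32
print(c)              # tensor(2.)
```

This works the same way regardless of whether the scalar-to-tensor promotion behavior described above has changed in the installed version.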

We plan to change the behavior in the future. It’s tracked here:
