Inconsistent behaviour in tensor division

In PyTorch, if both tensors are long, an integer division is performed:

torch.tensor([242240, 226320, 186240, 171840, 165680]) / torch.tensor(694)
Out : tensor([349, 326, 268, 247, 238])

I need a regular division, so I tried converting the second tensor to float, but it yields the same result:

torch.tensor([242240, 226320, 186240, 171840, 165680]) / torch.tensor(694).float()
Out : tensor([349, 326, 268, 247, 238])
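In the meantime, one workaround (a minimal sketch, assuming a standard PyTorch install) is to cast the numerator itself to float rather than the divisor, which guarantees a floating-point result:

```python
import torch

a = torch.tensor([242240, 226320, 186240, 171840, 165680])
b = torch.tensor(694)

# Casting the numerator (not just the divisor) forces a float result,
# regardless of how `/` handles mixed integer/float operands.
result = a.float() / b
print(result.dtype)  # torch.float32
```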

However, if the first tensor is 0-dimensional (a scalar tensor), this works:

torch.tensor(10) / torch.tensor(10).float()
Out : tensor(1.)

This seems really weird. Any thoughts?

The two cases are not the same:

import torch

a=torch.tensor(10) / torch.tensor(10).float()
b=torch.tensor([10]) / torch.tensor(10).float()
print(a, 'Dimensions: %s' % a.ndimension(), 'Type %s' % a.type())
print(b, 'Dimensions: %s' % b.ndimension(), 'Type %s' % b.type())
tensor(1.) Dimensions: 0  Type torch.FloatTensor
tensor([1]) Dimensions: 1  Type torch.LongTensor

That’s why the results differ.

I agree with you that the number of dimensions is different. That doesn’t explain why the type is different though.

The operation behaves differently depending on whether the input is a plain number or an array. I edited my previous reply.

The short answer is that these kinds of operations depend on dimensionality: 0-dimensional tensors are not treated the same as 1-dimensional tensors with a single element.
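For what it’s worth, this inconsistency was later resolved upstream: in PyTorch 1.5 and newer, / always performs true (floating-point) division, and the old integer-style behaviour has to be requested explicitly. A quick sketch (assuming PyTorch ≥ 1.8, which added the rounding_mode argument to torch.div):

```python
import torch

a = torch.tensor([242240, 226320, 186240, 171840, 165680])
b = torch.tensor(694)

# True division: the result is floating point even for two integer tensors.
print((a / b).dtype)  # torch.float32

# Explicit floor division, reproducing the old integer `/` behaviour.
print(torch.div(a, b, rounding_mode='floor'))  # tensor([349, 326, 268, 247, 238])
```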