Why does PyTorch behave differently when doing floating-point division with different types of divisors?

import torch

a1 = torch.tensor(858.8350830078125, dtype=torch.float32).cuda()/100.0

a2 = torch.tensor(858.8350830078125, dtype=torch.float32).cuda()/torch.tensor(100.0).cuda()

print(a1 - a2)
# out: tensor(-9.5367e-07, device='cuda:0')

python: 3.8.10
pytorch: 1.8.1+cu111

The difference of ~1e-6 is within the numerical accuracy of float32 at the size of the result: the quotient is ~8.59, and neighbouring float32 values at that magnitude are about 9.54e-07 apart, so a discrepancy of this size is just normal floating-point rounding.
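
As a quick illustration of that spacing (my own check, not part of the original observation), the printed gap is exactly one float32 step in the range [8, 16):

import torch

# float32 eps is the spacing at 1.0; at ~8.59 the spacing is 8x larger,
# i.e. 2**-20, which matches the printed difference of -9.5367e-07.
eps = torch.finfo(torch.float32).eps   # 2**-23 ~= 1.1920929e-07
print(8 * eps)                          # 9.5367431640625e-07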
Dividing by a Python scalar will dispatch to a different kernel than dividing by a CUDA tensor. This is speculation, but one difference between the two paths is that in the first case the 100 is passed around as fp64 before being converted to fp32 on the GPU; maybe that introduces a rounding difference.
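
One way to probe that second point, purely as an illustrative sketch on the CPU (the actual CUDA kernels may still do something different), is to emulate the fp64 round trip explicitly and compare it with plain float32 division:

import torch

x32 = torch.tensor(858.8350830078125, dtype=torch.float32)

# Plain float32 division for reference.
ref32 = x32 / torch.tensor(100.0, dtype=torch.float32)

# Emulate the speculated path: do the division in float64,
# then round the result back down to float32.
via64 = (x32.double() / 100.0).float()

print(ref32.item(), via64.item(), (ref32 - via64).item())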

Best regards

Thomas
