.item() gives different value than the tensor itself

Not sure if that’s mixed-precision related, but here is what happened:

import torch

a = torch.tensor([1.1])
print(a.item())         # 1.100000023841858
print(a <= 1.1)         # tensor([True])
print(a.item() <= 1.1)  # False

I’m a bit confused here…

my env ==>
pytorch = 1.6.0
python = 3.7.6

The outputs differ only in the print precision used, and you will see the same values with:

a = torch.tensor([1.1])
print(a.item())
> 1.100000023841858
print(a)
> tensor([1.1000])

torch.set_printoptions(precision=15)
print(a)
> tensor([1.100000023841858])

so it’s unrelated to mixed-precision training.

I think the comparison using the tensor should yield True, as the right-hand side is transformed to a tensor first (and thus rounded to the same float32 value), making both sides equal.

The second comparison happens in plain Python, which uses float64 values, so 1.1 is represented more precisely there than its float32 counterpart, and the comparison fails.
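
To make this concrete, here is a minimal sketch of both comparisons (the variable names are just for illustration):

import torch

a = torch.tensor([1.1])      # 1.1 is rounded to the nearest float32
rhs = torch.tensor(1.1)      # the Python float 1.1 is rounded the same way
print((a <= rhs).item())     # True: both sides hold the identical float32 value
print(a.item())              # 1.100000023841858
print(a.item() <= 1.1)       # False: the float64 1.1 (~1.1000000000000001) is
                             # smaller than the float32 value (~1.1000000238)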

Thank you for the comment.

I agree about the printing, but why the tensor internally stores a different value than it should still confuses me. As your code shows, printing the tensor a with higher precision displays tensor([1.100000023841858]) instead of tensor([1.1]).

Also, I’m curious why 1.1 gets stored as exactly 1.100000023841858, and why it’s the same in your environment as well.

That’s due to the limited floating point representation using 32 bits: 1.1 has no exact binary representation, so it’s rounded to the nearest representable float32 value.
The Single-precision floating-point format Wikipedia article explains it quite well.
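
For illustration, you can reproduce the rounding without PyTorch via Python’s struct module (a minimal sketch; the bit grouping in the comment is just for readability):

import struct

# Round the Python float (float64) 1.1 to float32 and back:
f32 = struct.unpack('f', struct.pack('f', 1.1))[0]
print(f32)             # 1.100000023841858

# The underlying 32 bits: 1 sign bit, 8 exponent bits, 23 mantissa bits
bits = struct.unpack('I', struct.pack('f', 1.1))[0]
print(f'{bits:032b}')  # 0 01111111 00011001100110011001101 (printed without spaces)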

Thank you! All clear now.