Strange behavior of tensor.floor()

Hi, I am using the floor function and found this strange behavior:

a = torch.tensor(1 - 5.9605e-8)
a.floor()
=> tensor(0.)
(a + 1).floor()
=> tensor(2.)

How is floor() computed, and why doesn't (a + 1).floor() output 1.0?

Hello An!

I haven’t checked the arithmetic precisely, but I believe the following
is going on:

A torch.tensor uses single-precision (32-bit) floating-point
numbers by default. These have approximately 7 decimal digits of
precision. Your small number, 5.9e-8, is (relative to 1) right on
the edge of this precision.
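
(As a quick check – just a sketch using torch.finfo – you can see that
5.96e-8 is half of the float32 machine epsilon, i.e. the spacing of
float32 values just below 1:)

import torch

# machine epsilon: spacing between 1.0 and the next larger float32, about 1.19e-7
print(torch.finfo(torch.float32).eps)
# half of that, about 5.96e-8 (i.e. 2**-24), is the spacing just *below* 1.0
print(torch.finfo(torch.float32).eps / 2)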

So it turns out that 1 - 5.9e-8, when represented by a 32-bit
floating-point number, is, indeed, not equal to (and a little bit less
than) 1, so floor() takes it down to 0. But (1 - 5.9e-8) + 1,
when represented by a 32-bit floating-point number, is equal
to 2, so floor() leaves it at 2.
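
You can check both roundings directly (just a sketch, using the
default float32 dtype; outputs shown as comments):

import torch

a = torch.tensor(1 - 5.9605e-8)   # float32 by default
print(a < 1.0)                    # tensor(True)  -- a really is a bit less than 1, so a.floor() is 0
print((a + 1) == 2.0)             # tensor(True)  -- a + 1 rounds to exactly 2, so (a + 1).floor() is 2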

Redo your experiment with

a = torch.tensor(1 - 5.9605e-8, dtype=torch.float64)

and you should see your expected results. (That is, when
represented by a 64-bit floating-point number, (1 - 5.9e-8) + 1
will, indeed, not be equal to (and will be a little bit less than) 2,
so floor() will take it down to 1, rather than leaving it at 2.)
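
For concreteness, the float64 run should look something like this
(a sketch; outputs shown as comments):

import torch

a = torch.tensor(1 - 5.9605e-8, dtype=torch.float64)
print(a.floor())         # tensor(0., dtype=torch.float64)
print((a + 1).floor())   # tensor(1., dtype=torch.float64) -- 1, as expected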

Again, I haven’t done the arithmetic exactly, but if you redo the
float64 experiment with various values of your small number around
1.e-16, you should be able to find a value where your
“strange behavior” shows up again – just pushed down to smaller
numbers by the increased precision of float64 relative to float32.
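
For example (a sketch – the exact cutoff depends on the rounding, but
a value around 2**-53, say 1.1e-16, should reproduce it):

import torch

a = torch.tensor(1 - 1.1e-16, dtype=torch.float64)  # 1.1e-16 is roughly the float64 spacing just below 1
print(a.floor())         # tensor(0., dtype=torch.float64) -- a is still a bit less than 1
print((a + 1).floor())   # tensor(2., dtype=torch.float64) -- a + 1 rounds up to exactly 2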

Best.

K. Frank

Thank you! It works now.