```
a=torch.tensor([2.5])
b=(a-2.4)/0.1
print(b)
print(b.long())
```

outputs:

```
tensor([1.0000])
tensor([0])
```

It seems `.long()` has a numeric precision problem.

Any ideas?


No – that’s just how floating-point arithmetic works. `print(b)` rounds to 4 decimal places. If you print `b.item()` you’ll see `0.999999046325683`, which `.long()` truncates down to zero.

See https://www.exploringbinary.com/why-0-point-1-does-not-exist-in-floating-point/ for more explanation.
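A minimal sketch of the behavior and a common workaround — rounding with `torch.round` before casting (the exact decimal digits shown may vary slightly by platform):

```python
import torch

a = torch.tensor([2.5])
b = (a - 2.4) / 0.1

# The default tensor print rounds to 4 decimal places; .item() shows more digits.
print(b.item())  # slightly below 1.0, e.g. 0.99999904...

# .long() truncates toward zero, so a value just under 1.0 becomes 0.
print(b.long())  # tensor([0])

# Rounding to the nearest integer before casting avoids the surprise.
print(torch.round(b).long())  # tensor([1])
```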
