Why does the tensor value change in floating point?

As the title says, for example:

>>> torch.tensor(0.83257929)
tensor(0.83257931)

Why does the value change, and how can I stop this behaviour?

Hello Mainul!

This is due to floating-point round-off error, and it can't be avoided.
PyTorch tensors default to float32, which carries only about 7 decimal
digits of precision, so 0.83257929 cannot be represented exactly; the
nearest representable single-precision value, 0.83257931, is stored
instead. If you need greater precision, create your tensor with
dtype = torch.double. Note that with double precision you will
still have round-off error, just less of it.
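
For illustration, here is a minimal sketch of the difference between the
two dtypes (the printed values assume print precision has been raised to
8 digits, since the default display rounds to 4):

import torch

torch.set_printoptions(precision=8)

# Default dtype is float32: 0.83257929 is rounded to the nearest
# representable single-precision value.
x = torch.tensor(0.83257929)
print(x)  # tensor(0.83257931)

# float64 keeps roughly 16 decimal digits, so the literal survives here,
# though round-off error still occurs, just at a much smaller scale.
y = torch.tensor(0.83257929, dtype=torch.double)
print(y)  # tensor(0.83257929, dtype=torch.float64)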

Best.

K. Frank