Why does a tensor's value change in floating point?

As the title says: a value I store in a tensor comes back slightly changed.

Why does the value change, and how can I stop this behaviour?
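(The original example did not survive in the post, so the snippet below is an assumed reconstruction of the kind of behaviour being asked about.)

```python
import torch

# 0.1 has no exact binary representation, so a float32 tensor
# stores only the nearest representable value
x = torch.tensor(0.1)   # default dtype is torch.float32
print(x.item())         # 0.10000000149011612, not 0.1
```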

Hello Mainul!

This is due to floating-point round-off error, and it can't be avoided.
If you need greater precision, create your tensor with
dtype = torch.double. Note that with double precision you will
still have round-off error, just less of it.
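A minimal sketch of the comparison (values shown are what these literals round to in each precision):

```python
import torch

x32 = torch.tensor(0.1, dtype=torch.float32)
x64 = torch.tensor(0.1, dtype=torch.double)  # alias of torch.float64

print(x32.item())  # 0.10000000149011612 -- round-off visible at ~1e-9
print(x64.item())  # 0.1 -- still inexact internally, but the error is ~1e-17
```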


K. Frank