Is there any way to set precision to exact values instead of floats?

I am trying to convert a program way back from pytorch 0.3 to 1.3.1. I have it very close to doing the same thing when I debug batch by batch, but over time the floating-point differences make it impossible to tell where my actual functions differ and where it is just floating-point error. I know there is double() to raise the precision, but that just changes where the floating-point errors happen. Is there any mode that still allows decimal points but gives exact numbers, where the rounding differences just get cut off? Less precision would be fine for this, as long as the mode exists in both pytorch 0.3 and 1.3.1.

Hi Py!

No.

There is neither a built-in way to do this, nor any practical way to
do it “by hand.”
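
To see why, here is a quick illustration (plain Python, plus a
tensor example using the post-0.4 torch.tensor API): exact rational
arithmetic is possible on scalars with the standard-library
fractions module, but binary floats – and therefore tensors –
cannot even represent 0.1 exactly, so round-off appears on the
very first operation. Pytorch has no rational or decimal dtype,
so tensor math inherits this.

```python
from fractions import Fraction

import torch

# Exact rational arithmetic is possible in plain Python ...
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True: exact

# ... but binary floats cannot represent 0.1 exactly, so round-off
# shows up immediately:
print(0.1 + 0.2 == 0.3)         # False
print("%.20f" % (0.1 + 0.2))    # 0.30000000000000004441

# Tensors only have binary float dtypes, so they behave the same way:
t = torch.tensor(0.1, dtype=torch.float64) + torch.tensor(0.2, dtype=torch.float64)
print(t.item() == 0.3)          # False, for the same reason
```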

But you speak as if you are convinced that your “actual functions
are different,” even though the computational evidence you cited
in your earlier thread shows that your “actual functions” agree up
to floating-point round-off, which is as much as you can expect.
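
For what it's worth, "agree up to round-off" is something you can
check mechanically with torch.allclose. A minimal sketch, where
old_fn and new_fn are hypothetical stand-ins for your two versions
of the same computation:

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-ins for the two implementations being compared.
# Mathematically identical; the operations just run in a different order.
def old_fn(x):
    return (x * 3.0).sum(dim=1)

def new_fn(x):
    return x.sum(dim=1) * 3.0

x = torch.randn(4, 1000)
out_old = old_fn(x)
out_new = new_fn(x)

# Bitwise equality typically fails across op orderings (or versions) ...
print(torch.equal(out_old, out_new))                # often False
# ... but agreement within float32 round-off is all you can expect:
print(torch.allclose(out_old, out_new, rtol=1e-5))  # True
print((out_old - out_new).abs().max())              # tiny, e.g. ~1e-5
```

If a check like this passes for your intermediate results after one
batch, the two versions match as well as floating point allows.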

Floating-point round-off is real, and you can’t get rid of it. But, in
general, it’s not a problem. After one iteration, your two versions
agree up to floating-point round-off. After a couple more iterations,
the floating-point round-off accumulates some. And after a few
more iterations, it accumulates to the point that your optimizer’s
trajectory in parameter space starts to wander off along two
different paths. If you had instead run the same version twice,
initializing your model with slightly different random weights,
your optimizer's trajectory would likewise wander off along two
different paths.
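
You can watch this effect in isolation with a toy iteration (a
logistic map here, standing in for an optimizer update, so none of
this is your actual training loop): start two "trajectories" one
float32 epsilon apart, and the gap grows every step until the two
are completely unrelated.

```python
import torch

# Two starting values that differ only by float32 round-off
# (one machine epsilon; hard-code 1.19e-7 on versions without torch.finfo).
a = torch.tensor(0.5)
b = torch.tensor(0.5) + torch.finfo(torch.float32).eps

# A toy chaotic update (logistic map) standing in for an optimizer step:
# the one-epsilon gap grows a little on every iteration.
for step in range(51):
    a = 3.9 * a * (1.0 - a)
    b = 3.9 * b * (1.0 - b)
    if step % 10 == 0:
        print(step, abs((a - b).item()))
# The printed gap climbs from ~1e-7 to order one: the two runs end up
# in completely different places, with neither of them "wrong."
```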

The loss surface in parameter space has a bunch of very
complicated, high-dimensional peaks and valleys, and very
many – for practical purposes equivalent – approximate minima.
So it’s no surprise that slightly different initial conditions – or
slightly different round-off error – will set you off along different
paths to different – but, in practice, equally good – approximate
minima.

Without evidence that something is actually wrong, I think you’re
seeing problems where there are none.

Best.

K. Frank

Hey K. Frank!

Thanks for responding again! If you say there’s not a way to do this, I’ll mark this thread as solved and move back to the other one: How to debug with floating point differences (if anyone else is reading this later).