How does one start using double without unexpected bugs?

Related to this, I get errors in my regression that are close to machine precision (~1e-7). As I tried things out, it seemed machine precision issues might be a lot more subtle than I expected, so I want to make sure: is changing all the parameters in my model to double sufficient so that EVERYTHING is now computed in double and these errors go away?

(By the way, see that post for more detailed comments on these issues if you're curious.)
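
For context, ~1e-7 is right at float32 machine epsilon, which is why I suspect single precision rather than a real bug in my code; a quick comparison of the two epsilons in PyTorch (just for illustration):

```python
import torch

# Machine epsilon for the two floating-point dtypes: errors around 1e-7
# sit right at the resolution limit of float32, while float64 resolves
# down to roughly 2.2e-16.
print(torch.finfo(torch.float32).eps)  # ~1.1921e-07
print(torch.finfo(torch.float64).eps)  # ~2.2204e-16
```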

In short, what I want is to:

  • change my code so that everything uses double (see the sketch after this list)
  • be confident that there are no unexpected (silent) bugs (e.g. I know that if CPU tensors are used where GPU tensors are expected, PyTorch throws an error, but I want to make sure there isn't a bug I could be missing that only shows up if I change just the model to double)
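
Roughly, this is the kind of change I have in mind (the tiny Sequential model and the random input are just stand-ins for my actual model and data):

```python
import torch
import torch.nn as nn

# Toy stand-in for my real model, just to illustrate the conversion.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# .double() casts all parameters and buffers to float64.
model = model.double()

# Inputs have to be float64 too; if I forget, PyTorch raises a dtype
# mismatch error rather than silently downcasting.
x = torch.randn(16, 4, dtype=torch.float64)
out = model(x)

# Sanity checks that nothing was left in float32.
assert all(p.dtype == torch.float64 for p in model.parameters())
assert out.dtype == torch.float64

# Alternative: make float64 the default dtype for newly created
# floating-point tensors (and for modules constructed afterwards).
torch.set_default_dtype(torch.float64)
```

Is that enough to guarantee everything runs in double, or can float32 still sneak back in somewhere?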