Floating point precision
Floating point precision limits the number of digits you can accurately rely on.
torch.float32 carries roughly 6–7 significant decimal digits regardless of exponent, so relative differences smaller than about 1e-6 are expected rounding noise rather than real discrepancies. Similarly,
torch.float64 carries roughly 15–16 significant digits, and relative differences below about 1e-15 should not be trusted.
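The digit budget above can be demonstrated without torch at all, since the dtypes are plain IEEE-754 formats; this sketch simulates float32 by round-tripping a Python float (which is float64) through a packed 32-bit representation:

```python
import struct

def to_float32(x: float) -> float:
    # Round-trip through IEEE-754 binary32 to simulate torch.float32 precision
    return struct.unpack("f", struct.pack("f", x))[0]

# float32 keeps ~6-7 significant digits: a difference of 1e-8 on a value
# of 1.0 is below its resolution and vanishes entirely
assert to_float32(1.0 + 1e-8) == 1.0

# float64 (Python's native float) keeps ~15-16 digits: the same game
# with a 1e-16 perturbation on 1.0 also loses the difference
assert 1.0 + 1e-16 == 1.0

# The budget is relative, not absolute: 1e-8 on its own is perfectly
# representable in float32
assert to_float32(1e-8) != 0.0
```

The last assertion is the reason the limits are phrased relative to the magnitude of the values being compared: small numbers are fine, small differences between much larger numbers are not.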
Non-determinism and reproducibility
Non-determinism is expected behavior, but there are measures to control it. For more information, see the Reproducibility notes in the PyTorch documentation (Reproducibility — PyTorch 1.7.1 documentation).
One of the first checks is to ensure that the variables you are comparing are indeed supposed to be equal: reproduce the results on the same platform, device, and release version of PyTorch.
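Even when two results are "supposed" to be equal, exact comparison can fail on harmless rounding noise; a dtype-aware tolerance check via torch.allclose is usually the right tool (the tolerance values below are illustrative, not prescriptive):

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = x + 1e-7  # a discrepancy at the edge of float32 resolution

# Bitwise-exact comparison flags the rounding noise as a mismatch
exact = torch.equal(x, y)

# allclose with tolerances appropriate for float32 accepts it:
# |x - y| <= atol + rtol * |y| elementwise
close = torch.allclose(x, y, rtol=1e-5, atol=1e-8)
print(exact, close)
```

For float64 tensors the tolerances can be tightened by several orders of magnitude before rounding noise starts tripping the check.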
torch.set_deterministic(True) makes PyTorch use the deterministic version of an algorithm where one is available (check the documentation for the list of operations that can run deterministically); in later releases this API was renamed to torch.use_deterministic_algorithms(True). Keep in mind that determinism usually comes at a performance cost.
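Alongside the deterministic-algorithms flag, reproducing a run also requires seeding every RNG the code touches. A minimal sketch, assuming NumPy and Python's random module are also in use (the helper name seed_everything is our own, not a PyTorch API):

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    # Seed every RNG that typical PyTorch code draws from
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # seeds CPU and all CUDA devices

seed_everything(0)
a = torch.rand(3)
seed_everything(0)
b = torch.rand(3)
assert torch.equal(a, b)  # identical draws after re-seeding
```

Seeding alone does not cover non-deterministic kernels (e.g. some CUDA ops); that is what the deterministic-algorithms switch above is for.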