What is the machine precision of PyTorch on CPUs (or GPUs)?

fyi, if you want to see the data type of your tensor, you can access its dtype attribute (assuming PyTorch versions don't change this, etc.):

import torch

# a freshly created floating-point tensor uses PyTorch's default dtype
x = torch.randn(3)

print(x)
print(x.dtype)

output:

tensor([-0.8643, -0.6282,  1.3406])
torch.float32
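
fyi if float32 is not enough, you can request double precision explicitly; a minimal sketch using the standard dtype= argument and torch.set_default_dtype:

import torch

# request double precision for one tensor
y = torch.randn(3, dtype=torch.float64)
print(y.dtype)  # torch.float64

# or make float64 the default for all new floating-point tensors
torch.set_default_dtype(torch.float64)
print(torch.randn(3).dtype)  # torch.float64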

fyi, recall the definition of machine precision:

Machine precision is the smallest number ε such that the difference between 1 and 1 + ε is nonzero, i.e., it is the smallest difference between two numbers that the computer recognizes. For IEEE floating point, single precision has ε = 2^-23 (approximately 10^-7) while double precision has ε = 2^-52 (approximately 10^-16).
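
In PyTorch you can query these values directly with torch.finfo (a standard API), so you don't have to rely on the quoted constants:

import torch

# machine epsilon for PyTorch's floating-point dtypes
print(torch.finfo(torch.float32).eps)  # ~1.19e-07 (2^-23)
print(torch.finfo(torch.float64).eps)  # ~2.22e-16 (2^-52)
print(torch.finfo(torch.float16).eps)  # ~9.77e-04 (2^-10)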

I am trying to figure out whether the precision I have is enough to resolve my current error, which is 2.00e-7.
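
Assuming the error is measured relative to values of order 1.0 (an assumption on my part; eps scales with the magnitude of the numbers involved), 2.00e-7 sits just above the float32 epsilon of ~1.19e-7, so it is only barely resolvable; a quick check:

import torch

one = torch.tensor(1.0, dtype=torch.float32)

# 2.00e-7 is just above float32 eps (~1.19e-7), so it survives rounding
print(one + 2.00e-7 == one)  # tensor(False)

# anything below eps/2 is swallowed entirely at this magnitude
print(one + 5.0e-8 == one)   # tensor(True)

If you need headroom below that, switching to torch.float64 gives eps ~2.22e-16.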