Decimal places with PyTorch

Hello! Here is part of a longer piece of code:

import numpy as np
import torch

# Move the independent variables to the GPU as float32 tensors
factors = torch.from_numpy(variables).cuda().float()

# Same for the dependent variable
product = torch.from_numpy(f_dependent).cuda().float()

# Replace the first column with the product of the first two columns
data_translated = factors.clone()
data_translated[:, 0] = factors[:, 0] * factors[:, 1]

# NumPy functions need CPU arrays, so move the tensors off the GPU first
data_translated = np.delete(data_translated.cpu().numpy(), 1, axis=1)
data_translated = np.column_stack((data_translated, product.cpu().numpy()))

np.savetxt("data.txt", data_translated)

For example, the first entry of factors[:,0] is 1.6893829269675686 and the first entry of factors[:,1] is 1.0080762023262633. If I multiply them with a calculator (or plain Python) I get 1.7030267252922935, but in the data.txt file the number is 1.703026652336120605e+00. The two values diverge from the 7th decimal place on, and that really matters for the purpose of my code. The other numbers (which I don’t touch at all) also change from the 6th-7th decimal place on. Can someone tell me why this is happening and how I can fix it? Thank you!

Hi,

The problem is floating-point precision: a float32 carries only about 6-7 significant decimal digits, which is exactly where your values start to diverge. If you need more precision, you need to use float64 (called double in PyTorch).
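For illustration, here is a minimal check in plain Python (the two constants are the values from your post) showing that float32 rounding reproduces the number in data.txt:

import numpy as np

a = 1.6893829269675686
b = 1.0080762023262633

# Full float64 product, as a calculator or plain Python computes it
print(a * b)  # 1.7030267252922935

# The same product with the inputs rounded to float32 first, widened back
# to float64 so every digit is visible -- this should match data.txt
print(float(np.float32(a) * np.float32(b)))  # ~1.7030266523361206

In your snippet the rounding happens at the .float() calls, so the fix is to keep the whole pipeline in float64, e.g. factors = torch.from_numpy(variables).cuda().double().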

Oh I see, thank you! However, does a NN in PyTorch work with doubles (float64)? I thought it needed float32. I guess I could do the conversions back and forth twice, but is there an easier way?

All the operations support doubles. It will be a bit slower, though.
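For example, here is a minimal sketch (the tiny network is made up purely for illustration) of running a model in float64: calling .double() on a module converts all of its parameters and buffers, and the inputs just need the matching dtype.

import torch
import torch.nn as nn

# A made-up tiny network, only to illustrate double precision
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))

# Convert every parameter and buffer to float64
model = model.double()

# Inputs must use the matching dtype
x = torch.randn(4, 2, dtype=torch.float64)
out = model(x)
print(out.dtype)  # torch.float64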