Why is PyTorch tensor addition giving the wrong value in the following situation?

In a small code snippet in a project, I have a for loop which iterates through 3 tensor values, A = torch.tensor([-3., 6., -3]).

det = torch.tensor(0)
for i in range(3):
    det += A[i]
return det

I'm getting the answer det = 2.3842e-07. All the values in A and det are of type torch.float64.
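For what it's worth, 2.3842e-07 is about 2^-22, a typical rounding residue, which suggests the values being summed were produced by earlier floating-point computation and are not exactly -3 and 6. A minimal pure-Python sketch of the same effect (Python floats are IEEE-754 doubles, the same arithmetic torch.float64 uses):

```python
# Accumulating values that are not exactly representable in binary
# leaves a tiny nonzero residue instead of an exact 0.
values = [0.1, 0.2, -0.3]  # none of these has an exact binary representation

acc = 0.0
for v in values:
    acc += v

print(acc)         # 5.551115123125783e-17, not 0.0
print(acc == 0.0)  # False
```

The mathematically exact sum is 0, but each intermediate result is rounded to the nearest representable double, and those rounding errors survive in the total.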

On which platform are you testing? I cannot reproduce with:

import torch

def foo(A, acc_dtype):
    det = torch.tensor(0, dtype=acc_dtype)
    for i in range(3):
        det += A[i]
    return det

A = torch.tensor([-3., 6., -3])
print(foo(A, acc_dtype=torch.float))
print(foo(A, acc_dtype=torch.double))
A = torch.tensor([-3., 6., -3], dtype=torch.double)
print(foo(A, acc_dtype=torch.float))
print(foo(A, acc_dtype=torch.double))

Same here. I get an error of 0 in 1.11.0.dev20211101+cu113.

I tried the following snippet in Google Colab:

a = torch.tensor([2.0, 3.0])
b = torch.tensor([5.0, 6.0])

c = torch.stack([a, b], dim=1)
print(c)

which prints:

tensor([[2., 5.],
        [3., 6.]])
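For context, torch.stack's dim argument only chooses where the new axis is inserted. NumPy's np.stack has the same semantics, so here is a sketch using NumPy (an assumption on my part that the question is about how dim changes the layout):

```python
import numpy as np

a = np.array([2.0, 3.0])
b = np.array([5.0, 6.0])

# axis=0 stacks the inputs as rows; axis=1 interleaves them as columns,
# which is the transposed layout shown above.
print(np.stack([a, b], axis=0))  # [[2. 3.] [5. 6.]]
print(np.stack([a, b], axis=1))  # [[2. 5.] [3. 6.]]
```

If the row layout is what you need, passing dim=0 (the default) instead of dim=1 gives it directly.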

The answer should be -3. For my problem statement the output above won't help; I need it to generate -3. Please help.


This is expected behavior. Floating-point arithmetic is unfortunately not exact: Floating-point arithmetic - Wikipedia
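Because of this, the usual remedy is to compare floating-point results against a tolerance rather than for exact equality. A sketch using the standard library (torch.allclose offers the same idea for tensors):

```python
import math

total = 0.1 + 0.2 - 0.3  # tiny nonzero residue, not exactly 0.0

# Exact comparison fails even though the result is "zero" for all
# practical purposes:
print(total == 0.0)  # False

# Tolerance-based comparison succeeds; abs_tol is needed when
# comparing against 0, since the default rel_tol scales with magnitude.
print(math.isclose(total, 0.0, abs_tol=1e-9))  # True
```

Choosing the tolerance depends on your data's scale; 1e-9 here is just an illustrative value for double precision.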