In a small code snippet in a project, I have a for loop that iterates through three tensor values, A = torch.tensor([-3., 6., -3.]).

```
det = torch.tensor(0)
for i in range(3):
    det += A[i]
return det
```

I'm getting the answer det = 2.3842e-07. All the values in A and det are of type `torch.float64`.
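As an aside, a classic illustration of this kind of accumulation error (using plain Python floats rather than the tensors above) is summing 0.1 repeatedly; because 0.1 has no exact binary representation, the running total drifts slightly from the exact value:

```python
# Summing 0.1 ten times with binary floating point does not give exactly 1.0,
# since 0.1 cannot be represented exactly in base 2.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # very close to 1.0, but not exactly 1.0
print(total == 1.0)  # False
```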

albanD (Alban D) #2
On which platform are you testing? I cannot reproduce with:

```
import torch

def foo(A, acc_dtype):
    det = torch.tensor(0, dtype=acc_dtype)
    for i in range(3):
        det += A[i]
    print(det)

A = torch.tensor([-3., 6., -3.])
foo(A, acc_dtype=torch.float)
foo(A, acc_dtype=torch.double)

A = torch.tensor([-3., 6., -3.], dtype=torch.double)
foo(A, acc_dtype=torch.float)
foo(A, acc_dtype=torch.double)
```

ptrblck #3
Same here. I get a `0` in `1.11.0.dev20211101+cu113`.

I tried the following snippet in Google Colab.

Code:

```
import torch

a = torch.tensor([2.0, 3.0])
b = torch.tensor([5.0, 6.0])
c = torch.stack([a, b], dim=1)
print(c)
torch.det(c)
```

Output:

```
tensor([[2., 5.],
        [3., 6.]])
tensor(-2.9999995232)
```

The answer should be -3. For my problem statement this output won't help; I need it to produce exactly -3. Please help.

albanD (Alban D) #5
Hi,

This is expected behavior. Floating-point arithmetic is unfortunately not exact: Floating-point arithmetic - Wikipedia
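If the residual error matters downstream, one common mitigation (a sketch, assuming `float64` precision is acceptable for your use case) is to run the computation in double precision, which shrinks the rounding error by many orders of magnitude, though it still may not be bit-exact:

```python
import torch

# Same determinant example as above, but in float64 instead of the
# default float32; the rounding error becomes much smaller.
a = torch.tensor([2.0, 3.0], dtype=torch.float64)
b = torch.tensor([5.0, 6.0], dtype=torch.float64)
c = torch.stack([a, b], dim=1)

det = torch.det(c)
print(det)  # far closer to -3 than the float32 result
```

Even in double precision, prefer tolerance-based comparisons (e.g. `torch.isclose` or `math.isclose`) over `==` when checking results like this.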