Floating point addition

I found some strange behaviour while adding two tensors. Here the “mat” tensor is the output of a mathematical equation and has no digits after the decimal point (I checked by placing torch.set_printoptions(precision=10) before the print). If I then run the addition below, the output is “98310.1016” instead of “98310.1”. It's very simple, but I don't know where the “.1016” in the answer comes from instead of “.1”.

    mat tensor([[98310., 98310., 98310.],
                [98310., 98310., 98310.],
                [98310., 98310., 98310.]], device='cuda:0')


    e1 = (torch.eye(3, 3) * 0.1).to(0)   # 3x3 identity scaled by 0.1, on GPU 0
    print("**e1 ", e1)
    mat = torch.add(mat, e1)             # adds 0.1 to each diagonal entry

The small absolute error is most likely created by the limited floating point precision and a different order of operations between your reference and the current code.
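To make this concrete, here is a minimal standalone sketch (float32 on the CPU; the device doesn't matter for this effect) showing that 98310.1 is not exactly representable in float32. The nearest representable value is 98310.1015625, which is what prints as “98310.1016” at the default precision of 4:

    import torch
    import numpy as np

    torch.set_printoptions(precision=10)

    # 98310.0 + 0.1 rounds to the nearest representable float32 value.
    x = torch.tensor(98310.0) + torch.tensor(0.1)
    print(x)                                # tensor(98310.1015625000)

    # The spacing between consecutive float32 values near 98310 is
    # 2**-7 = 0.0078125, so every result is rounded onto that grid.
    print(np.spacing(np.float32(98310.0)))  # 0.0078125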

Agreed, but if “mat” has some floating point value, why isn't it shown even after using “torch.set_printoptions(precision=10)” before printing the “mat” variable? I forgot to mention that “mat” is of size [1, 15876, 3, 3].
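As a sanity check (a small standalone sketch; this assumes PyTorch's print formatter works the way I think it does, which isn't stated in the thread): a float tensor whose elements are all integer-valued prints without fractional digits regardless of the precision setting, so a clean “98310.” really does mean the stored value is exactly 98310.0:

    import torch
    torch.set_printoptions(precision=10)

    # All elements integer-valued: printed with no fractional digits,
    # because the stored float32 values are exact.
    print(torch.tensor([98310.0]))   # tensor([98310.])

    # A non-integral element shows the full rounded float32 value.
    print(torch.tensor([98310.1]))   # tensor([98310.1015625000])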

I even tried to subtract “0.0016” from the final answer using the code below, but the output remains the same as the input. Both tensors' datatypes are FloatTensor.

    e2 = (torch.eye(3, 3) * 0.0016).to(0)
    mat = torch.sub(mat, e2)
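This no-op can be reproduced in isolation (a minimal sketch using the diagonal value printed above): 0.0016 is less than half the float32 spacing at this magnitude (0.0078125 / 2 = 0.00390625), so the exact difference rounds straight back to the original value:

    import torch
    torch.set_printoptions(precision=10)

    x = torch.tensor(98310.1015625)   # the diagonal value after the addition
    # 0.0016 is below half an ulp at this magnitude, so the correctly
    # rounded float32 result is the original value again.
    print(x - torch.tensor(0.0016))   # tensor(98310.1015625000)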

I changed the code as below; then it subtracts the integer part but not the fractional part, so the output is “98309.1016” instead of “98309.1”.

    e2 = (torch.eye(3, 3) * 1.0016).to(0)
    mat = torch.sub(mat, e2)
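The same grid argument explains this last case (again a minimal standalone sketch): the exact difference is about 98309.09996, and the nearest float32 value on the 0.0078125 grid is 98309.1015625, so the fractional part snaps back to “.1016”:

    import torch
    torch.set_printoptions(precision=10)

    x = torch.tensor(98310.1015625)
    # The integer part of the subtraction survives, but the result
    # ~98309.09996 is rounded to the nearest float32 value on the grid,
    # which is 98309.1015625 -- printed as "98309.1016" by default.
    print(x - torch.tensor(1.0016))   # tensor(98309.1015625000)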