I found strange behaviour while adding two tensors. Here the “mat” tensor is the output of a mathematical equation, and its values show 0 after the decimal point (I checked by placing torch.set_printoptions(precision=10) before the print). After that, if I do the addition in the code below, it gives me the output “98310.1016” instead of “98310.1”. It’s a very simple operation, but I don’t know where the “.1016” in the answer comes from instead of “.1”.
The small absolute error is most likely caused by the limited floating point precision and a different order of operations between your reference and the current code.
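Since the original snippet isn’t shown, here is a minimal pure-Python sketch of the effect. It uses struct to emulate float32 (the dtype behind a torch FloatTensor): 98310.1 has no exact float32 representation, and the nearest representable value is 98310.1015625, which prints as 98310.1016 at the default 4-digit precision.

```python
import struct

def to_float32(x: float) -> float:
    # Round a Python float (float64) to the nearest float32,
    # the dtype of a default torch.FloatTensor.
    return struct.unpack('f', struct.pack('f', x))[0]

# 98310.1 is rounded to the nearest float32 on storage.
print(to_float32(98310.1))  # 98310.1015625
```

So the “.1016” is not produced by the addition itself; it is simply the closest value float32 can hold to 98310.1.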
I agree, but if “mat” has some floating point value, why does it not show up even after using “torch.set_printoptions(precision=10)” before printing the “mat” variable? I also forgot to mention that “mat” has size [1, 15876, 3, 3].
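One way to see why “.1” can never appear in the result, regardless of print precision: every float32 value near 98310 lies on a fixed grid. A small sketch of that spacing argument (the magnitude 98310 is taken from the example above):

```python
import math

# Gap between consecutive float32 values near 98310:
# the value sits in the binade [2**16, 2**17), and float32
# has a 23-bit mantissa, so the spacing is 2**(16 - 23).
spacing = 2.0 ** (math.floor(math.log2(98310.0)) - 23)
print(spacing)  # 0.0078125

# Every representable float32 near 98310 is 98310 + k/128 for
# integer k; the closest one to 98310.1 is k = 13.
print(98310 + 13 / 128)  # 98310.1015625
```

If the printed entries of “mat” end in exact zeros after the decimal point, those whole numbers are exactly representable, so precision=10 correctly shows nothing after them; the rounding only appears once a non-representable fraction such as .1 enters the computation.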
I even tried to subtract “0.0016” from the final answer using the code below, but the output remains the same as the input. Both tensors are of type FloatTensor.
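That is expected: 0.0016 is smaller than the spacing between adjacent float32 values at that magnitude (1/128 ≈ 0.0078), so the subtraction rounds straight back to the original value. A hedged sketch, again emulating float32 with struct since the actual tensor code isn’t shown:

```python
import struct

def to_float32(x: float) -> float:
    # Round a Python float to the nearest float32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

a = to_float32(98310.1016)       # stored as 98310.1015625
b = to_float32(a - 0.0016)       # result rounds back to the same grid point
print(a, b, a == b)              # 98310.1015625 98310.1015625 True
```

To subtract a value that small and keep the difference, the tensor would first have to be cast to float64 (e.g. .double() in PyTorch).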