Floating point results in Android, TensorFlow and PyTorch are different

Hi, I am trying to compare floating point results on Android, TensorFlow and PyTorch. What I have observed is that I get the same result for TensorFlow and Android, but a different one for PyTorch; it looks as if Android and TensorFlow are rounding down. Please see the following code:


TensorFlow:

import numpy as np
import tensorflow as tf

a = tf.convert_to_tensor(np.array([0.9764764, 0.79078835, 0.93181187]), dtype=tf.float32)

session = tf.Session()
result = session.run(a * a * a * a)
print(result)



PyTorch:

import numpy as np
import torch as th

a = th.from_numpy(np.array([0.9764764, 0.79078835, 0.93181187])).type(th.FloatTensor)

result = a * a * a * a
print(result)



Android (Kotlin):

val a = floatArrayOf(0.9764764f, 0.79078835f, 0.93181187f)
for (index in 0 until a.size) {
    val res = a[index] * a[index] * a[index] * a[index]
    println(res)
}


The result is as follows:

Android: [0.9091739, 0.3910579, 0.7538986]
TensorFlow: [0.9091739, 0.3910579, 0.7538986]
PyTorch: [0.90917391, 0.39105791, 0.75389862]

You can see that the PyTorch values are different. I know the effect is minimal in this example, but when we are training for 1000 rounds with different batches and epochs, this difference can accumulate and produce undesirable results. Can anyone point out how we can get the same numbers on all three platforms?


The difference comes from the printing precision, not from the arithmetic: the underlying float32 values are identical on all three platforms.
If you call torch.set_printoptions(precision=7), you'll see the same values. Alternatively, increase the print precision in TF and in your Android application to compare more decimals.
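A minimal sketch of the printing effect, using only NumPy (the values are the ones from the post; the exact digits printed by each framework depend on its default formatter, so this only illustrates how one float32 result looks at 7 vs 8 decimals):

```python
import numpy as np

# Same float32 arithmetic as in the snippets above.
a = np.array([0.9764764, 0.79078835, 0.93181187], dtype=np.float32)
result = a * a * a * a

# Printed with 7 decimals -- the form Android/TensorFlow showed.
print(np.array2string(result, precision=7, floatmode='fixed'))

# Printed with 8 decimals -- one extra digit, as in the PyTorch output.
print(np.array2string(result, precision=8, floatmode='fixed'))
```

Both lines are produced from the same array; only the formatter changes, which is why no fix to the computation itself is needed.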