A trained model gives slightly different results when evaluated on a K40 versus a Titan X, on the same dataset.
Is this expected behavior? The difference in final accuracy is < 0.05%.
This also seems to be reported here: "Different results get on different machine".
One cause I can think of is that cuDNN might select different convolution algorithms on different GPU models.
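For what it's worth, the root issue behind such algorithm-dependent differences is that floating-point addition is not associative, so two algorithms that perform the same reductions in a different order can round differently. A minimal illustration in plain Python (no GPU or cuDNN involved):

```python
# Float addition is not associative: changing the grouping of the
# same three numbers changes the rounding error in the result.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False
print(abs(a - b))       # tiny difference, on the order of machine epsilon
```

Accumulated over millions of operations in a forward pass, such per-operation rounding differences can plausibly account for a < 0.05% shift in final accuracy.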