Performance drop between two devices


I am loading a ResNet model and running images through it. I am using forward hooks to extract the internal activations from the model and then calculating a score from those. All of this works unsupervised, without any training, and quite well. Now I am trying to execute the same code locally on my laptop and the performance drops significantly. I copied the entire conda environment from the server, serialized the model to disk, and loaded it locally to make sure I am not using different Python versions or model weights. I also triple-checked that I am using the same data locally. Now the only thing that is still different (imho) is the hardware the model is executed on. Can this lead to different activations? Is this possible?

Best, Bernhard
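For context, the hook setup described in the question can be sketched roughly like this (a minimal, self-contained stand-in with a tiny toy network instead of the actual ResNet; the layer choices and names are illustrative, not the poster's code):

```python
import torch
import torch.nn as nn

# Toy network standing in for the ResNet from the question
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 4),
)
model.eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # store a detached copy of the layer's output
        activations[name] = output.detach()
    return hook

# register forward hooks on the layers of interest (indices are illustrative)
handles = [
    model[0].register_forward_hook(make_hook("conv")),
    model[4].register_forward_hook(make_hook("fc")),
]

with torch.no_grad():
    _ = model(torch.randn(2, 3, 16, 16))

# remove the hooks so they don't accumulate across runs
for h in handles:
    h.remove()

print(sorted(activations))  # ['conv', 'fc']
```

The captured tensors in `activations` can then be fed into whatever scoring function is used downstream.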

I assume you are referring to the “model performance”, i.e. a model metric, not the speed of the model.
In that case, how large are the differences? Different hardware can yield small absolute errors due to limited floating-point precision (e.g. a different order of operations in parallel reductions), but it should not introduce larger differences.
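A quick way to quantify this is to save the activations for the same input on both machines and compare them numerically. The snippet below is a hypothetical sketch (the tensors are random stand-ins for activations loaded from two machines; the tolerances are typical float32 magnitudes, not universal thresholds):

```python
import torch

# Stand-ins for activations captured on the server vs. locally
server_act = torch.randn(10, 64)
local_act = server_act + 1e-6 * torch.randn(10, 64)  # simulated hardware noise

max_abs = (server_act - local_act).abs().max().item()
print(f"max abs difference: {max_abs:.2e}")

# Differences around 1e-6 to 1e-5 are plausible float32 noise between devices;
# anything orders of magnitude larger points at a code, weights, or data mismatch.
print(torch.allclose(server_act, local_act, atol=1e-5, rtol=1e-4))
```

If `allclose` fails with a tolerance like this, the gap is unlikely to be explained by hardware alone.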