Different inference time for the same model, trained on different datasets

I have two saved instances (`.pkl`) of the same model, each trained on a different dataset. Inference time for one, however, is considerably (~15×) higher than for the other. Any suggestions as to how this could be explained?

I thought I had found the issue, but still no luck.

In case you are using the CPU, try `torch.set_flush_denormal(True)` and check whether it changes anything.
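For context on why this can matter: weights that decay very close to zero during training can end up as denormal (subnormal) floats, which many CPUs process far more slowly than normal floats, so two checkpoints of the same architecture can have very different CPU inference times. A minimal plain-Python sketch of what "subnormal" means (the `is_subnormal` helper is illustrative, not part of any library):

```python
import sys

def is_subnormal(x: float) -> bool:
    """True if x is nonzero but smaller in magnitude than the
    smallest normal double (~2.225e-308)."""
    return x != 0.0 and abs(x) < sys.float_info.min

# Values in this range trigger slow paths on many CPUs;
# torch.set_flush_denormal(True) rounds them to zero instead.
print(is_subnormal(1e-310))  # subnormal: below the normal range
print(is_subnormal(1e-300))  # normal double
```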

Thanks for the suggestion. I am indeed running on GPU. Here is a bit more detail:
I tested the model on two different machines with the same GPU and environment. On one machine it works as expected, and both saved models have the same runtime. On the other machine, however, one is much slower than the other!
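One thing worth ruling out when comparing GPU runtimes: CUDA kernel launches are asynchronous, so a timer read without synchronization can measure launch overhead rather than actual compute. A generic timing sketch (the `timed` helper is hypothetical; with PyTorch on GPU you would pass `torch.cuda.synchronize` as `sync`):

```python
import time

def timed(fn, *args, warmup=3, iters=10, sync=lambda: None):
    """Average wall-clock time of fn(*args) over `iters` runs.

    `sync` is called before each clock read; on CUDA, pass
    torch.cuda.synchronize so all queued kernels finish before
    the timestamp is taken.
    """
    for _ in range(warmup):      # warm-up runs: JIT, caches, cuDNN autotune
        fn(*args)
    sync()
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    sync()                       # wait for pending GPU work before stopping
    return (time.perf_counter() - start) / iters
```

If both checkpoints time the same under this kind of synchronized measurement, the difference seen earlier may have come from unsynchronized timing or one-off startup costs rather than the models themselves.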