TorchScript - is it better to script the model in the same environment?

I converted a PyTorch model to TorchScript on one server (using torch.jit.trace)
and verified that the TorchScript model runs faster than the original PyTorch model there.

I copied the saved file to another server (the hardware is different, but the virtual environment is the same) and loaded it.
However, on that server the TorchScript model's inference was slower than the original PyTorch model.
Do I have to script the model on the target server?
Does scripting behave differently on different GPUs?
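For reference, this is roughly the workflow I mean (a minimal sketch with a tiny placeholder model and file name; the real model, input shapes, and paths are of course different):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()

# Trace with a representative example input on the first server
example = torch.randn(1, 16)
with torch.no_grad():
    traced = torch.jit.trace(model, example)

# Save the ScriptModule, then copy the file to the second server
traced.save("traced_model.pt")

# On the target server: load and run the traced module
loaded = torch.jit.load("traced_model.pt")
with torch.no_grad():
    out = loaded(example)
```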

I don’t know exactly what is currently saved from a scripted model, as the backend is moving quite fast at the moment. Could you run another test and check whether the speed changes if you script the model again on the target system?
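When you rerun the test, make sure to include warm-up iterations before timing: the JIT performs optimizations during the first few forward passes, so a cold benchmark can make the scripted model look slower than eager mode. A rough CPU timing sketch (placeholder model; on GPU you would additionally call torch.cuda.synchronize() before reading the clock):

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
scripted = torch.jit.script(model)  # re-script on the target system
x = torch.randn(64, 16)

def bench(m, iters=100, warmup=10):
    with torch.no_grad():
        # Warm-up: lets the JIT profile and optimize the graph first
        for _ in range(warmup):
            m(x)
        start = time.perf_counter()
        for _ in range(iters):
            m(x)
    return (time.perf_counter() - start) / iters

eager_t = bench(model)
script_t = bench(scripted)
print(f"eager: {eager_t * 1e6:.1f} us/iter, scripted: {script_t * 1e6:.1f} us/iter")
```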