TorchScript - saving and loading a scripted module produces different outputs than scripting it directly

Hello,
I thought scripting a module at the same precision (fp32) would produce exactly the same output as the original eager module.
It did when I ran the scripted module directly, but when I stored the scripted module with torch.jit.save() and then loaded it back with torch.jit.load(), the output of the loaded scripted module was slightly different.
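
A minimal sketch of the comparison I'm doing (the small nn.Sequential model and the file name are just placeholders for my actual setup):

```python
import torch
import torch.nn as nn

# Placeholder model; my real module is more complex.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8)).eval()
x = torch.randn(4, 16)

with torch.no_grad():
    eager_out = model(x)

    # Scripting and running directly: outputs match the eager module exactly.
    scripted = torch.jit.script(model)
    scripted_out = scripted(x)
    print((eager_out - scripted_out).abs().max())

    # Saving and reloading the scripted module: outputs differ slightly.
    torch.jit.save(scripted, "scripted.pt")
    loaded = torch.jit.load("scripted.pt")
    loaded_out = loaded(x)
    print((eager_out - loaded_out).abs().max())
```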

Do torch.jit.save() and torch.jit.load() change some operations?

Scripting a model tries to optimize it, e.g. by fusing different operations, which can create the expected small numerical mismatches caused by the limited floating point precision.
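
In other words, bit-exact equality is not guaranteed after save/load, so a tolerance-based comparison is the usual check. A small sketch (the function name and the tolerance values are just illustrative choices, not a fixed API):

```python
import torch

def outputs_match(eager_out: torch.Tensor, loaded_out: torch.Tensor) -> bool:
    # Allow for the small numerical differences expected from fused fp32 kernels.
    return torch.allclose(eager_out, loaded_out, rtol=1e-5, atol=1e-6)
```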