Inference time for .ptl is higher than for .pt

There are two scenarios:

1)
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

script_cell = torch.jit.script(self.model)
script_cell.save("model1.pt")

2)
traced_cell = torch.jit.trace(self.model, [torch.rand(1, 3, 960, 640)])
traced_script_module = optimize_for_mobile(traced_cell)
traced_script_module._save_for_lite_interpreter("model2.ptl")

Here I am observing that inference with model2.ptl is slower than with model1.pt. Could you please help me here? I believe it should be faster, because model2.ptl is saved with _save_for_lite_interpreter.
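
One thing that may skew this comparison: model1.pt is produced by torch.jit.script, while model2.ptl is produced by torch.jit.trace plus optimize_for_mobile, so more than just the file format differs between the two artifacts. As a rough sketch (assuming self.model accepts a single 1x3x960x640 tensor), both files could instead be exported from the same optimized module, so that only the interpreter format differs:

# Sketch: export one optimized module in both formats, so the .pt vs. .ptl
# comparison does not also mix scripting vs. tracing differences.
traced = torch.jit.trace(self.model, [torch.rand(1, 3, 960, 640)])
optimized = optimize_for_mobile(traced)
optimized.save("model1.pt")                          # regular TorchScript archive
optimized._save_for_lite_interpreter("model2.ptl")   # lite-interpreter archive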

Are you comparing both models on a smartphone, and if so, how large is the difference in their execution time?

@ptrblck there is a difference of 2.5 seconds. Yes, I am comparing the times on a mobile phone - Mi Note 7 Pro, 6 GB RAM.
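
For a rough host-side sanity check before measuring on the phone, a warm-up-plus-average loop along the following lines can be used. This is only a sketch: torch.jit.mobile._load_for_lite_interpreter is an internal helper for running .ptl files from Python, and the authoritative numbers are still the ones measured on the device.

import time
import torch
from torch.jit.mobile import _load_for_lite_interpreter  # internal API, sketch only

def benchmark(module, inp, warmup=5, iters=20):
    # Run warm-up iterations so one-time costs (lazy init, allocations) are
    # excluded, then report the average latency over the timed iterations.
    with torch.no_grad():
        for _ in range(warmup):
            module(inp)
        start = time.perf_counter()
        for _ in range(iters):
            module(inp)
    return (time.perf_counter() - start) / iters

inp = torch.rand(1, 3, 960, 640)
full_jit = torch.jit.load("model1.pt")
lite_module = _load_for_lite_interpreter("model2.ptl")
print("model1.pt :", benchmark(full_jit, inp), "s/iter")
print("model2.ptl:", benchmark(lite_module, inp), "s/iter")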