There are two scenarios:
1)
script_cell = torch.jit.script(self.model)  # torch.jit.script takes no example inputs
script_cell.save("model1.pt")
2)
traced_cell = torch.jit.trace(self.model, [torch.rand(1, 3, 960, 640)])
traced_script_module = optimize_for_mobile(traced_cell)
traced_script_module._save_for_lite_interpreter("model2.ptl")
Here I am observing that inference with model2.ptl is slower than with model1.pt. Could you please help me understand why? I expected model2.ptl to be faster, since it was passed through optimize_for_mobile and saved with _save_for_lite_interpreter.
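For reference, here is a minimal, self-contained sketch of how I reproduce the comparison. A toy `nn.Sequential` stands in for `self.model` (an assumption, since the real model is not shown), and I am assuming `torch.jit.mobile._load_for_lite_interpreter` (an internal API) to run the .ptl file on desktop:

```python
import time
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import _load_for_lite_interpreter  # internal API; may change

# Hypothetical stand-in for self.model; any nn.Module works here.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
example = torch.rand(1, 3, 960, 640)

# Scenario 1: scripting (no example inputs) and saving for the full JIT runtime.
scripted = torch.jit.script(model)
scripted.save("model1.pt")

# Scenario 2: tracing, mobile optimization, lite-interpreter format.
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("model2.ptl")

def bench(m, x, iters=5):
    """Average wall-clock seconds per forward pass, after one warm-up run."""
    with torch.no_grad():
        m(x)  # warm-up so JIT fusion/allocation costs are excluded
        start = time.perf_counter()
        for _ in range(iters):
            m(x)
    return (time.perf_counter() - start) / iters

m1 = torch.jit.load("model1.pt")
m2 = _load_for_lite_interpreter("model2.ptl")
print("model1.pt :", bench(m1, example))
print("model2.ptl:", bench(m2, example))
```

Note that this measures on desktop; optimize_for_mobile targets mobile backends, so desktop numbers may not reflect on-device behavior.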