I tried to export my torch model to a .pt file (while the model is located on the GPU), using this code:
traced_script_module = torch.jit.trace(self.model, gray_img_tens, strict=False)
traced_script_module.save("model.pt")
But when I run this traced model on the same input, using:
tracedPT_outputs = traced_script_module.forward(gray_img_tens)
I receive a different output. The output is usually close to the original, but I can't rely on that: the small mispredictions accumulate over the course of the code, and the final result is far from what I expect.
- The problem occurs only on the GPU!
When I do the same on the CPU, the output I receive is identical to the original, but in that case the latency is very high, so I can't rely on that either.
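To illustrate what I mean by the errors accumulating, here is a toy sketch (plain Python, not my actual model) where a tiny per-step discrepancy, such as a GPU kernel rounding difference between the traced and the original model, grows much larger after many iterations:

```python
def run(steps, eps):
    # Repeatedly apply a simple update; eps models a tiny per-step
    # numerical discrepancy between two implementations of the same op.
    x = 1.0
    for _ in range(steps):
        x = x * 1.001 + eps
    return x

baseline = run(1000, 0.0)     # "original model" result
perturbed = run(1000, 1e-7)   # same computation with a 1e-7 per-step error

# The per-step error of 1e-7 has grown by roughly three orders of magnitude.
print(abs(perturbed - baseline))
```

This is why an output that is merely "close" on the GPU is not good enough for my use case.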