After training a model, I decided to serialize it with torch.jit.trace, then optimize_for_mobile, and finally _save_for_lite_interpreter.
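For reference, my save pipeline looks roughly like the sketch below; `model` and `example_input` are placeholders for my actual trained module and a sample input tensor:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# `model` and `example_input` are placeholders for the trained nn.Module
# and a sample tensor with the shape the model expects.
model.eval()

# Trace the model into a TorchScript module using the example input.
traced = torch.jit.trace(model, example_input)

# Apply the mobile-oriented graph optimizations.
optimized = optimize_for_mobile(traced)

# Save in the lite-interpreter format (.ptl).
optimized._save_for_lite_interpreter("model.ptl")
```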
However, my end goal is to use this model in another Python program (loading it with torch.jit.load()) running on a much weaker PC, NOT on a mobile device or in a C++ program.
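On the weaker PC, I expect the load side to look something like this (torch.jit.load is the entry point for TorchScript archives, and I believe it can also read the lite-interpreter .ptl format):

```python
import torch

# Load the saved archive with the full JIT runtime; plain torch.load()
# errors on TorchScript archives and suggests torch.jit.load() instead.
loaded = torch.jit.load("model.ptl")
loaded.eval()

# Run inference; `example_input` is a placeholder for a real input tensor.
with torch.no_grad():
    output = loaded(example_input)
```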
Is there any point in this serialization, mobile optimization, and lite-interpreter save when the target is just another Python program? All the examples I’ve seen online target Android/iOS or C++ programs.