Are there any best practices for using TorchScript with PyTorch Mobile? I'm currently trying to run a conv-net with PyTorch Mobile on Android. The model is much slower than an identical TensorFlow-trained TFLite model. Perhaps this has something to do with the TorchScript tracing.
Any thoughts would be appreciated.
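For reference, here is a minimal export sketch along the lines of what I mean; `TinyConvNet` is just a placeholder standing in for the actual model:

```python
import torch
import torch.nn as nn

# Placeholder conv-net standing in for the real model
class TinyConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x))).flatten(1)
        return self.fc(x)

model = TinyConvNet().eval()

# Trace with a representative input shape; note that tracing bakes in
# any control flow that depends on input values, so torch.jit.script
# may be safer for models with data-dependent branches
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("model.pt")
```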
I tried freezing the graph using `torch.utils.mobile_optimizer.optimize_for_mobile`, which is new in PyTorch 1.6.0. It seems to improve performance a little, but not significantly.
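For anyone trying the same thing, this is roughly what I did; the `nn.Sequential` model below is a placeholder, and the exact set of fusions applied depends on the PyTorch version:

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder model; substitute the real conv-net here
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
).eval()  # optimize_for_mobile expects an eval-mode module

scripted = torch.jit.script(model)
# Applies mobile-oriented passes such as conv/bn fusion and
# weight prepacking for the CPU backend
optimized = optimize_for_mobile(scripted)
optimized.save("model_mobile.pt")
```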
Did you find anything new? We are seeing a very similar performance pattern when comparing a TorchScript model against its TFLite counterpart.
cc: @kimishpatel if you have any thoughts on this
The PyTorch Mobile homepage (Home | PyTorch) was updated recently and can be a good reference.
How do you export the model and how do you use the mobile runtime?