I’m using U-2-Net on mobile.
I converted the model to TorchScript and used it successfully on Android, where inference takes about 1 second.
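For context, the conversion is roughly the following (a sketch; the `U2NET` class, the weights file name, and the 320×320 input size are assumptions based on the reference U-2-Net repo):

```python
import torch
from model import U2NET  # assumption: model definition from the U-2-Net repo

# Load the pretrained weights and switch to eval mode before tracing
net = U2NET(3, 1)
net.load_state_dict(torch.load("u2net.pth", map_location="cpu"))
net.eval()

# Trace with a fixed-size example input (320x320 is what the
# reference preprocessing uses) and save for mobile
example = torch.rand(1, 3, 320, 320)
traced = torch.jit.trace(net, example)
traced.save("u2net_torchscript.pt")
```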
With the same model on iOS, however, inference takes about 30 seconds, even though I disable autograd and run the model in inference mode.
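In Python terms, the iOS setup is equivalent to the sketch below (the actual app goes through LibTorch's native API, where disabling autograd is done with a `torch::autograd::AutoGradMode guard(false)`; the file name here is a placeholder):

```python
import torch

# Load the saved TorchScript module and run a single forward pass
module = torch.jit.load("u2net_torchscript.pt")
module.eval()

# no_grad() mirrors the AutoGradMode(false) guard on the mobile side
with torch.no_grad():
    output = module(torch.rand(1, 3, 320, 320))
```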
Any idea what could cause such a big performance difference using the same model?
(I use the 1.6.0 nightly build on Android and the 1.5.0 production build on iOS.)