I have a complicated model I’m trying to run on mobile. One component is structurally simple but computationally expensive (a deep convolutional structure); the rest is structurally complicated enough that there’s currently no way to run it with the ops supported on the Metal backend.
Because of our deployment model, it would be substantially preferable to ship these as parts of the same TorchScript model object. I know I can use a scripted model as an internal component of a larger model that I then script, but when I freeze that outer model, presumably the entire thing will be configured to use a single device.

Has anyone figured out how to run different components on different devices for mobile without separating them into different TorchScript models?
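For reference, the nesting pattern I mean is sketched below. The module names and shapes are hypothetical placeholders, not my actual model: `ConvBackbone` stands in for the conv-heavy component and `ComplexHead` for the part with unsupported ops. Everything here runs on one device; the question is whether the two submodules can be pinned to different devices after freezing.

```python
import torch
import torch.nn as nn


class ConvBackbone(nn.Module):
    # Stand-in for the structurally simple, compute-heavy conv component
    # (the part I'd like to run on the Metal/GPU backend).
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))


class ComplexHead(nn.Module):
    # Stand-in for the structurally complicated component whose ops
    # aren't supported on Metal (so it must stay on CPU).
    def forward(self, x):
        return x.mean(dim=(2, 3))


class Combined(nn.Module):
    def __init__(self):
        super().__init__()
        # An already-scripted model embedded as a submodule of the
        # outer model, which is then scripted as a whole.
        self.backbone = torch.jit.script(ConvBackbone())
        self.head = ComplexHead()

    def forward(self, x):
        return self.head(self.backbone(x))


combined = torch.jit.script(Combined())
# Freezing the outer model inlines the submodules; at this point the
# whole graph appears to be tied to a single device.
frozen = torch.jit.freeze(combined.eval())
out = frozen(torch.randn(1, 3, 16, 16))
print(out.shape)  # torch.Size([1, 8])
```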