Dynamic graphs on mobile

Is it possible to keep a dynamic computational graph on mobile?
I see an opportunity for RNNs: if you process arbitrary-length input, you cannot do the whole computation at once on your neural accelerator (think of a video stream, for example).

When porting an RNN to ONNX, I managed to export it feed-forward style, meaning the RNN performs 1 of N steps at a time: it takes the hidden states as inputs, runs a single step, and returns the output along with the updated hidden states. The bottleneck of this approach is that you need to transfer a lot of data between host and accelerator at every step (for instance when using a ConvLSTM on large video frames).
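For reference, here is a minimal sketch of what I mean by the feed-forward export; the cell, shapes, and file name are just illustrative:

```python
import torch
import torch.nn as nn

class LSTMStep(nn.Module):
    """Wraps an LSTMCell so one forward() call advances exactly one
    time step. The hidden states are explicit inputs and outputs, so
    the host must shuttle them to and from the accelerator every step."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)

    def forward(self, x, h, c):
        h, c = self.cell(x, (h, c))
        return h, h, c  # output, updated hidden state, updated cell state

input_size, hidden_size, batch = 64, 128, 1
step = LSTMStep(input_size, hidden_size)

# Dummy inputs for tracing: one frame plus the recurrent state.
x = torch.randn(batch, input_size)
h = torch.zeros(batch, hidden_size)
c = torch.zeros(batch, hidden_size)

torch.onnx.export(step, (x, h, c), "lstm_step.onnx",
                  input_names=["x", "h_in", "c_in"],
                  output_names=["y", "h_out", "c_out"])

# Host-side driving loop: the recurrence lives outside the exported
# graph, so h and c cross the host/accelerator boundary each iteration.
for frame in torch.randn(10, batch, input_size):
    y, h, c = step(frame, h, c)
```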

Is there a way to simply run PyTorch code on the mobile side without resorting to this feed-forward hack?