PyTorch Mobile speed_benchmark_torch crashes on mobile optimized model

I'm trying to benchmark MobileNet v2 but hit a crash. This is what I did:
Compiled the benchmark binary at commit 4ed7f36ed using:


Exported MobileNet v2 using:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()  # optimize_for_mobile expects an eval-mode module

scriptedm = torch.jit.script(model)
optedm = optimize_for_mobile(scriptedm)
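I then save the optimized module before pushing it to the device. Roughly like this, continuing from the snippet above (the filename here is just a placeholder for the path I actually pushed to /data/local/tmp, and the host-side forward pass is a sanity check I added while debugging):

```python
import torch

# Sanity check on the host: one forward pass with the same input
# shape passed to the benchmark via --input_dims below.
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    optedm(x)

# Save the optimized TorchScript module (placeholder filename).
torch.jit.save(optedm, "mobilenet_v2_opt.pt")
```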

(python3.7) user1$ adb shell "/data/local/tmp/speed_benchmark_torch --model=/data/local/tmp/ --input_dims='1,3,224,224' --input_type='float'"
Starting benchmark.
Running warmup runs.
Main runs.
Segmentation fault. The plain scripted model (without optimize_for_mobile), however, works fine. The device is a Pixel 3a. Not sure what's wrong.

The same problem occurs as long as I use float inputs; the benchmark seems to be more robust on quantized networks.