PyTorch Mobile speed_benchmark_torch crashes on mobile optimized model

I'm trying to benchmark MobileNet v2 but hit a crash. Here's what I did:
Compiled the benchmark binary from commit 4ed7f36ed with:

BUILD_PYTORCH_MOBILE=1 ANDROID_ABI=arm64-v8a ./scripts/build_android.sh -DBUILD_BINARY=ON

Exported MobileNet v2 with:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
model = torch.hub.load('pytorch/vision:v0.6.0', 'mobilenet_v2', pretrained=True)
model.eval()

scriptedm = torch.jit.script(model)
torch.jit.save(scriptedm, "mobilenet_v2_scripted.pt")
optedm = optimize_for_mobile(scriptedm)
torch.jit.save(optedm, "mobilenet_v2_opted.pt")

(python3.7) user1$ adb shell "/data/local/tmp/speed_benchmark_torch --model=/data/local/tmp/mobilenet_v2_opted.pt --input_dims=1,3,224,224 --input_type=float"
Starting benchmark.
Running warmup runs.
Main runs.
Segmentation fault

mobilenet_v2_scripted.pt, however, works fine. The device is a Pixel 3a. Not sure what's wrong.

The same problem occurs whenever I use float inputs; the benchmark seems to be more robust on quantized networks.
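For reference, a minimal sketch of the quantized export path I'm comparing against. Assumptions: this uses a tiny stand-in module (to avoid downloading pretrained weights) and dynamic quantization as one possible quantization route; the module and file name here are hypothetical, not the exact model above.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical stand-in module; the real comparison used a quantized network.
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
model.eval()

# Dynamic quantization converts Linear weights to int8 at load time.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Script, apply the mobile optimization passes, and save for the benchmark.
scripted_q = torch.jit.script(qmodel)
opted_q = optimize_for_mobile(scripted_q)
torch.jit.save(opted_q, "tiny_quant_opted.pt")
```

The saved .pt file can then be pushed with adb and passed to speed_benchmark_torch via --model, the same way as the float models above.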