Error when running any model converted for Vulkan on Android C++ libtorch

I built the 1.8.0 prototype Android libtorch from source with the Vulkan backend enabled, and it runs all of my TorchScript models fine. However, if I run any model that has been converted for Vulkan, with its input moved to the GPU via tensor.vulkan(), I get the following error during inference. I have even tried the mobilenet2-vulkan.pt model mentioned in the documentation, but I get the same error. Note that at::is_vulkan_available() returns true.

    return F.conv2d(input, weight, self.bias, self.stride,
           ~~~~~~~~ <--- HERE
                    self.padding, self.dilation, self.groups)
    RuntimeError: expected scalar type Float but found UNKNOWN_SCALAR
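
For context, my conversion script follows the Vulkan prototype recipe, roughly like the sketch below (torchvision's MobileNet V2 and the file name are just placeholders; the relevant call is optimize_for_mobile with backend="vulkan"):

    # Desktop-side conversion sketch; assumes the PyTorch doing the conversion
    # was itself built with USE_VULKAN=ON.
    import torch
    import torchvision
    from torch.utils.mobile_optimizer import optimize_for_mobile

    model = torchvision.models.mobilenet_v2(pretrained=True).eval()
    scripted = torch.jit.script(model)

    # Rewrite the graph for the Vulkan backend (prepacked conv weights, etc.)
    vulkan_model = optimize_for_mobile(scripted, backend="vulkan")
    vulkan_model.save("mobilenet2-vulkan.pt")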

There is a similar issue on GitHub right now: Android API to run model on Vulkan backend · Issue #57893 · pytorch/pytorch · GitHub

In that thread I linked a MobileNet V2 model that you can try. If that model runs fine, your own conversion step is the likely culprit, and I would suggest re-building PyTorch with USE_VULKAN=ON, as mentioned in the thread, so that the Vulkan pass in optimize_for_mobile is actually available when you convert the model.
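
One rough way to check whether a given .pt file was actually rewritten for Vulkan (rather than left as a plain CPU model) is to dump the operator names it references; a small sketch, assuming the PyTorch build you run it in has the Vulkan op registrations available:

    import torch

    # Load the converted TorchScript module and list the operators it calls.
    model = torch.jit.load("mobilenet2-vulkan.pt", map_location="cpu")
    for op in sorted(torch.jit.export_opnames(model)):
        print(op)

    # A Vulkan-converted model should reference Vulkan-specific prepacked ops
    # (e.g. from a vulkan_prepack:: namespace) rather than plain aten::conv2d;
    # if only the standard aten ops show up, the Vulkan rewrite never happened.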