FP16 support in PyTorch Mobile

Is there a way to run a scripted half-precision model in PyTorch Mobile? If not, will this be supported in the future? And is there a way to at least convert a saved FP16 scripted model to FP32 on the mobile device, inside PyTorch Mobile?

I’m not sure whether mobile device architectures would benefit from float16 dtypes (x86 CPU architectures do not see a benefit from it, which is also why float16 ops are mostly not supported on the CPU in PyTorch).

I don’t know about future plans, or whether any mobile devices are planning to support float16.

I would assume transformations are still possible on mobile, i.e. would model.to(dtype) work?
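
For reference, this is roughly what that dtype conversion looks like with the full (non-mobile) runtime; whether the mobile runtime exposes an equivalent is exactly the question. This is just a sketch, and the file names are placeholders:

```python
import torch

# Load a scripted fp16 model with the full runtime and cast its
# parameters/buffers to fp32 before exporting for the lite interpreter.
model = torch.jit.load("model_fp16.pt", map_location="cpu")
model = model.to(torch.float32)   # equivalently: model.float()
model._save_for_lite_interpreter("model_fp32.ptl")
```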

While ARM CPUs do have FP16 support, the mobile runtime does not support directly taking advantage of the FP16 features of mobile CPUs.

Saving in FP16 and converting to FP32 on model load is not quite supported natively. I have not explored this, but it might be possible to do it entirely in TorchScript without the PyTorch runtime needing to support it natively.
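
One way that TorchScript-only route could look (just a sketch, not a supported recipe; FP16Linear below is a made-up example): keep the weights as FP16 buffers so the serialized file stays small, and upcast them to FP32 inside forward, so the runtime only ever executes FP32 kernels:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FP16Linear(nn.Module):
    """Hypothetical layer: weights are stored as fp16 buffers (halving the
    on-disk size), but forward upcasts them so all compute runs in fp32."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Buffers are serialized with the scripted model, so the saved
        # file holds fp16 data.
        self.register_buffer("weight_fp16",
                             torch.randn(out_features, in_features).half())
        self.register_buffer("bias_fp16", torch.zeros(out_features).half())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Upcast at runtime; the stored copy stays fp16.
        w = self.weight_fp16.float()
        b = self.bias_fp16.float()
        return F.linear(x, w, b)

scripted = torch.jit.script(FP16Linear(128, 64))
# scripted._save_for_lite_interpreter("fp16_linear.ptl")  # mobile format
print(scripted(torch.randn(4, 128)).dtype)  # torch.float32
```

The trade-off is that this only saves storage and download size: the upcast happens on every forward call, and all compute still runs in FP32.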

model.to likely won't work, since on the server the full PyTorch runtime returns a torch::jit::Module from torch::jit::load, whereas on mobile torch::jit::_load_for_mobile returns a torch::jit::mobile::Module.