To me, the error mentioning the Backend sounds like your model is on the GPU but needs to be on the CPU.
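As a minimal sketch of what I mean (the model here is just a placeholder; the important part is the `.to("cpu")` before scripting and saving for mobile):

```python
import torch

# Placeholder model; substitute your own.
model = torch.nn.Linear(4, 2)

# Move the model to CPU and switch to eval mode before exporting,
# since the mobile runtime runs on CPU.
model = model.to("cpu").eval()

# Script and save; this is the file you would ship to the mobile app.
scripted = torch.jit.script(model)
scripted.save("model_cpu.pt")
```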
I’m not in a position to make official pronouncements, but the bulk of PyTorch Mobile is “just libtorch”, and that is reasonably stable. There may be bugs, and I would expect mobile to get better over time (maybe the API can be made nicer, and there certainly seems to be some room for speedups), but I would expect that what runs today will continue to run well. Also, people here will try to help you when you run into something blocking you.
Caffe2, on the other hand, is no longer supported.
Best regards
Thomas