Amazon Elastic Inference supports PyTorch, currently via a modified build of 1.3.1. However, the JIT it ships appears to be quite old: it reports that it requires version 1 TorchScript models. The models I'm able to create locally are version 3, regardless of which PyTorch version I use, and calling script() or trace() directly on the AWS instance doesn't work either.
Ideally, I'd be able to convert my local models to version 1 TorchScript models so I could copy them to my AWS server and run inference with them. Does anybody know how to do that?
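For anyone hitting the same mismatch, here's how I'm checking the serialization format version of a saved model: a TorchScript `.pt` archive is just a zip, and the format version is stored in a `version` file inside it. A minimal sketch (the tiny `nn.Linear` model is only a placeholder for my real model):

```python
import zipfile

import torch
import torch.nn as nn

# Trace and save a trivial model, mirroring the local workflow above.
model = nn.Linear(4, 2).eval()
traced = torch.jit.trace(model, torch.randn(1, 4))
traced.save("model.pt")

# The archive contains a "<archive_name>/version" entry holding the
# TorchScript serialization format version as plain text.
with zipfile.ZipFile("model.pt") as zf:
    entry = next(n for n in zf.namelist() if n.endswith("/version"))
    version = zf.read(entry).decode().strip()

print(version)
```

On my machine this prints 3 (or higher on newer PyTorch releases), which is what the Elastic Inference runtime rejects.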