Hi guys,
We are testing out PyTorch 2.0 internally, and it seems that we cannot export a torch.compile-d model to TorchScript using torch.jit.trace() or torch.jit.script().
Has anyone ever tried to serve a compiled torch model?
Thanks,
Yuzhui
So, a question: why do you need to export it? Are you deploying to a mobile runtime?
If not, what’s wrong with just using Python in prod? And if that’s not OK, you can take a look at torch.export.export() in the nightlies.
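For reference, here is a minimal sketch of what that could look like. Since torch.export is only in the nightlies at this point, the exact API may still change, and MyModel plus the input shapes are just toy stand-ins:

```python
import torch

class MyModel(torch.nn.Module):  # toy model, purely for illustration
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = MyModel().eval()
example_inputs = (torch.randn(2, 16),)

# torch.export.export traces the module into a standalone ExportedProgram,
# a graph representation that no longer depends on the original Python code.
exported = torch.export.export(model, example_inputs)
print(exported)
```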
As you probably know, a web browser won’t run Python code. So if your model runs in the backend you can just use Python, unless the server is also in Node.js, in which case you have to convert it to ONNX.
If instead the model runs on the client, i.e. in a web application, there is no other way but to turn it into ONNX.
The steps are as follows.
First, export the model to ONNX with torch.onnx.export(...); a short sketch is below.
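A minimal, self-contained sketch of that export step (the toy model, file name, and input/output names are placeholders for your own):

```python
import torch

# Toy model purely for illustration; substitute your own module.
model = torch.nn.Sequential(torch.nn.Linear(16, 4), torch.nn.ReLU()).eval()
dummy_input = torch.randn(1, 16)

# The dummy input is only used to trace shapes during export.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```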
You can also further optimize inference by using the onnxruntime Python package to convert the ONNX model to the ORT format (a sketch is below). This is secondary, though.
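If you go that route, the conversion can be driven by the CLI that ships with the onnxruntime package; the module name below is from the onnxruntime tooling and is worth double-checking against your installed version:

```python
import subprocess
import sys

# Convert model.onnx to the ORT format using the converter bundled with
# the onnxruntime Python package (module name assumed; verify for your version).
subprocess.run(
    [sys.executable, "-m", "onnxruntime.tools.convert_onnx_models_to_ort", "model.onnx"],
    check=True,
)
```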
Finally, load the model in the browser with onnxruntime-web, which is in the npm registry. You may face some errors related to WASM, in which case you will find how to fix them by Googling or asking.