Hello. I have a relatively simple model I'm interested in using in production: a graph convolutional network (CTR-GCN) used for skeletal action recognition, with little variation in control flow (few if statements).
I'm wondering, given the two choices of:

- Using TorchScript to script the model, save a checkpoint, and then load it in C++ as in this tutorial: Loading a TorchScript Model in C++ — PyTorch Tutorials 1.11.0+cu102 documentation
- Manually porting the code to LibTorch C++ and training a new model (the training process is sufficiently fast, so this is a minor concern)
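For reference, the export side of the first option is only a few lines. Below is a minimal sketch, using a tiny stand-in `nn.Module` rather than the actual CTR-GCN (which should script the same way, since it has little data-dependent control flow); the model class and file name are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model; CTR-GCN would be scripted identically.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()

# With almost no control flow, torch.jit.trace is also an option,
# but torch.jit.script covers both cases.
scripted = torch.jit.script(model)
scripted.save("model_scripted.pt")  # load in C++ via torch::jit::load("model_scripted.pt")

# Sanity check: the scripted module matches eager execution.
x = torch.randn(2, 8)
assert torch.allclose(model(x), scripted(x))
```

On the C++ side, the saved file is loaded with `torch::jit::load`, exactly as in the linked tutorial.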
At inference time, will there be a significant performance advantage to either of these approaches? Any insights much appreciated.