Is it possible to deploy a PyTorch C++ frontend model onto a target board?

I have a model built with the PyTorch C++ frontend (LibTorch). After training it and saving the model parameters, can I deploy it onto a target board (e.g. NVIDIA Orin) directly, just by cross-compiling the C++ code? As far as I know, a Python PyTorch model is usually exported to an ONNX file, possibly converted further into a TensorRT engine file, and then the TensorRT runtime is used for inference on the board in C++. I'm wondering: since my model is already written in C++, can I use that same C++ code for on-board deployment, without the model export, serialization, and deserialization steps?
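
For concreteness, here is a rough sketch of the kind of deployment I have in mind, assuming LibTorch is available on the board and the module definition (a made-up `Net` here, with an illustrative parameter file name) is compiled into both the host-side training binary and the cross-compiled inference binary:

```cpp
// Sketch only: module and file names are illustrative, not from a real project.
#include <torch/torch.h>
#include <iostream>

// The same module definition is compiled on the host (for training)
// and cross-compiled for the board (for inference).
struct NetImpl : torch::nn::Module {
  NetImpl() {
    fc1 = register_module("fc1", torch::nn::Linear(784, 128));
    fc2 = register_module("fc2", torch::nn::Linear(128, 10));
  }

  torch::Tensor forward(torch::Tensor x) {
    x = torch::relu(fc1->forward(x));
    return fc2->forward(x);
  }

  torch::nn::Linear fc1{nullptr}, fc2{nullptr};
};
TORCH_MODULE(Net);

int main() {
  Net net;
  // On the board: load the parameters saved after training on the host
  // (assumed to have been written with torch::save(net, "net_params.pt")).
  torch::load(net, "net_params.pt");
  net->eval();
  torch::NoGradGuard no_grad;

  auto input = torch::randn({1, 784});  // placeholder for real sensor/input data
  auto output = net->forward(input);
  std::cout << output << std::endl;
  return 0;
}
```

In other words, only the parameter file would cross the host/board boundary, while the model structure lives in the C++ source that gets cross-compiled, so no ONNX or TensorRT conversion step would be involved.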