Training deserialization in C++

Hi all,

I am training an LNN with PyTorch 1.1.0 and using the trained model to perform inference in C++.
Due to various compatibility issues with CERN ROOT (long story), I cannot use the C++ Torch libraries, so I re-implemented the entire forward pass in C++ using matrix algebra.

In order to do so, I wrote a Python script to convert the trained .pt file into three separate CSV files (weights, biases, norm) to be fed to the C++ code. This works fine, but it is obviously very annoying.
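For reference, the conversion boils down to something like this (a simplified sketch; the file names are illustrative and the real script groups the parameters into the three files mentioned above):

```python
import torch
import numpy as np

# Load the saved state dict on the CPU (assumes model.pt holds a plain state dict).
state = torch.load("model.pt", map_location="cpu")

for name, tensor in state.items():
    arr = tensor.numpy()
    # atleast_2d turns bias/norm vectors into single-row arrays while leaving
    # weight matrices untouched (assumes 1-D/2-D parameters, i.e. a fully
    # connected net), so savetxt can write everything uniformly.
    np.savetxt(name + ".csv", np.atleast_2d(arr), delimiter=",")
```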

I want to directly load and deserialize the PyTorch .pt model in C++ without using any torch-related library. I had a look at the source code for the load() function and it looks like it is based on the pickle module, but I failed to load the .pt file using pickle alone.

Could anyone help me understand how the deserialization process is performed in pytorch?

The root of the annoyance is not using libtorch. :slight_smile:
I’m assuming here you want state dicts (rather than, say, JIT models).
Now, PyTorch’s save function is implemented in torch/serialization.py, and for the storages (a Tensor doesn’t store its own memory, but delegates to a TensorStorage), the raw data is deserialized by the code in torch/csrc/generic/serialization.cpp.
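For what it’s worth, the legacy format that 1.1.0 writes is a sequence of pickled records (a magic number, the protocol version, some system info, the object itself, then the list of storage keys), followed by the raw storage bytes. The tensors inside the pickle reference their storages only through persistent ids, which is why a plain pickle.load doesn’t get you far. A rough, untested sketch of pulling out the metadata without importing torch (the class stubbing is an assumption on my part):

```python
import pickle

class NoTorchUnpickler(pickle.Unpickler):
    """Read the object record of a legacy .pt file without importing torch."""

    def persistent_load(self, saved_id):
        # torch.save emits ('storage', storage_type, key, location, numel, ...)
        # for every storage; keep the reference instead of loading real data.
        return ("storage_ref",) + tuple(saved_id[1:])

    def find_class(self, module, name):
        # The pickle references torch classes/functions such as
        # torch._utils._rebuild_tensor_v2; stub them out so the
        # reconstruction calls just record their arguments.
        if module.startswith("torch"):
            return lambda *args: ("torch_stub", module, name, args)
        return super().find_class(module, name)

with open("model.pt", "rb") as f:
    magic = pickle.load(f)              # magic number record
    protocol = pickle.load(f)           # serialization protocol version
    sys_info = pickle.load(f)           # sizes of C types, endianness, ...
    state = NoTorchUnpickler(f).load()  # the state dict, storages stubbed
    storage_keys = pickle.load(f)       # order of the raw blobs that follow
```

If I read serialization.cpp right, each raw storage blob that follows is written as an int64 element count followed by the flat data, so the C++ side could in principle read the numbers directly once it knows which key belongs to which parameter.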

Maybe the quickest way is to look beyond pure PyTorch and store the weights in some more common format, e.g. NumPy’s .npy/.npz (I don’t know whether there is good C++ support for those) or HDF5.
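For HDF5, for instance, the Python side is only a few lines (a sketch, assuming a plain state dict and h5py; on the C++ side the HDF5 C/C++ API can read the datasets back):

```python
import torch
import h5py

state = torch.load("model.pt", map_location="cpu")

with h5py.File("model.h5", "w") as f:
    for name, tensor in state.items():
        # One dataset per parameter, e.g. "fc1.weight", "fc1.bias", ...
        f.create_dataset(name, data=tensor.numpy())
```

The NumPy route would be similar with numpy.savez("model.npz", **{k: v.numpy() for k, v in state.items()}).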

Best regards

Thomas

Hi @tom, thanks for the reply.

Storing the info in another format is more or less my current solution, but I wanted to avoid having two copies of each training file.
I’ll have a look at the code you pointed to; maybe I can come up with a solution.

Best,
Gabriele

@Anthair Do you mind sharing the compatibility issues you encountered with CERN ROOT? We might be able to find a solution for them.