Porting saved neural nets from C++ (GPU libtorch) to other platforms

So for reasons that I shall not discuss here in detail (e.g. better multi-node support on supercomputers), I adopted the C++ frontend for actually training a neural network, using the GPU-enabled libtorch distribution. (I cannot port this program to Python because too much code is involved.)

Now, after training a network in pure C++, where the network class inherits from torch::nn::Module, I saved it using torch::save. I have been able to load the saved module again without problems in the same C++ program.

My problem is that as soon as I try to run the saved module elsewhere, I run into trouble. For example, loading the net into a CPU-only C++ libtorch build causes errors. The same happens when loading it into Python (with or without GPU support). My end goal is running inference with the neural net on machines without GPU support.

I read this:


However, I am mainly interested in starting from C++. Is what I want to do supported, or not? If yes, how? (I could of course manually save all weights to a file and load them again on the other platform, but obviously I would rather not go that route.)