Load a model in C++

I’m working on integrating a PyTorch model into a C++ application using LibTorch. I have a class PositionEvaluation_Board_Game that inherits from torch::nn::Module. I’ve defined the constructor and forward methods, and I’ve overloaded the function call operator to use this class as a model.

I’m trying to add a method that loads a model from a file path to initialize an object of this class, similar to how I’ve done it in Python. However, I’m encountering issues with loading the model and assigning it to the current instance of the class.

Here’s the method I’m trying to implement in C++:

void PositionEvaluation_Board_Game::load_model(const std::string& model_path) {
    if (!std::ifstream(model_path).good()) {
        throw std::runtime_error("Model file does not exist: " + model_path);
    }
    try {
        auto module = torch::jit::load(model_path);
        *this = std::move(module);  // <-- this assignment does not compile
        std::cout << "Model loaded successfully from: " << model_path << std::endl;
    } catch (const c10::Error& e) {
        std::cerr << "Failed to load model: " << e.what() << std::endl;
        throw;
    }
}
Issues:

  1. Compilation Error: I receive errors indicating that *this = std::move(module); is not valid because there is no overloaded assignment operator that can assign a torch::jit::Module to PositionEvaluation_Board_Game.

  2. Conceptual Error: I’m unclear on the correct way to load and use a trained PyTorch model in C++. Specifically, I’m confused about how to integrate the loaded torch::jit::Module with my custom module class that extends torch::nn::Module.

Questions:

  1. How do I correctly load a TorchScript model in C++ and assign it to an instance of a class derived from torch::nn::Module?

  2. Is there a straightforward way to use this loaded model for inference in C++?

(This is how I implemented it in Python.)

def load_model(self, model_path):
    """
    Loads the model weights from a given file path.

    Args:
        model_path (str): Path to the PyTorch model file.
    """
    if os.path.exists(model_path):
        self.model_state = torch.load(model_path, weights_only=True)
        print("Model loaded successfully from:", model_path)
        # Assuming 'self.model' is the attribute where the actual PyTorch model is stored
        # If your model architecture is defined within this class, apply the state like below
        # self.model.load_state_dict(self.model_state)
    else:
        raise FileNotFoundError("Computer's model does not exist at the specified path.")

Thank you!!

Hi Jonatan,
actually it is not possible to generate a torch::nn::Module from the torch::jit::script::Module that you get from torch::jit::load(). The better way here is to hold the script::Module as a member of your class.
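For example, here is a minimal sketch of that member-based approach. The load_model body mirrors yours; only the assignment target changes, and the member name module_ is just an illustration:

#include <torch/script.h>
#include <fstream>
#include <stdexcept>
#include <string>

class PositionEvaluation_Board_Game {
public:
    void load_model(const std::string& model_path) {
        if (!std::ifstream(model_path).good()) {
            throw std::runtime_error("Model file does not exist: " + model_path);
        }
        // Assign the loaded TorchScript module to the member, not to *this.
        module_ = torch::jit::load(model_path);
    }

private:
    torch::jit::script::Module module_;  // holds the loaded model
};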
For inference you need to convert your tensor to a vector of IValue:
torch::jit::script::Module my_module = torch::jit::load(…);
torch::Tensor input_data = …;
std::vector<torch::jit::IValue> v_input;
v_input.push_back(input_data);
torch::Tensor output = my_module.forward(v_input).toTensor();
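As a side note that goes a bit beyond your question: for pure inference you usually also put the module into eval mode and disable gradient tracking, e.g.:

my_module.eval();            // turn off training-only behavior such as dropout
torch::NoGradGuard no_grad;  // skips autograd bookkeeping during inference
// ... then call my_module.forward(v_input).toTensor() as above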

Best regards
Chris

Thank you very much!!!
It worked