Load the model and tensor in C++ (libtorch) for inference

I’m new to the C++ frontend. Here is my current code:

#include <torch/script.h>

#include <iostream>
#include <memory>
#include <vector>

int main(int argc, const char* argv[]) {
  torch::jit::script::Module module;
  try {
    module = torch::jit::load(argv[1]);
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  std::vector<torch::Tensor> sample_input;
  torch::load(sample_input, argv[2]);
  std::cout << "Loaded Successfully\n";

  std::vector<torch::jit::IValue> inputs;

  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output << '\n';
}

I want to run the executable as

./example-app ../model.pt ../sample_input.pth

The code doesn’t compile with make.
In the documentation I saw usage of torch::load, but during make I get the error ‘load’ is not a member of ‘torch’.
Also, I haven’t managed to append the tensor to inputs, and I’m not sure what IValue is.

Could you please provide the fix?

You probably need to #include <torch/all.h> for it to pick up torch::load.

This comment has more info about the save/load APIs.
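Putting it together, a minimal sketch of the fixed program might look like the following. This is untested and makes two assumptions: that sample_input.pth was saved in a format torch::load can read into a std::vector of tensors, and that the model takes that single tensor as its only input.

```cpp
#include <torch/script.h>
#include <torch/all.h>  // pulls in the serialize API that declares torch::load

#include <iostream>
#include <vector>

int main(int argc, const char* argv[]) {
  if (argc != 3) {
    std::cerr << "usage: example-app <model.pt> <input.pth>\n";
    return -1;
  }

  torch::jit::script::Module module;
  try {
    module = torch::jit::load(argv[1]);
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  // torch::load deserializes into the vector of tensors.
  std::vector<torch::Tensor> sample_input;
  torch::load(sample_input, argv[2]);

  // IValue is libtorch's tagged "any" container for TorchScript values
  // (tensors, lists, tuples, ...). A torch::Tensor converts to it
  // implicitly, so a plain push_back is enough to append the input.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(sample_input[0]);

  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output << '\n';
  return 0;
}
```

The forward call was failing before only because inputs was empty; once the tensor is boxed into the IValue vector, module.forward(inputs) should run.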
