Broken API in libtorch 1.2

Hi,
My old code, which worked perfectly:
std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(s_model_name);
fails with libtorch 1.2. If you upgrade, you have to change the code to:
torch::jit::script::Module module = torch::jit::load(s_model_name, torch::kCUDA);

You also have to change this:
torch::Tensor out_tensor = module->forward({input_tensor}).toTensor();

To this:
torch::Tensor out_tensor = module.forward({input_tensor}).toTensor();
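
For anyone else hitting this, here is a minimal before/after sketch of the change (the model path and input shape are placeholders, not from my actual code):

#include <torch/script.h>
#include <string>

int main() {
  const std::string s_model_name = "model.pt";                  // placeholder path
  torch::Tensor input_tensor = torch::ones({1, 3, 224, 224});   // placeholder input

  // libtorch <= 1.1: load() returned a shared_ptr, so forward() was called with ->
  // std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(s_model_name);
  // torch::Tensor out_tensor = module->forward({input_tensor}).toTensor();

  // libtorch >= 1.2: load() returns script::Module by value, so forward() uses .
  torch::jit::script::Module module = torch::jit::load(s_model_name);
  torch::Tensor out_tensor = module.forward({input_tensor}).toTensor();
  return 0;
}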

It seems that at FB, QA exists only for PyTorch, not for libtorch.


The documentation of the PyTorch C++ API states:

At the moment, the C++ API should be considered “beta” stability; we may make major breaking changes to the backend in order to improve the API, or in service of providing the Python interface to PyTorch, which is our most stable and best supported interface.

Guess that applies here…

Yes, the PyTorch C++ API is still considered “beta” stability. Specifically, torch::jit::load(s_model_name) now returns torch::jit::script::Module instead of std::shared_ptr<torch::jit::script::Module>, because torch::jit::script::Module is now a reference type. This change is documented in the release notes (“[jit] script::Module is now a reference type” in https://github.com/pytorch/pytorch/releases/tag/v1.2.0).
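
Since script::Module is now a reference type, copies behave like handles to the same underlying module, so it is cheap to pass around by value. A rough sketch of what that looks like (the model path and input shape here are placeholders):

#include <torch/script.h>
#include <iostream>

// script::Module is a reference type in 1.2+, so passing it by value only
// copies a handle; the underlying module data is shared, not duplicated.
torch::Tensor run(torch::jit::script::Module module, torch::Tensor input) {
  return module.forward({input}).toTensor();
}

int main() {
  torch::jit::script::Module module = torch::jit::load("model.pt");  // placeholder path
  torch::Tensor out = run(module, torch::ones({1, 3, 224, 224}));
  std::cout << out.sizes() << std::endl;
  return 0;
}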