RuntimeError: runtime error in libtorch about torch::jit::load

Hi, All

I have received a runtime error in C++ with libtorch, related to torch::jit:
terminate called after throwing an instance of 'c10::Error'
what(): torch::jit::load() received a file from torch.save(), but torch::jit::load() can only load files produced by torch.jit.save()
Exception raised from load at /pytorch/torch/csrc/jit/serialization/import.cpp:347 (most recent call first):

But in the C++ code I have only used torch::load and torch::save; I did not use torch::jit::load().
Any suggestions on how to fix this issue, please?

Thanks,

The error message seems misleading in this case. Could you post a minimal code snippet to reproduce this issue?

Hi, ptrblck

Thanks in advance

This is the minimal code snippet that triggers the error:
int main() {
  CustArgsSSL args1;

  auto newgan = ppGAN(MnistLabel(10), MnistUnlabel(), MnistTest(), args1);
}

struct ppGANImpl : torch::nn::Module {

  ppGANImpl(torch::data::datasets::MNIST dataset1,
            torch::data::datasets::MNIST dataset2,
            torch::data::datasets::MNIST dataset3,
            CustArgsSSL& args)
      : discriminator(register_module("discriminator", Discriminator1())),
        doptim(torch::optim::Adam(
            discriminator->parameters(),
            torch::optim::AdamOptions(args.lr_default).betas({args.momentum, 0.999}))),
        generator(register_module("generator", Generator1(100, 28 * 28))),
        goptim(torch::optim::Adam(
            generator->parameters(),
            torch::optim::AdamOptions(args.lr_default).betas({args.momentum, 0.999}))) {
    if (!args.savedir.empty()) {
      torch::load(generator, "/path/G.pkl");
      torch::load(discriminator, "/path/D.pkl");
    } else {
      torch::save(generator, "/path/G.pkl");
      torch::save(discriminator, "/path/D.pkl");
    }
  }
  // rest of the struct omitted in the post
};

And this is the debug information:

terminate called after throwing an instance of 'c10::Error'
what(): torch::jit::load() received a file from torch.save(), but torch::jit::load() can only load files produced by torch.jit.save()
Exception raised from load at /pytorch/torch/csrc/jit/serialization/import.cpp:347 (most recent call first):

template <typename Value, typename... LoadFromArgs>
void load(Value& value, LoadFromArgs&&... args) {
  serialize::InputArchive archive;
  archive.load_from(std::forward<LoadFromArgs>(args)...);
  archive >> value;
}
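
For reference, this template only builds a serialize::InputArchive, which reads the archive format written by torch::save, not a Python torch.save pickle. If I understand the API correctly, a minimal round trip through that format with a plain tensor (the file name is just an example) would look like this:

#include <torch/torch.h>

int main() {
  // write a tensor through the archive format used by torch::save ...
  torch::Tensor t = torch::rand({3, 3});
  torch::save(t, "tensor.pt");

  // ... and read it back through serialize::InputArchive via torch::load
  torch::Tensor t2;
  torch::load(t2, "tensor.pt");
}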

Thanks for the update. I don’t know how your models are defined, but this minimal code snippet works fine:

#include <torch/torch.h>
#include <iostream>

struct NetImpl : torch::nn::Module {
  NetImpl(int64_t N, int64_t M)
      : linear(register_module("linear", torch::nn::Linear(N, M))) {
    another_bias = register_parameter("b", torch::randn(M));
  }
  torch::Tensor forward(torch::Tensor input) {
    return linear(input) + another_bias;
  }
  torch::nn::Linear linear;
  torch::Tensor another_bias;
};
TORCH_MODULE(Net);


int main() {
  auto net = Net(4, 5);
  for (const auto& p : net->parameters()) {
    std::cout << p << std::endl;
  }

  auto net2 = Net(4, 5);

  torch::save(net, "tmp.pt");
  torch::load(net2, "tmp.pt");
  for (const auto& p : net2->parameters()) {
    std::cout << p << std::endl;
  }
}

and I can properly save and load the model.
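
Just for completeness: torch::jit::load is the right call only if the file was created in Python via torch.jit.save on a scripted or traced module. A minimal sketch, assuming a hypothetical traced_model.pt exported that way:

#include <torch/script.h>
#include <iostream>

int main() {
  torch::jit::script::Module module;
  try {
    // "traced_model.pt" is a placeholder for an archive written by torch.jit.save in Python
    module = torch::jit::load("traced_model.pt");
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model: " << e.what() << std::endl;
    return -1;
  }
  std::cout << "TorchScript module loaded" << std::endl;
}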

Thank you very much ptrblck for the working example.

I double-checked it and found that it is because I tried to load the Python “.pkl” file. It seems to me that in C++ only “.pt” files can be loaded and saved.

Thanks again

Did you get the error just by changing the file extension? I.e., is my code snippet also failing if you use .pkl?
Pickle files are serialized Python objects, but I’m unsure if PyTorch checks the file extension or not.
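
If you want to check quickly, a small sketch (the file name is just an example) that saves under a .pkl name and loads it back should work, since only the archive content should matter:

#include <torch/torch.h>
#include <iostream>

int main() {
  auto net = torch::nn::Linear(4, 5);
  // the extension is arbitrary here; the content is still a torch::save archive
  torch::save(net, "tmp.pkl");

  auto net2 = torch::nn::Linear(4, 5);
  try {
    torch::load(net2, "tmp.pkl");
    std::cout << "loaded fine despite the .pkl extension" << std::endl;
  } catch (const c10::Error& e) {
    std::cerr << e.what() << std::endl;
  }
}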

Actually, your code works fine even if I change the file extension.

So it is probably due to some other reason …