Error in TorchScript based on PyTorch tutorial

I’m new to PyTorch and its C++ API. I followed this tutorial to learn how to load and run a .pt file in C++.

Unfortunately, while the code builds correctly and loads the .pt file without errors, the following error is raised when I try to execute something from the saved module:


terminate called after throwing an instance of 'std::runtime_error'
  what():  The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__.py", line 11, in forward
    if bool(torch.gt(torch.sum(input), 0)):
      weight = self.weight
      output = torch.mv(weight, input)
               ~~~~~~~~ <--- HERE
    else:
      weight0 = self.weight

Traceback of TorchScript, original code (most recent call last):
  File "torchScript.py", line 10, in forward
    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
                     ~~~~~~~~~~~~~~ <--- HERE
        else:
            output = self.weight + input
RuntimeError: vector + matrix @ vector expected, got 1, 2, 4

Aborted (core dumped)

How can I fix it?
Thank you in advance.

Did you just rerun the entire tutorial or did you change the inputs to another structure?

@ptrblck thank you for your answer. I followed the tutorial without any changes. I converted the model to TorchScript via annotation (scripting), not via tracing.
My TorchScript export script:

import torch

class MyModule(torch.nn.Module):
    def __init__(self, N, M):
        super(MyModule, self).__init__()
        self.weight = torch.nn.Parameter(torch.rand(N, M))

    def forward(self, input):
        if input.sum() > 0:
            output = self.weight.mv(input)
        else:
            output = self.weight + input
        return output

my_module = MyModule(10, 20)
sm = torch.jit.script(my_module)  # convert via scripting (annotation), not tracing

sm.save("my_module_model.pt")

My C++ code:

#include <torch/script.h> // One-stop header.

#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }


  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  // Create a vector of inputs.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Execute the model and turn its output into a tensor.
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';

  std::cout << "ok\n";
}
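For context on the error itself: this may simply be a shape mismatch rather than an interpreter bug. `MyModule(10, 20)` stores a weight of shape `(10, 20)`, and `torch.mv` requires a 1-D input of length 20, while the C++ snippet feeds `torch::ones({1, 3, 224, 224})` (the input shape used for the ResNet example in the loading tutorial). A minimal, torch-free Python sketch of the dimension rule; the `mv()` helper here is a hypothetical stand-in that only mimics `torch.mv`'s shape check:

```python
# Torch-free sketch of the dimension rule behind torch.mv: an (n, m)
# matrix can only multiply a 1-D vector of length m. MyModule(10, 20)
# holds a (10, 20) weight, so forward() expects a length-20 vector,
# while the C++ snippet passes a tensor of shape (1, 3, 224, 224).
# The mv() helper is a hypothetical stand-in for torch.mv.

def mv(matrix, vector):
    """Matrix-vector product for an (n, m) matrix and a length-m vector."""
    n, m = len(matrix), len(matrix[0])
    if len(vector) != m:
        raise RuntimeError(f"mv: expected a vector of length {m}, got {len(vector)}")
    return [sum(matrix[i][j] * vector[j] for j in range(m)) for i in range(n)]

weight = [[0.5] * 20 for _ in range(10)]   # shape (10, 20), as in MyModule(10, 20)

print(len(mv(weight, [1.0] * 20)))         # a length-20 input works: prints 10

try:
    mv(weight, [1.0] * 224)                # wrong length, fails like the C++ call
except RuntimeError as e:
    print(e)
```

If that is indeed the cause, passing something like `torch::ones({20})` from the C++ side should exercise the `torch.mv` branch without triggering the error.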

My setup:
PyTorch: 1.9.0 (built from source)
CUDA: 10.2
cuDNN: 8.2.2
g++: 8.4
Python: 3.6 (Anaconda)

Thanks for raising this issue! Could you create an issue on GitHub so that we could track and fix it, please?

Yes, of course, I’ll create the issue as soon as possible.
