torch::jit::load fails with "Unknown type name" error

I am trying to implement Fast Neural Style Transfer in an iOS application. To do that, I started with an implemented version of Fast Neural Style Transfer from here and attempted to follow the PyTorch Mobile tutorial by @xta0 to run the model on my device.
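For context, the export step follows the tutorial's trace-and-optimize flow, roughly like the sketch below (the TransformerNet import, checkpoint path, and input size are placeholders rather than the exact code I ran):

    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile

    # Placeholder: the style-transfer network and checkpoint come from the repo linked above.
    from transformer_net import TransformerNet

    model = TransformerNet()
    model.load_state_dict(torch.load("style_model.pth", map_location="cpu"))
    model.eval()

    # Trace with a dummy input, then run the mobile optimization passes,
    # which fold conv weights into the XNNPACK prepacked-op contexts
    # (the Conv2dOpContext / TransposeConv2dOpContext types in the error below).
    example = torch.rand(1, 3, 256, 256)
    traced = torch.jit.trace(model, example)
    optimized = optimize_for_mobile(traced)
    optimized.save("mobile_model2.pt")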

After generating the model.pt file, I tried to load it following the tutorial code using TorchModule. However, I got this error when loading the module.

Unknown type name 'torch.torch.classes.xnnpack.TransposeConv2dOpContext':
Serialized File "code/torch/transformer/___torch_mangle_391.py", line 18
    annotations["prepack_folding._jit_pass_packed_weight_11"] = torch.torch.classes.xnnpack.Conv2dOpContext
    annotations["prepack_folding._jit_pass_packed_weight_12"] = torch.torch.classes.xnnpack.Conv2dOpContext
    annotations["prepack_folding._jit_pass_packed_weight_13"] = torch.torch.classes.xnnpack.TransposeConv2dOpContext
                                                                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    annotations["prepack_folding._jit_pass_packed_weight_14"] = torch.torch.classes.xnnpack.TransposeConv2dOpContext
    annotations["prepack_folding._jit_pass_packed_weight_15"] = torch.torch.classes.xnnpack.Conv2dOpContext
(lldb)

Within Xcode, the code fails here:

    private lazy var module: TorchModule = {
        // Verified that filePath exists.
        if let filePath = Bundle.main.path(forResource: "mobile_model2", ofType: "pt"),
            // The problem is at TorchModule init setup.
            let module = TorchModule(fileAtPath: filePath) {
            return module
        } else {
            fatalError("Can't find the model file!")
        }
    }()

The code fails specifically at torch::jit::load:

- (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
  self = [super init];
  if (self) {
    try {
      // CODE FAILS HERE, when trying to load!
      _impl = torch::jit::load(filePath.UTF8String);
      _impl.eval();
    } catch (const std::exception& exception) {
      NSLog(@"%s", exception.what());
      return nil;
    }
  }
  return self;
}

This means that the C++ load call wasn't able to load my model.pt. What could possibly have caused this?

To verify that this issue is not caused solely by my model, I also tried loading another pretrained model, resnet18, using TorchModule. And as expected, I receive errors while attempting to load this model too, although a different set of errors:

**Unknown builtin op: aten::_add_relu_.**

**Could not find any similar ops to aten::_add_relu_. This op may not exist or may not be currently supported in TorchScript.**
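In case it helps with debugging, here is a small diagnostic sketch (assuming a desktop PyTorch install that can load the optimized .pt files; the resnet18 filename is a placeholder) that lists the operators each serialized model actually needs, so they can be compared against whatever the iOS runtime has registered:

    import torch

    # Load the serialized modules on desktop, where the full set of ops is registered,
    # and dump the operator names each model requires at runtime.
    for path in ["mobile_model2.pt", "resnet18_mobile.pt"]:  # second filename is a placeholder
        module = torch.jit.load(path, map_location="cpu")
        ops = torch.jit.export_opnames(module)
        print(path, "needs", len(ops), "ops")
        for op in sorted(ops):
            print("   ", op)  # missing ops such as aten::_add_relu_ should appear here

If the models really do require the missing ops above, they should show up in these lists.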

I have the same issue with PyTorch Android 1.8.1 built from source; the model was exported with PyTorch 1.8.1. Have you solved this problem?

I think the issue is with the builds; probably some ops are not linked in. @xta0, any idea what the issue might be? @IvanKobzarev on the Android side.

@hangrymangry what's your PyTorch version? Are you using CocoaPods or building from source? @kimishpatel, TransposeConv2dOpContext is from the XNNPACK rewrite, right?

Yes, it is from XNNPACK, but the issue is likely with op registration since, as mentioned, even aten::_add_relu_ cannot be found.

Sorry to resurrect this, but is there any update on it? I built LibTorch as a static library for arm64 macOS and get the same error when I build from Xcode. Weirdly, it doesn't seem to be an issue when using the recommended CMake setup, but that isn't really viable for me. For me it errors on this block:

m_models = {
    {MODEL_TYPE::BASS, torch::jit::load(modelPath + "/Bass.pt")},
    {MODEL_TYPE::DRUMS, torch::jit::load(modelPath + "/Drum.pt")},
    {MODEL_TYPE::VOCALS, torch::jit::load(modelPath + "/Vocal.pt")},
    {MODEL_TYPE::OTHERS, torch::jit::load(modelPath + "/Other.pt")}
};

Terminal output is in the attached screenshot (2022-08-12 at 15.57.12).

Sorry, but you did not mention repro steps; can you please do so? I can try to look at it later.


My bad, I actually managed to fix it by linking with

-Wl,-force_load,/path/to/libtorch.a
-Wl,-force_load,/path/to/libtorch_cpu.a

I landed on this after googling for three hours and then just brute-forcing it, haha.