Libtorch C++ static linking requires aten::mul operator registration

This problem is also mentioned in https://github.com/pytorch/pytorch/issues/14367.

Due to a limitation in my build environment, the -Wl,--whole-archive linker option cannot be used. Without it, the generated executable fails to run with the error "DeviceGuardImpl for cpu is not available".
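
For reference, the whole-archive link line that I cannot use would look roughly like this (a sketch only; the exact archive names, paths, and extra flags are assumptions and depend on how libtorch was built):

```
g++ example-app.o \
    -Wl,--whole-archive libtorch.a -Wl,--no-whole-archive \
    libc10.a -lpthread -o example-app
```

With --whole-archive, the linker keeps every object file in the archive, so all the static initializers that register device guards and operators actually run.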

I added the two lines marked below to register the CPU device guard. That resolves the first error, but a new one appears: aten::mul is missing. After checking the source code under jit/ and aten/, I could not find how to register all the operators.

Could someone please provide some standard initialization code for static library users?

```cpp
#include <torch/script.h> // One-stop header.
#include <ATen/detail/CPUGuardImpl.h> // ADDED: first line

#include <cassert>
#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  C10_REGISTER_GUARD_IMPL(CPU, at::detail::CPUGuardImpl); // ADDED: second line

  // Deserialize the ScriptModule from a file using torch::jit::load().
  std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);

  assert(module != nullptr);
  std::cout << "ok\n";
}
```

Below is the error message:

```
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
  what():
unknown builtin op: aten::mul
Could not find any similar ops to aten::mul. This op may not exist or may not be currently supported in TorchScript
:

def mul(a : float, b : Tensor) -> Tensor:
  return b * a
         ~~~~~ <--- HERE
def add(a : float, b : Tensor) -> Tensor:
  return b + a
def ne(a : float, b : Tensor) -> Tensor:
  return b != a
def eq(a : float, b : Tensor) -> Tensor:
  return b == a
def lt(a : float, b : Tensor) -> Tensor:
  return b > a
def le(a : float, b : Tensor) -> Tensor:
Aborted
```

Does anybody know? Please help.

I have the same issue: Unknown builtin op: aten::mul

I created an issue: https://github.com/pytorch/pytorch/issues/27726