Linking static libtorch libraries

I built static libtorch from source on Windows successfully.
But now when I compile my application, I must explicitly link against the onnx, onnx_proto, caffe2_protos, mkldnn, mkl, and protobuf libraries in my compile options. This is inconvenient, but it works. For the fbgemm library, however, this doesn't work. I get link errors:
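
In CMake, the extra libraries can be listed on the target explicitly. A sketch, assuming the target is called `app` and the archives are on the linker search path (the exact library names and paths depend on your build tree):

```cmake
# Hypothetical setup: link the static libtorch archives plus the
# extra dependencies mentioned above. Adjust names/paths to match
# your own static build output.
target_link_libraries(app PRIVATE
  torch torch_cpu c10
  onnx onnx_proto caffe2_protos
  mkldnn mkl_core protobuf)
```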

torch_cpu.lib(Context.cpp.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) bool __cdecl fbgemm::fbgemmSupportedCPU(void)" (__imp_?fbgemmSupportedCPU@fbgemm@@YA_NXZ) referenced in function "public: class std::vector<enum c10::QEngine,class std::allocator<enum c10::QEngine> > __cdecl <lambda_8dd2a8e98f77cf43df1066af8f7a0ec7>::operator()(void)const " (??R<lambda_8dd2a8e98f77cf43df1066af8f7a0ec7>@@QEBA?AV?$vector@W4QEngine@c10@@V?$allocator@W4QEngine@c10@@@std@@@std@@XZ) [D:\reps\libtorch_test\deploy\example-app.vcxproj]
torch_cpu.lib(Quantizer.cpp.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) void __cdecl fbgemm::fbgemmPartition1D(int,int,int,int &,int &)" (__imp_?fbgemmPartition1D@fbgemm@@YAXHHHAEAH0@Z) referenced in function "void __cdecl fbgemm::Dequantize<signed char>(signed char const *,float *,int,struct fbgemm::TensorQuantizationParams const &,int,int)" (??$Dequantize@C@fbgemm@@YAXPEBCPEAMHAEBUTensorQuantizationParams@0@HH@Z) [D:\reps\libtorch_test\deploy\example-app.vcxproj]
torch_cpu.lib(Quantizer.cpp.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) void __cdecl fbgemm::Quantize<signed char>(float const *,signed char *,int,struct fbgemm::TensorQuantizationParams const &,int,int)" (__imp_??$Quantize@C@fbgemm@@YAXPEBMPEACHAEBUTensorQuantizationParams@0@HH@Z) referenced in function "void __cdecl at::parallel_for$omp$1<class <lambda_11019aac2f2763b0156a5bd7dbaca06d> >(__int64,__int64,__int64,class <lambda_11019aac2f2763b0156a5bd7dbaca06d> const &)" (??$parallel_for$omp$1@V<lambda_11019aac2f2763b0156a5bd7dbaca06d>@@@at@@YAX_J00AEBV<lambda_11019aac2f2763b0156a5bd7dbaca06d>@@@Z) [D:\reps\libtorch_test\deploy\example-app.vcxproj]
torch_cpu.lib(Quantizer.cpp.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) void __cdecl fbgemm::Quantize<unsigned char>(float const *,unsigned char *,int,struct fbgemm::TensorQuantizationParams const &,int,int)" (__imp_??$Quantize@E@fbgemm@@YAXPEBMPEAEHAEBUTensorQuantizationParams@0@HH@Z) referenced in function "void __cdecl at::parallel_for$omp$1<class <lambda_7b353a87181e064310fa419329247f98> >(__int64,__int64,__int64,class <lambda_7b353a87181e064310fa419329247f98> const &)" (??$parallel_for$omp$1@V<lambda_7b353a87181e064310fa419329247f98>@@@at@@YAX_J00AEBV<lambda_7b353a87181e064310fa419329247f98>@@@Z) [D:\reps\libtorch_test\deploy\example-app.vcxproj]
torch_cpu.lib(Quantizer.cpp.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) void __cdecl fbgemm::Quantize<int>(float const *,int *,int,struct fbgemm::TensorQuantizationParams const &,int,int)" (__imp_??$Quantize@H@fbgemm@@YAXPEBMPEAHHAEBUTensorQuantizationParams@0@HH@Z) referenced in function "void __cdecl at::parallel_for$omp$1<class <lambda_ac0c20011d29a38651cff09b26e44b16> >(__int64,__int64,__int64,class <lambda_ac0c20011d29a38651cff09b26e44b16> const &)" (??$parallel_for$omp$1@V<lambda_ac0c20011d29a38651cff09b26e44b16>@@@at@@YAX_J00AEBV<lambda_ac0c20011d29a38651cff09b26e44b16>@@@Z) [D:\reps\libtorch_test\deploy\example-app.vcxproj]
torch_cpu.lib(THBlas.cpp.obj) : error LNK2019: unresolved external symbol "__declspec(dllimport) void __cdecl fbgemm::cblas_gemm_i64_i64acc(enum fbgemm::matrix_op_t,enum fbgemm::matrix_op_t,int,int,int,__int64 const *,int,__int64 const *,int,bool,__int64 *,int)" (__imp_?cblas_gemm_i64_i64acc@fbgemm@@YAXW4matrix_op_t@1@0HHHPEB_JH1H_NPEA_JH@Z) referenced in function THLongBlas_gemm [D:\reps\libtorch_test\deploy\example-app.vcxproj]
D:\reps\libtorch_test\deploy\Release\example-app.exe : fatal error LNK1120: 6 unresolved externals [D:\reps\libtorch_test\deploy\example-app.vcxproj]

This means that libtorch wants to link against fbgemm as a dynamic library (the symbols are decorated with __declspec(dllimport)), but fbgemm was built as a static library, without __declspec(dllexport).


You'll need to add -DFBGEMM_STATIC=1 to the compiler flags for your project, so that the fbgemm headers stop declaring the API as __declspec(dllimport).
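
With CMake, the define can be attached to the target. A sketch, assuming the example target from this thread is called `app`:

```cmake
# FBGEMM_STATIC makes the fbgemm headers stop declaring the API
# as __declspec(dllimport), so the symbols resolve from the static .lib.
target_compile_definitions(app PRIVATE FBGEMM_STATIC=1)
```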

This solved one problem. But now at runtime, when I call

torch::jit::script::Module module = torch::jit::load(filename);

where filename is a traced TorchScript model, I get a new error:

Unknown builtin op: aten::mul.
Could not find any similar ops to aten::mul. This op may not exist or may not be currently supported in TorchScript.
:
  File "<string>", line 3

def mul(a : float, b : Tensor) -> Tensor:
  return b * a
         ~~~~~ <--- HERE
def add(a : float, b : Tensor) -> Tensor:
  return b + a
'mul' is being compiled since it was called from 'Conv2d.forward'
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\nn\modules\conv.py(346): _conv_forward
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\nn\modules\conv.py(349): forward
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\nn\modules\module.py(534): _slow_forward
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\nn\modules\module.py(548): __call__
C:/Users/PycharmProjects/caffe2_to_pytorch/pytorch.py(58): forward
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\nn\modules\module.py(534): _slow_forward
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\nn\modules\module.py(548): __call__
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\jit\__init__.py(1027): trace_module
D:\Soft\Miniconda3\envs\caffe2_to_pytorch\lib\site-packages\torch\jit\__init__.py(875): trace
C:/Users/PycharmProjects/caffe2_to_pytorch/pytorch.py(201): <module>
C:\Program Files\JetBrains\PyCharm Community Edition 2019.2\helpers\pydev\_pydev_imps\_pydev_execfile.py(18): execfile
C:\Program Files\JetBrains\PyCharm Community Edition 2019.2\helpers\pydev\pydevd.py(1412): _exec
C:\Program Files\JetBrains\PyCharm Community Edition 2019.2\helpers\pydev\pydevd.py(1405): run
C:\Program Files\JetBrains\PyCharm Community Edition 2019.2\helpers\pydev\pydevd.py(2054): main
C:\Program Files\JetBrains\PyCharm Community Edition 2019.2\helpers\pydev\pydevd.py(2060): <module>
Serialized   File "code/__torch__/torch/nn/modules/conv.py", line 9
    x: Tensor) -> Tensor:
    _0 = self.bias
    input = torch._convolution(x, self.weight, _0, [1, 1], [0, 0], [1, 1], False, [0, 0], 1, False, False, True)
                                                                                                           ~~~~ <--- HERE
    return input

or, when the file is the same model scripted instead of traced:

Couldn't find an operator for aten::dropout(Tensor input, float p, bool train) -> Tensor. Do you have to update a set of hardcoded JIT ops? 
(lookupByLiteral at C:\...\torch\csrc\jit\runtime\operator.cpp:72)

The dynamic build works fine. With the static build, the problem is apparently that the JIT operators are registered from static initializers, and the linker discards the object files containing them because nothing references their symbols directly. Forcing the whole of torch_cpu.lib to be linked fixed it:

set_target_properties(app PROPERTIES LINK_FLAGS "-WHOLEARCHIVE:torch_cpu.lib")

It helped me.
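
An equivalent way to pass the flag is `target_link_options` (CMake 3.13+); note the MSVC spelling is `/WHOLEARCHIVE`:

```cmake
# Pull every object file from torch_cpu.lib so the static
# initializers that register the JIT ops are not discarded.
target_link_options(app PRIVATE "/WHOLEARCHIVE:torch_cpu.lib")
```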