A simple PyTorch C++ tester has linker errors

My setup is:
OS - Microsoft Windows 10 Enterprise 2016 LTSB
GPU - Quadro M2000M
CUDA versions - 9.0, 10.0 & 11.0
Visual Studio - 2017, 2019
Python - 3.6.8

I’m working with Segnet model that was downloaded from here:

I wanted to export it from Python to C++ interface.

I learned from this link how to do this:

I added the required Python commands for generating the segnet.pt file.
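For reference, this is a minimal sketch of the export step (the model class here is a hypothetical stand-in for the actual SegNet module, and the input shape is a placeholder):

```python
import torch

# Hypothetical stand-in for the real SegNet: in my setup the actual
# model is the nn.Module from the downloaded repository.
class TinySegNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinySegNet().eval()

# Trace with a dummy input and serialize to TorchScript for the C++ side.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("segnet.pt")
```

The resulting segnet.pt is what I load from C++ with torch::jit::load.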

While I’m using the PyTorch 1.1.0 pre-built binaries that I installed with pip3, everything works on both the Python and C++ sides.
This is the version:


When I try to work with PyTorch 1.5.0, the Python side works OK, meaning I successfully generate the *.pt file, but the C++ part has linker errors.
This is the version:

pip3 install torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

I know that this version requires CUDA 10.1 and I have only 10.0, but to my understanding that isn’t the root cause of the problems described in this topic (please correct me if I’m wrong).
The version was installed successfully, and on the Python side:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
returns "cuda".

I know that the PyTorch library structure and names were changed, and I updated my C++ project configuration accordingly.

Despite the fact that I added torch.lib as an input to the linker, I’m getting a lot of linker errors (see below).
I even tried adding torch_cuda.lib, but it didn’t help.
Only when I added torch_cpu.lib did all the linker errors disappear and the *.exe build succeed.

But at run time, when I call the function torch::jit::load, the following message is raised:

*Cannot initialize CUDA without ATen_cuda library. PyTorch splits its backend into two shared libraries: a CPU library and a CUDA library; this error has occurred because you are trying to use some CUDA functionality, but the CUDA library has not been loaded by the dynamic linker for some reason. The CUDA library MUST be loaded, EVEN IF you don’t directly use any symbols from the CUDA library! One common culprit is a lack of -Wl,--no-as-needed in your link arguments; many dynamic linkers will delete dynamic library dependencies if you don’t depend on any of their symbols. You can check if this has occurred by using ldd on your binary to see if there is a dependency on _cuda.so library. (initCUDA at C:\w\b\windows\pytorch\aten\src\ATen/detail/CUDAHooksInterface.h:63)
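From the error text I understand that torch_cuda.lib may be dropped by the linker because no symbols from it are referenced directly. My assumption (based on the message above, not something I have confirmed) is that on Windows one can force the linker to keep the CUDA library by adding an /INCLUDE override for a torch_cuda symbol in Linker → Command Line → Additional Options, along these lines:

```
/INCLUDE:?warp_size@cuda@at@@YAHXZ
```

Is this the intended workaround for 1.5.0, or is there a cleaner way?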

These are the libraries that I added as input to my linker configuration for version 1.5.0:

  • torch.lib

  • torch_cpu.lib

  • c10.lib

  • c10_cuda.lib

  • cudart_static.lib
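As an alternative to maintaining this library list by hand, I understand that a CMake project using find_package(Torch) should generate the correct link line for 1.5.0; a sketch of what I mean (project and file names are placeholders, and CMAKE_PREFIX_PATH must point at the libtorch install):

```
cmake_minimum_required(VERSION 3.12)
project(segnet_runner)

# Resolves the libtorch libraries and the required linker flags.
find_package(Torch REQUIRED)

add_executable(segnet_runner main.cpp)
target_link_libraries(segnet_runner "${TORCH_LIBRARIES}")
set_property(TARGET segnet_runner PROPERTY CXX_STANDARD 14)
```

Would this be the recommended route instead of hand-edited Visual Studio linker inputs?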

These are the libraries that I added as input to my linker configuration for version 1.1.0 (without any linker errors and/or run-time exceptions):

  • caffe2.lib

  • torch.lib

  • c10_cuda.lib

  • c10.lib

  • cudart_static.lib

I can share my Python and C++ code if that helps reproduce the problem.

Please advise how I can solve the linker and run-time errors for version 1.5.0.

Linker errors examples:
error LNK2001: unresolved external symbol "__declspec(dllimport) public: __cdecl c10::Error::Error(struct c10::SourceLocation,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)"

error LNK2001: unresolved external symbol "public: virtual bool __cdecl torch::autograd::AutogradMeta::requires_grad(void)const"

error LNK2001: unresolved external symbol "__declspec(dllimport) public: __cdecl at::Tensor::Tensor(void)"

error LNK2001: unresolved external symbol "__declspec(dllimport) public: __cdecl c10::Storage::Storage(class caffe2::TypeMeta,__int64,class c10::DataPtr,struct c10::Allocator *,bool)"

and so on…