I am trying to copy a Sequential object consisting of Conv2d, BatchNorm2d, and LeakyReLU modules from the CPU to the GPU by calling object->to(…) with PyTorch C++ 1.6, and I get the error message that’s in the title of this thread.
The exact same code works fine with PyTorch 1.4.
Could anyone help me figure out what the problem is?
Thanks
P.S. I am using Visual Studio 2017 and the operating system is Windows 10.
P.S.1 I tried the following code:
auto test = torch::nn::Conv2d(torch::nn::Conv2dOptions(1, 1, 1));
test->to(torch::Device(torch::kCUDA));
and I got a different message: “PyTorch is not linked with support for cuda devices”. I did link both torch_cuda.lib and c10_cuda.lib.
P.S.2. I’ve found a solution to this problem: adding /INCLUDE:?warp_size@cuda@at@@YAHXZ to the linker options. I find it a bit strange that I have to force the linker to link against a library by directly adding a symbol to the symbol table. Microsoft does describe this linker option as a useful feature for including a library object that would otherwise not be linked into the program. I guess I’ve just never had to do this until now.
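For anyone else hitting this, here is roughly how that option can be wired up. In Visual Studio 2017 it goes under Project Properties → Linker → Command Line → Additional Options; with a CMake build it can be attached to the target instead. This is only a sketch, and the target name my_app is a placeholder for your own:

```cmake
# Sketch: force MSVC's linker to pull in torch_cuda by referencing the
# MSVC-mangled symbol for at::cuda::warp_size(). "my_app" is a placeholder.
if(MSVC)
  target_link_options(my_app PRIVATE "/INCLUDE:?warp_size@cuda@at@@YAHXZ")
endif()
```

Without this, the linker can discard torch_cuda entirely because nothing in your code references it directly, which is what produces the “not linked with support for cuda devices” message at runtime.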
Thanks to this thread I successfully fixed the same error on my Windows system.
But I also have a Linux setup:
Ubuntu 18.04 - NVIDIA Jetson Xavier AGX with JetPack 4.4.
PyTorch 1.6 for ARM.
When I tried to use the same solution described above and added
/INCLUDE:?warp_size@cuda@at@@YAHXZ to the linker options, a build error was raised:
g++: error: /INCLUDE:?warp_size@cuda@at@@YAHXZ: No such file or directory
g++: error: /INCLUDE : error : No such file or directory
When I changed /INCLUDE to -INCLUDE, the build completed successfully, but the original problem came back, reporting the aten::empty_strided error described above.
Can you please help me understand how to use this flag on a Linux system?
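Not an authoritative answer, but a possible explanation: /INCLUDE is an MSVC-specific linker option, so g++ treats it as a file name, and the mangled name ?warp_size@cuda@at@@YAHXZ is MSVC mangling that does not exist in a Linux build anyway. GNU ld's rough equivalent is -u/--undefined, which forces a symbol to be kept, and --no-as-needed, which prevents a library from being dropped when nothing references it directly. A sketch of what the link line might look like (the Itanium-mangled name below for at::cuda::warp_size() is my assumption; verify it with nm -D libtorch_cuda.so | grep warp_size before relying on it):

```shell
# Sketch, not verified on a Jetson: force the torch_cuda library to be kept
# at link time. Library names and the mangled symbol are assumptions.
g++ main.cpp -o app \
    -Wl,--undefined=_ZN2at4cuda9warp_sizeEv \
    -Wl,--no-as-needed -ltorch_cuda -Wl,--as-needed \
    -ltorch -ltorch_cpu -lc10
```

Either mechanism alone may be enough; the point is the same as on Windows — keep the linker from discarding the CUDA backend library.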