Bug? 1.5 C++ front end: torch::cuda::is_available() = false

I just installed the torch 1.5 C++ front end (Windows, Visual Studio 2019), CUDA 10.1, latest
NVIDIA drivers. This is the DEBUG version of the library.

The following code:

#include <torch/torch.h>

int main() {
    bool CudaPresent = torch::cuda::is_available();
    return 0;
}

CudaPresent is false.

When I use the torch 1.4 C++ front end on the same PC, with everything else unchanged, the code works fine
and returns CudaPresent = true.

Any help super appreciated.
thanks!


I just found in another post that adding the linker option -INCLUDE:?warp_size@cuda@at@@YAHXZ
worked for me too.
Why is it so complicated?


Well, it is because torch_cuda and torch_cpu are separate libraries to keep the linker happy (a target executable/library has a size limit). However, calling torch::cuda::is_available() doesn't reference any symbol in torch_cuda, so the linker optimizes the whole torch_cuda library away when building your executable; as a result, you see false there. Adding -INCLUDE:?warp_size@cuda@at@@YAHXZ forces the linker to keep torch_cuda. If you use CMake, you don't need to set the flag manually. I just didn't expect so many users to create the VS project on their own, so we are working on moving this flag into a header to avoid confusing users.
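Until that header change lands, one way to carry the flag in source instead of project settings is MSVC's linker pragma. This is a sketch of that approach, not an officially documented libtorch workaround; the mangled symbol name is the one quoted above, and the pragma is MSVC-only:

```cpp
#include <iostream>
#include <torch/torch.h>

// MSVC-only: inject the /INCLUDE flag from source, forcing the linker to
// keep torch_cuda by referencing one of its exported symbols.
#pragma comment(linker, "/INCLUDE:?warp_size@cuda@at@@YAHXZ")

int main() {
    std::cout << std::boolalpha << torch::cuda::is_available() << '\n';
    return 0;
}
```

This snippet is effectively build configuration: it only changes which libraries the linker keeps, so it does nothing on non-MSVC toolchains and requires linking against a CUDA-enabled libtorch to print true.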


Many thanks for the prompt response. I have been waiting for this VS integration capability for several years, and it looks great right now. Thank you for your excellent work!

@artemmikheev, under which linker option did you add "-INCLUDE:?warp_size@cuda@at@@YAHXZ"?

It is under Linker > All Options > Additional Options.

Thank you, I already tried that, but torch::cuda::is_available() still returns 0. Any advice?