LibTorch with CUDA raising an exception on Windows

Hi,

I am on Windows 10 with Visual Studio 2017, LibTorch 1.3 (debug build) and CUDA 10.1. I want to train a model with the PyTorch C++ API. Training on the CPU works fine, but when I try to use the GPU I get:

Unhandled exception at 0x00007FFD2A759129 in train.exe: Microsoft C++ exception: c10::Error at memory location 0x000000085539C5E0

CUDA is available; I check it with if (torch::cuda::is_available()).
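
For reference, this is roughly how I do that check before touching the GPU (a minimal sketch; the device variable name is just for illustration):

	#include <torch/torch.h>
	#include <iostream>

	int main() {
		// pick CUDA when the runtime reports it as available, otherwise fall back to CPU
		torch::Device device = torch::cuda::is_available()
			? torch::Device(torch::kCUDA)
			: torch::Device(torch::kCPU);

		std::cout << "Using device: " << device << std::endl;
		return 0;
	}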

Code to reproduce the error:


#include <torch/torch.h>

using namespace torch;

struct TestImpl : nn::Module {

	TestImpl()
		: conv1(register_module("conv1", nn::Conv2d(nn::Conv2dOptions(1, 32, 4).stride(2).padding(1)))) {
		// register the PReLU weight tensor as a learnable parameter
		register_parameter("prelu1", prelu1.fill_(0.25));
	}

	torch::Tensor forward(torch::Tensor x) {
		x = torch::prelu(x, prelu1);
		return x;
	}

	nn::Conv2d conv1;
	torch::Tensor prelu1 = torch::ones({ 1 });
};
TORCH_MODULE(Test);

int main() {
	Test testModel;
	testModel->to(torch::DeviceType::CUDA); // Exception here
	return 0;
}

Does anyone have an idea how to solve this?

Thanks