Upsample module crashes on forward call

Hi,
I was trying to use the torch::nn::Upsample module in LibTorch 1.12 (1.12.0+cu116). It crashes on the forward call.

A minimal reproducible example (I tested this on the CPU):

	torch::Tensor tensor = torch::zeros({ 1, 3, 512, 512 }); 
	std::vector<double> scl = {2.0};
	torch::nn::Upsample upsample_module(torch::nn::UpsampleOptions().scale_factor(scl).mode(torch::kNearest).align_corners(false));
	torch::Tensor output = upsample_module(tensor);
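
For reference, here is a self-contained version of the snippet (nothing needed beyond <torch/torch.h>), with the forward call wrapped in a try/catch so that the exception message, if there is one, gets printed instead of the process just terminating:

    #include <torch/torch.h>
    #include <iostream>

    int main() {
        // Same setup as above: 4-D NCHW tensor, single scale factor,
        // nearest mode with align_corners(false).
        torch::Tensor tensor = torch::zeros({ 1, 3, 512, 512 });
        std::vector<double> scl = { 2.0 };
        torch::nn::Upsample upsample_module(
            torch::nn::UpsampleOptions().scale_factor(scl).mode(torch::kNearest).align_corners(false));

        try {
            torch::Tensor output = upsample_module(tensor);
            std::cout << output.sizes() << std::endl;
        } catch (const std::exception& e) {
            // c10::Error derives from std::exception, so this prints the library's message.
            std::cout << "forward() threw: " << e.what() << std::endl;
        }
        return 0;
    }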

I am not sure if I am using it the right way; however, the documentation only says:

Example:

Upsample model(UpsampleOptions().scale_factor({3}).mode(torch::kLinear).align_corners(false));

Tested on:
Windows 10
LibTorch 1.12
CUDA 11.6
RTX 3090

I know it’s late, but perhaps this is useful for others as well.
I implemented the upsample layer as follows:

torch::nn::Upsample(torch::nn::UpsampleOptions().scale_factor(std::vector<double>({2})).mode(torch::kBilinear))

Initially I had trouble with the input for .scale_factor(), which in the end needs to be a vector, whereas in Python it can just be a number, as in the example you provided.
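
To make the difference from Python concrete, here is roughly how the full call looks. Note that in this sketch I pass one scale factor per spatial dimension of the 4-D input; I have not checked whether the single-element vector above is broadcast over both dimensions, so treat the exact form as an example only:

    // Python:  torch.nn.Upsample(scale_factor=2, mode='bilinear')
    // C++:     scale_factor() expects a std::vector<double>; here one factor is
    //          given per spatial dimension of the NCHW input to be explicit.
    torch::nn::Upsample upsample(
        torch::nn::UpsampleOptions()
            .scale_factor(std::vector<double>({ 2.0, 2.0 }))
            .mode(torch::kBilinear));

    torch::Tensor input = torch::rand({ 1, 3, 256, 256 });
    torch::Tensor output = upsample(input);  // -> { 1, 3, 512, 512 }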

Perhaps if you implement the upsample layer like the code above, the issue will be fixed?
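
For the nearest-neighbour mode from the original snippet specifically, my understanding of the documentation is that align_corners only has an effect for the interpolating modes (linear, bilinear, bicubic, trilinear), so it is probably best to leave that option unset rather than passing false. Something along these lines (untested):

    // Sketch only: nearest-neighbour upsampling with align_corners left unset,
    // since the docs list align_corners as affecting only the interpolating modes.
    torch::nn::Upsample upsample_nearest(
        torch::nn::UpsampleOptions()
            .scale_factor(std::vector<double>({ 2.0, 2.0 }))
            .mode(torch::kNearest));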