auto size = img_tensor.sizes();
int w = size[0], h = size[1];
c10::ArrayRef<int64_t> hw_t({h, w});
c10::ArrayRef scale_t({384. / h, 384. / w});
img_tensor = at::_upsample_nearest_exact2d(img_tensor, hw_t, scale_t);
Below is the error dump:
terminate called after throwing an instance of 'c10::Error'
what(): Must specify exactly one of output_size and scale_factors
Exception raised from compute_output_size at …/aten/src/ATen/native/UpSample.cpp:18 (most recent call first):
If I change the code to:
int w = size[0], h = size[1];
c10::ArrayRef<int64_t> hw_t({h, w});
c10::ArrayRef scale_t({384. / h, 384. / w});
img_tensor = at::_upsample_nearest_exact2d(img_tensor, c10::nullopt, scale_t);
I got this error:
what(): Expected static_cast<int64_t>(scale_factors->size()) == spatial_dimensions to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
You're hitting two separate problems. The first error occurred because you passed both output_size and scale_factors; the function requires exactly one of the two, so the other must be c10::nullopt. The second error is a mismatch between the number of spatial dimensions and the length of scale_factors: for the 2D upsampling ops, the spatial dimension count is input.dim() - 2, so a 4D NCHW tensor has 2 spatial dimensions and needs exactly 2 scale factors. Since you passed 2 scale factors and the check still failed, your tensor is most likely 3D (e.g. CHW or HWC); add a batch dimension with unsqueeze(0) before upsampling.
Here’s how you can fix it:
First, make sure you extract the height and width from the correct positions in the tensor shape. For the NCHW layout these ops expect, height and width are the last two dimensions, not the first two:

auto size = img_tensor.sizes();
int64_t h = size[size.size() - 2], w = size[size.size() - 1];
Then, create the scale_factors array with the correct number of elements: