When I try to upsample my input with
nn.functional.interpolate (mode='nearest'), I get the following error:
return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
RuntimeError: input tensor has spatial dimension larger than the kernel capacity
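For reference, here is a minimal sketch of the call that triggers it; the tensor shape shown is a placeholder, since my actual input is much larger:

```python
import torch
import torch.nn.functional as F

# Placeholder shape (N, C, H, W); the real input has far larger spatial dims.
x = torch.randn(1, 3, 64, 64)

# Nearest-neighbor upsampling by a factor of 2 in each spatial dimension.
out = F.interpolate(x, scale_factor=2, mode="nearest")
print(out.shape)  # torch.Size([1, 3, 128, 128])
```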
I suspect this happens because my input tensor is large, but I am not running out of memory. Is there a way to solve this without shrinking the input tensor?