Upsampling input with nearest neighbors' pixel value

Hello,
When I try to upsample my input using the nn.functional.interpolate function (nearest mode), I get the following error:

return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
RuntimeError: input tensor has spatial dimension larger than the kernel capacity

I think it is mainly because my input tensor is large, but I don't have any memory issues. Is there a way to solve this problem without shrinking the input tensor?

This is a limitation of the CUDA launch configuration used by the current algorithm. As a workaround, you could run the operation on the CPU for these large shapes, and create a feature request on GitHub.
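A minimal sketch of that workaround: move the tensor to the CPU, interpolate there, then move the result back to the original device. The helper name `upsample_nearest_via_cpu` is ours, not a PyTorch API; the example uses a small tensor just to show the shape behavior (the CUDA error only appears for very large spatial sizes).

```python
import torch
import torch.nn.functional as F

def upsample_nearest_via_cpu(x, scale_factor):
    # Hypothetical helper (not part of PyTorch): run nearest-neighbor
    # interpolation on the CPU to sidestep the CUDA launch-config limit,
    # then return the result on the tensor's original device.
    device = x.device
    out = F.interpolate(x.cpu(), scale_factor=scale_factor, mode="nearest")
    return out.to(device)

# Small example; a sufficiently large input would hit the CUDA error on GPU.
x = torch.randn(1, 3, 4, 4)
y = upsample_nearest_via_cpu(x, scale_factor=2)
print(y.shape)  # torch.Size([1, 3, 8, 8])
```

Note that the extra device-to-host-to-device copies make this noticeably slower than a native GPU path, so it is best reserved for the shapes that actually trigger the error.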