I would like to smooth/blur each part of an image by an amount that depends on a separate input tensor specific to that image.
So, depending on the values in the corresponding part of that input tensor, I can smooth the matching part of the image more or less.
Is there a way to do this using Conv2d/Unfold/Fold functional calls? Or another way?
It seems like I need a conv function with dynamic kernel weights?
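One way to get "dynamic kernel weights" without a new CUDA kernel is `F.unfold`: extract a patch around every pixel, then weight each patch by that pixel's own kernel. A minimal sketch (the function name `spatially_varying_blur` and the `(N, kH*kW, H, W)` kernel layout are my own choices, assuming an odd, square kernel and stride 1):

```python
import torch
import torch.nn.functional as F

def spatially_varying_blur(image, kernels):
    """Blur each pixel with its own kernel.

    image:   (N, C, H, W) input
    kernels: (N, kH*kW, H, W) per-pixel kernel weights, e.g. Gaussians
             whose sigma is driven by the separate input tensor
    """
    N, C, H, W = image.shape
    k2 = kernels.shape[1]
    k = int(k2 ** 0.5)  # assumes a square, odd-sized kernel
    # Extract a k x k patch around every pixel: (N, C*k*k, H*W)
    patches = F.unfold(image, kernel_size=k, padding=k // 2)
    patches = patches.view(N, C, k2, H, W)
    # Weight each patch by its pixel's kernel and sum over the window.
    return (patches * kernels.unsqueeze(1)).sum(dim=2)
```

Note the memory cost: the unfolded tensor is `k*k` times the input, which is the usual trade-off of this approach.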
Sorry for reviving this old topic, but I think this is a relatively common use case: it would help implement a bilateral filter, or even a learnable, input-conditioned filter.
A more specific proposal would be to extend F.conv2d so that the weight parameter can be a 6D tensor with dimensions (out_channels, in_channels, kH, kW, iH, iW). Then the current conv2d, which produces for pixel (i, j)
output[i,j] = sum(crop(input, i, j, kH, kW) * weight)
can be generalised to
output[i,j] = sum(crop(input, i, j, kH, kW) * weight[..., i, j])
It is probably difficult to implement this efficiently in a vectorised way without resorting to new CUDA kernels.
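For what it's worth, the generalised form above can be vectorised with `F.unfold` plus an `einsum`, at the price of materialising every patch; the function name `conv2d_pixelwise` is hypothetical, and I assume odd kernel sizes with "same" zero padding:

```python
import torch
import torch.nn.functional as F

def conv2d_pixelwise(input, weight):
    """Convolution with a different kernel at every output pixel.

    input:  (N, C_in, H, W)
    weight: (C_out, C_in, kH, kW, H, W) -- the proposed 6D weight
    Returns (N, C_out, H, W). Memory cost is O(N * C_in * kH * kW * H * W),
    which is the price of avoiding a custom CUDA kernel.
    """
    C_out, C_in, kH, kW, H, W = weight.shape
    N = input.shape[0]
    # Patches around every pixel: (N, C_in*kH*kW, H*W)
    patches = F.unfold(input, (kH, kW), padding=(kH // 2, kW // 2))
    patches = patches.view(N, C_in * kH * kW, H, W)
    w = weight.reshape(C_out, C_in * kH * kW, H, W)
    # Contract the (C_in*kH*kW) axis independently at each pixel:
    # output[n, o, i, j] = sum_k patches[n, k, i, j] * w[o, k, i, j]
    return torch.einsum('nkhw,okhw->nohw', patches, w)
```

When the 6D weight is constant over its last two dimensions, this reduces to an ordinary F.conv2d, which is a handy sanity check.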
Another useful API would be a generic filter similar to scipy.ndimage.generic_filter; at the very least, it would save users from re-implementing boundary handling.
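For reference, this is what the SciPy version looks like: you pass a callback that receives each flattened window, and the library takes care of the boundaries via its `mode` argument.

```python
import numpy as np
from scipy import ndimage

a = np.arange(25, dtype=float).reshape(5, 5)
# generic_filter handles boundary modes for you ('reflect' here);
# the callback is applied to every 3x3 window.
out = ndimage.generic_filter(a, np.mean, size=3, mode='reflect')
```

A torch equivalent of this, with autograd support, would cover the dynamic-kernel use case above as well.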