Non-local denoising

I want to implement non-local denoising of an image. The algorithm breaks the image into search windows, extracts patches within each window, and then compares the dissimilarity between patches to compute a weight matrix.
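By the weight matrix I mean something like the standard non-local means weights, roughly

w(i, j) = exp(-||P_i - P_j||^2 / h^2)

where P_i and P_j are two patches and h is a filtering parameter.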

Can anyone please help me implement this in PyTorch or TensorFlow for a single image?

unfold can be used to extract patches from your input tensor.
Here is a small example:

import torch

x = torch.randn(1, 3, 12, 12)  # batch, channels, height, width
kh, kw = 3, 3  # kernel size
dh, dw = 3, 3  # stride
# unfold the two spatial dimensions into patch dimensions
patches = x.unfold(2, kh, dh).unfold(3, kw, dw)
print(patches.size())
> torch.Size([1, 3, 4, 4, 3, 3])
# flatten the two patch-grid dimensions into a single patch dimension
patches = patches.contiguous().view(
    patches.size(0), patches.size(1), -1, kh, kw)
print(patches.shape)
> torch.Size([1, 3, 16, 3, 3])

This code creates 16 patches of size 3x3 per channel, which can be used for further calculations.
Note that you can adjust the kernel size and stride for your use case.
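As a rough sketch of that further calculation (assuming the standard non-local means weights w(i, j) = exp(-d(i, j) / h^2), where d is the squared patch distance and h is a filtering parameter you would have to tune), you could flatten the patches and compute all pairwise distances via broadcasting:

import torch

x = torch.randn(1, 3, 12, 12)                            # noisy input: batch, c, h, w
kh, kw, dh, dw = 3, 3, 1, 1                              # 3x3 patches, stride 1
patches = x.unfold(2, kh, dh).unfold(3, kw, dw)          # (1, 3, 10, 10, 3, 3)
patches = patches.contiguous().view(1, 3, -1, kh * kw)   # (1, 3, 100, 9)

# flatten each patch across channels into a single vector: (1, 100, 27)
p = patches.permute(0, 2, 1, 3).reshape(1, -1, 3 * kh * kw)

# squared Euclidean distance between every pair of patches: (1, 100, 100)
d = (p.unsqueeze(2) - p.unsqueeze(1)).pow(2).sum(dim=-1)

h_param = 0.5                                            # filtering parameter (arbitrary value, needs tuning)
w = torch.exp(-d / h_param ** 2)                         # non-local weights
w = w / w.sum(dim=-1, keepdim=True)                      # normalize each row to sum to 1
print(w.shape)
> torch.Size([1, 100, 100])

Each row of w could then be used to take a weighted average over patches (or patch centers) to estimate the denoised values; whether you compare all patches globally or restrict the comparison to a local search window is up to your formulation.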

Thank you very much for this code. I tried it and it works as you mentioned. When I change the stride dh, dw to 1, 1, I get 100 3-by-3 patches, but I need 144 patches given the input size. Could you please suggest the modification needed?

100 patches are created because you get (12 - 3) / 1 + 1 = 10 patches in each spatial dimension.
If you need 12, you could pad the input via F.pad before the patch generation.
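For example (a minimal sketch assuming zero padding of one pixel on each border), padding the 12x12 input to 14x14 gives (14 - 3) / 1 + 1 = 12 patches per spatial dimension, i.e. 144 in total:

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 12, 12)
kh, kw = 3, 3  # kernel size
dh, dw = 1, 1  # stride
# pad the two spatial dims by 1 pixel on each side: 12x12 -> 14x14
x_pad = F.pad(x, (1, 1, 1, 1))  # zero padding by default; mode='reflect' might suit denoising better
patches = x_pad.unfold(2, kh, dh).unfold(3, kw, dw)
print(patches.size())
> torch.Size([1, 3, 12, 12, 3, 3])
patches = patches.contiguous().view(
    patches.size(0), patches.size(1), -1, kh, kw)
print(patches.shape)
> torch.Size([1, 3, 144, 3, 3])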

Note that these “outer” patches will have the padding values, which might or might not influence your further calculations.

OK, I will try that. Thank you!