I’m implementing a Gaussian kernel as a layer. Could you please confirm whether this is OK, or whether something is wrong? I have the feeling that something is not going right:
import numpy as np, scipy.ndimage, torch, torch.nn as nn

class GaussianLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.seq = nn.Sequential(
            nn.ReflectionPad2d(10),  # pad so the 21x21 conv keeps the spatial size
            nn.Conv2d(3, 3, 21, stride=1, padding=0, bias=None, groups=3))
        n = np.zeros((21, 21)); n[10, 10] = 1
        k = scipy.ndimage.gaussian_filter(n, sigma=3)  # 21x21 Gaussian, sigma=3
        for name, f in self.named_parameters():
            f.data.copy_(torch.from_numpy(k))  # broadcast kernel to all 3 channels
    def forward(self, x):
        return self.seq(x)
The code looks alright.
Your kernel would look like this:
Do you see any unwanted behavior?
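If you want to double-check the weights numerically, the same kernel can be rebuilt outside the layer (sizes and sigma taken from your snippet):

```python
import numpy as np
import scipy.ndimage

# Rebuild the 21x21 Gaussian kernel: a unit impulse filtered with sigma=3.
n = np.zeros((21, 21))
n[10, 10] = 1
k = scipy.ndimage.gaussian_filter(n, sigma=3)

print(k.shape)            # (21, 21)
print(round(k.sum(), 4))  # 1.0 -- the kernel is normalized
print(k.argmax())         # 220, i.e. row 10, col 10: peak at the center
```

A symmetric, centered, unit-sum kernel is what you want for a pure blur.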
Yes, I see a strange pattern in the output. The output of the kernel seems OK: after training the network with an L2 loss between the generated LF image and the original LF image (blurred with this kernel), the blurred result matches. But when I check the image before applying this kernel, it has a strange pattern.
It seems the network learned to produce the inverse of the kernel, even though I added random noise before blurring.
If you zoom into the image before the Gaussian kernel, you’ll see these patterns.
I assume you are working on some kind of super-resolution model and are using your Gaussian layer somewhere in it?
Could you crop a small part of the image and show these artifacts?
I guess my image viewer is interpolating by default, so I cannot see any particular artifacts.
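One way to rule out viewer interpolation is to crop a patch and blow it up with nearest-neighbor before saving, so each pixel stays a hard block (a sketch using PIL and a synthetic image; swap in your own file and crop box):

```python
from PIL import Image
import numpy as np

# Synthetic stand-in for the model output (replace with Image.open(your_file)).
arr = ((np.indices((128, 128)).sum(axis=0) % 2) * 255).astype(np.uint8)
img = Image.fromarray(arr)

patch = img.crop((32, 32, 96, 96))             # 64x64 crop; box is a placeholder
big = patch.resize((512, 512), Image.NEAREST)  # nearest keeps pixels as hard blocks
big.save("patch_zoom.png")
print(big.size)  # (512, 512)
```

With nearest-neighbor upscaling, a checkerboard pattern stays visible instead of being smoothed away by the viewer.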
Your assumption is correct; here is part of the image.
This should clearly show the pattern in the floor.
Did you create these crops from the second image you’ve posted?
I’ve opened it in GIMP and cannot see such patterns.
Anyway, do you have any overlapping conv kernels? I guess your model is creating these artifacts, as described here.
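The uneven-overlap effect is easy to reproduce in isolation (a minimal sketch, not your model): a stride-2 transposed conv whose kernel size is not divisible by the stride covers output pixels unevenly, even with constant weights and constant input:

```python
import torch
import torch.nn as nn

# Kernel size 3 is not divisible by stride 2, so output pixels receive
# contributions from an uneven number of kernel taps.
deconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, bias=False)
nn.init.constant_(deconv.weight, 1.0)  # constant weights

x = torch.ones(1, 1, 4, 4)             # constant input
y = deconv(x)
print(y[0, 0, 1:4, 1:4])
# interior values alternate between 1, 2 and 4: a checkerboard,
# even though nothing in the input or the weights varies
```

This is exactly the checkerboard-artifact mechanism: the pattern comes from the layer geometry, not from what the network learned.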
The first image in the first post is the model output (the supposed SR image) before applying the Gaussian kernel.
The second image is the blurred image after applying the Gaussian kernel; it doesn’t show the artifact because of the kernel, and because the model has learned to produce images which, after applying the kernel, match the original blurred image.
I’m aware of the checkerboard problem, but I think the reason here, besides that problem, is the Gaussian kernel.
The only interpretation I see is that the model learned the Gaussian kernel and is producing output with a similar pattern (“noise”) that gets erased by the kernel.
Just to confirm: you were right, changing the upsampling layer solved the issue.
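In case anyone hits the same problem later: the usual replacement is a resize-convolution, i.e. `nn.Upsample` followed by a plain conv instead of `nn.ConvTranspose2d`. A minimal sketch (channel counts are placeholders, not from the model above):

```python
import torch
import torch.nn as nn

# Resize-convolution: upsample first, then a regular conv. Every output
# pixel gets the same kernel coverage, so no checkerboard pattern.
up_block = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),  # 64 channels is a placeholder
)

x = torch.randn(1, 64, 32, 32)
y = up_block(x)
print(y.shape)  # torch.Size([1, 64, 64, 64])
```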