I’m implementing a Gaussian kernel as a layer. Could you please confirm whether this is OK, or whether something is wrong? I have the feeling that something is not going well:
import numpy as np
import scipy.ndimage
import torch
import torch.nn as nn

class GaussianLayer(nn.Module):
    def __init__(self):
        super(GaussianLayer, self).__init__()
        self.seq = nn.Sequential(
            nn.ReflectionPad2d(10),
            nn.Conv2d(3, 3, 21, stride=1, padding=0, bias=False, groups=3)
        )
        self.weights_init()

    def forward(self, x):
        return self.seq(x)

    def weights_init(self):
        # 21x21 Gaussian kernel: filter a unit impulse with sigma=3
        n = np.zeros((21, 21))
        n[10, 10] = 1
        k = scipy.ndimage.gaussian_filter(n, sigma=3)
        # copy_ broadcasts the (21, 21) kernel to the (3, 1, 21, 21) weight
        for name, f in self.named_parameters():
            f.data.copy_(torch.from_numpy(k))
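For reference, this is how I convinced myself the layer matches scipy with the same truncated kernel (a minimal sketch; the random 64x64 input is arbitrary, and scipy's 'mirror' mode corresponds to ReflectionPad2d, which does not repeat the edge pixel):

layer = GaussianLayer()
x = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    y = layer(x)

# Reference: convolve each channel with the same truncated kernel.
# The kernel is symmetric, so Conv2d's cross-correlation equals convolution.
n = np.zeros((21, 21))
n[10, 10] = 1
k = scipy.ndimage.gaussian_filter(n, sigma=3)
ref = np.stack([scipy.ndimage.convolve(c, k, mode='mirror')
                for c in x[0].numpy()])
print(np.abs(y[0].numpy() - ref).max())  # ~1e-7 if the layer is correct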
Yes, I get a strange pattern in the output. After training the network with an L2 loss between the generated LF image and the original LF image (obtained with this kernel), the output of the kernel seems OK, but when I check the image before applying this kernel, it has a strange pattern.
It seems like the network learned how to produce the inverse of the kernel, even though I added random noise before blurring.
I assume you are working on some kind of super-resolution model and are using your Gaussian layer somewhere in this model?
Could you crop a small part of the image and show these artifacts?
I guess my image viewer is interpolating by default, so I cannot see any particular artifacts.
The first image in the first post is the model output (the supposed SR image) before applying the Gaussian kernel.
The second image is the blurred image after applying the Gaussian kernel. It doesn’t have the artifact, because the model has learned to produce images that match the original blurred image once the kernel is applied.
I’m aware of the checkerboard problem, but I think the cause here, besides that problem, is the Gaussian kernel.
The only interpretation I see is that the model learned the Gaussian kernel and is producing output with a pattern (“noise”) that is erased by the kernel.
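One way I test this interpretation: a sigma=3 Gaussian attenuates a Nyquist-frequency checkerboard by roughly exp(-sigma^2 * pi^2), i.e. to essentially zero, so the model can add such a pattern without the post-blur L2 loss ever seeing it. A minimal sketch (the image size and the 0.1 amplitude are arbitrary):

layer = GaussianLayer()  # the layer defined above
img = torch.rand(1, 3, 64, 64)

# Zero-mean checkerboard at the Nyquist frequency
yy, xx = torch.meshgrid(torch.arange(64), torch.arange(64), indexing='ij')
checker = 0.1 * (1 - 2 * ((xx + yy) % 2)).float()

with torch.no_grad():
    diff = (layer(img + checker) - layer(img)).abs().max()
print(diff)  # ~0: the kernel erases the pattern, so the loss cannot see it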