Hi,
I want to initialize a nn.Conv2d kernel so that running an image through the layer returns the same image. This is the code I am currently using:
import numpy as np
import torch
import torch.nn as nn

wts = np.zeros((3, 3, 3, 3))
nn.init.dirac_(torch.from_numpy(wts))  # from_numpy shares memory, so this fills wts in place
with torch.no_grad():
    conv_layer.weight = nn.Parameter(torch.tensor(wts, dtype=torch.float))
new_img = conv_layer(img_tensor).detach()
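For reference, here is a self-contained version of what I'm running. The definition of conv_layer isn't shown above, so I've assumed nn.Conv2d(3, 3, kernel_size=3, padding=1) with its default settings, and a random tensor in place of my actual image:

```python
import numpy as np
import torch
import torch.nn as nn

# Assumed layer definition (not shown above): 3 in-channels, 3 out-channels,
# 3x3 kernel, padding=1 so the spatial size is preserved.
conv_layer = nn.Conv2d(3, 3, kernel_size=3, padding=1)

wts = np.zeros((3, 3, 3, 3))
nn.init.dirac_(torch.from_numpy(wts))  # from_numpy shares memory, so wts is filled in place
with torch.no_grad():
    conv_layer.weight = nn.Parameter(torch.tensor(wts, dtype=torch.float))

img_tensor = torch.rand(1, 3, 8, 8)  # stand-in image batch
new_img = conv_layer(img_tensor).detach()
print(torch.allclose(new_img, img_tensor))  # False here, matching the mismatch I see
```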
However, whenever I run the image through the conv layer, certain pixels of the output image are disturbed, and the means of the two images also differ. Could you guide me on the best way to create a convolution filter whose weights return the same image back? I know that for a single-channel image the weights would simply be
[0, 0, 0,
 0, 1, 0,
 0, 0, 0]
but I’m unsure what it should be for a 3-channel image.
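For what it's worth, I did verify that the single-channel kernel above reproduces the input when I build the layer myself, though only with bias=False (my assumption was that the bias term has to be disabled so it can't shift the output):

```python
import torch
import torch.nn as nn

# Single-channel identity kernel: 1 in the center, 0 elsewhere.
# bias=False so the bias term cannot shift the output.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.tensor([[[[0., 0., 0.],
                                      [0., 1., 0.],
                                      [0., 0., 0.]]]]))

img = torch.rand(1, 1, 8, 8)
out = conv(img).detach()
print(torch.allclose(out, img))  # True
```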