import numpy as np

filt = np.zeros((3, 3))
filt[1, 1] = -1  # center pixel
filt[1, 2] = 1   # pixel to its right
The above code generates a 3x3 filter that computes a simple forward gradient. To do a simple correlation, I can do the following:
from scipy.signal import correlate2d
from scipy import misc
im = misc.imread('grayscale.png')  # a grayscale image
imout = correlate2d(im, filt, mode='same', boundary='symm')
The above code ensures the output is the same size as the input im, and it symmetrically reflects the edge values to fill the boundary pixels during the correlation. How can I do the same using conv2d in PyTorch? Also, is it enough to set bias to False and weight.data to the filt values?
Hi, do you have any idea about this yet? @sreeni5493
Yes, this is solvable. PyTorch's Conv2d actually computes a cross-correlation, so it matches correlate2d directly (no kernel flip needed). I checked with random inputs, and the results deviate on the order of 1e-8, which means they are effectively identical. With no padding, conv2d does a "valid" convolution, so your best choice is to reflect-pad the boundaries, either before or after the convolution, using torch.nn.functional.pad. Do not use interpolation before or after the convolution to change the size of the image; that leads to poor results. Reflect padding, pre- or post-convolution, is the best bet. Also, yes: set weight.data to the filter you wish to use, and set bias to False (or zero).
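A minimal sketch of the approach described above, using the 3x3 forward-gradient filter from the first post. The image here is a random stand-in since I don't have the original file. One caveat I'm hedging on: scipy's boundary='symm' mirrors including the edge pixel, while PyTorch's pad mode 'reflect' mirrors excluding it, so the outermost row/column can differ slightly between the two:

```python
import numpy as np
import torch
import torch.nn.functional as F

# 3x3 forward-gradient filter from the first post
filt = np.zeros((3, 3), dtype=np.float32)
filt[1, 1] = -1
filt[1, 2] = 1

# stand-in for the grayscale image (assumed shape; use your own array)
im = np.random.rand(32, 32).astype(np.float32)

# conv2d expects input of shape (batch, channels, H, W) and
# weight of shape (out_channels, in_channels, kH, kW)
x = torch.from_numpy(im)[None, None]
w = torch.from_numpy(filt)[None, None]

# Pad by 1 on each side, then run a "valid" convolution, bias=None.
# Note: conv2d is cross-correlation, so it matches correlate2d directly.
x_padded = F.pad(x, (1, 1, 1, 1), mode='reflect')
out = F.conv2d(x_padded, w, bias=None)

print(out.shape)  # torch.Size([1, 1, 32, 32]) -- same spatial size as im
```

In the interior (away from the padded border), each output pixel is simply `im[i, j+1] - im[i, j]`, the forward gradient along x.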
Thanks for your reply. Could you show some code? I am new to PyTorch.
Hi, can someone elaborate on the boundary conditions? For example, what is the idea behind 'wrap' and 'symm'?