Implementing custom non-zero conv padding

Is there a way to implement your own padding scheme? For example, using reflect padding instead of the only option that seems to be available: zero padding.

NumPy, for example, offers this: https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html
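A quick NumPy illustration of what reflect padding actually does (minimal sketch):

```python
import numpy as np

x = np.arange(4)              # [0, 1, 2, 3]
np.pad(x, 1, mode='reflect')  # [1, 0, 1, 2, 3, 2] -- edges mirrored, not repeated
```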

It’s going to be included in the core soon: https://github.com/pytorch/pytorch/pull/856/files#diff-c66288b9ce36978f377a1a20d32ec53dR537
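For reference, a minimal sketch of how this looks once it has landed, assuming it is exposed as torch.nn.functional.pad with a mode argument (as in recent PyTorch releases):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# pad is given as (left, right, top, bottom) for a 4D input
y = F.pad(x, (2, 2, 2, 2), mode='reflect')
print(y.shape)  # torch.Size([1, 3, 12, 12])
```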


That would be a welcome addition.

Do you know if there is a way to apply a Gaussian convolution individually to each feature map?
Something to use after nearest-neighbor upsampling, e.g. when building Laplacian pyramids.

It could certainly be done in a hacky way where you cut up the variables, but that doesn’t seem like it would be very efficient.

NumPy has apply_along_axis, but it doesn’t look like PyTorch has an equivalent, unless I’m missing something.

I’m looking for a way to apply a custom (maybe even learned) convolution to each feature map individually.

So if you have a (64, 32, 128, 128) tensor, you want a, say, (5, 5) filter applied to all 64*32 of the (128, 128) images.

Maybe I’m not thinking about the arrays correctly: what if I reshape it to (64*32, 1, 128, 128), do the convolution the usual way, and then reshape back? It seems like it should work, but would reshaping it naively break things?

Hmmm, yes, I think reshaping them like that would work.
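A minimal sketch of that approach, assuming a fixed (non-learned) Gaussian filter; the gaussian_kernel helper below is hypothetical, not a library function:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0):
    # Hypothetical helper: build a normalized (1, 1, size, size) Gaussian kernel.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel_2d = g[:, None] * g[None, :]
    kernel_2d = kernel_2d / kernel_2d.sum()
    return kernel_2d.view(1, 1, size, size)

x = torch.randn(64, 32, 128, 128)
n, c, h, w = x.shape

# Fold the channel dimension into the batch dimension so every
# (128, 128) map is treated as its own single-channel image.
x_flat = x.view(n * c, 1, h, w)
blurred = F.conv2d(x_flat, gaussian_kernel(5, 1.0), padding=2)

# Reshape back; the view only regroups dimensions, so nothing is scrambled.
x_out = blurred.view(n, c, h, w)
```

The same per-channel convolution can also be expressed with conv2d’s groups argument (a depthwise convolution), which avoids the reshape entirely.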

Is there a way to use these new padding options directly in conv2d somehow? Is that integration planned? These functions also aren’t featured in the docs.
It’s not exactly clear how one would go about applying this padding prior to conv2d; can you give an example? The one in the tests doesn’t seem informative.

Does it return an enlarged tensor based on the requested padding?

No. Conv2d supports only basic zero-padding. If you need anything more complicated, use F.pad.

I don’t understand the problem: can’t you apply F.pad to the input and pass the output to Conv2d?
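For example, a minimal sketch of that pattern (assuming F.pad supports mode='reflect', as it does in current releases), where the conv itself does no padding:

```python
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=0)  # no built-in zero-padding

x = torch.randn(1, 3, 32, 32)
x_padded = F.pad(x, (1, 1, 1, 1), mode='reflect')  # reflect-pad by 1 on each side
y = conv(x_padded)
print(y.shape)  # torch.Size([1, 16, 32, 32]) -- spatial size preserved
```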