From what I know, PyTorch doesn't support this as a built-in option, while TensorFlow does. Check out this discussion, which mentions how dynamic loading makes it hard.
However, there could be ways to hack it by combining asymmetric padding layers with conv2d layers. I wouldn't bother doing it unless it's really useful; otherwise just go with the built-in padding options. More discussion here.
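If you do want to try the hack, here's a minimal sketch of the idea: pad manually with `nn.ZeroPad2d` (which accepts different amounts per side) and then apply an unpadded `Conv2d`. The kernel size and padding amounts below are just illustrative, not from any official API for this.

```python
import torch
import torch.nn as nn

# Emulate asymmetric ("same"-style) padding for an even kernel size by
# padding explicitly before an unpadded convolution.
same_conv = nn.Sequential(
    # ZeroPad2d takes (left, right, top, bottom), so each side can differ
    nn.ZeroPad2d((1, 2, 1, 2)),            # total padding of 3 per dimension
    nn.Conv2d(3, 16, kernel_size=4, padding=0),
)

x = torch.randn(1, 3, 32, 32)
print(same_conv(x).shape)  # torch.Size([1, 16, 32, 32]) -- spatial size preserved
```

You'd have to work out the left/right and top/bottom amounts yourself from the kernel size and stride, which is exactly the bookkeeping the built-in options save you from.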