I have a layer with an input of
torch.Size([64, 32, 100, 20])
In Keras I was using this:
conv_first1 = Conv2D(32, (4, 1), padding="same")(conv_first1)
which led to an output shape the same as the input shape.
If I use the below in PyTorch I end up with a shape of (64, 32, 99, 20):
self.conv2 = nn.Conv2d(32, 32, (4, 1), padding=(1, 0))
and if I instead use padding=(2, 0) it becomes (64, 32, 101, 20).
What should be used in order to end up with
input_shape == output_shape
i.e. (64, 32, 100, 20) == (64, 32, 100, 20)?
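For reference, the shapes above follow from the standard convolution output formula, H_out = floor((H + 2*pad - k) / stride) + 1. A quick sketch (plain Python, no torch needed) checking the two cases, assuming stride 1 and the height dimension H = 100 with kernel size 4:

```python
def conv_out(h, k, pad_lo, pad_hi=None, stride=1):
    # output size along one dimension of a convolution:
    # floor((h + pad_lo + pad_hi - k) / stride) + 1
    # pad_lo/pad_hi allow asymmetric padding (what Keras "same" does
    # internally for even kernel sizes)
    if pad_hi is None:
        pad_hi = pad_lo  # symmetric padding, as in nn.Conv2d's padding=(p, 0)
    return (h + pad_lo + pad_hi - k) // stride + 1

print(conv_out(100, 4, 1))     # 99  -> matches padding=(1, 0)
print(conv_out(100, 4, 2))     # 101 -> matches padding=(2, 0)
print(conv_out(100, 4, 1, 2))  # 100 -> asymmetric (1, 2) keeps the size
```

This suggests the symmetric padding of nn.Conv2d cannot give 100 with an even kernel size of 4: one side needs 1 pixel of padding and the other 2.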