I’m trying to replicate a Keras model which starts out like this:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         (None, 720, 1, 1)         0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 720, 1, 128)       1152
_________________________________________________________________
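For reference, I believe that summary corresponds to something like the following (my reconstruction, not the original code; I'm assuming the default channels_last format and kernel_size=(8, 1), since a plain kernel_size=8 would give 8,320 parameters rather than the 1,152 shown):

from tensorflow import keras

# Hypothetical reconstruction of the first two layers in the summary.
# channels_last: the trailing 1 is the channel axis.
inp = keras.Input(shape=(720, 1, 1))
x = keras.layers.Conv2D(128, kernel_size=(8, 1), padding='same')(inp)
keras.Model(inp, x).summary()  # conv2d: (None, 720, 1, 128), 1,152 params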
The input is (batch_size, 720, 1, 1), and then a Conv2D layer is applied to it with 128 filters and a kernel size of 8. Trying to replicate this in PyTorch, I have:
import torch
a = torch.randn(32, 720, 1, 1)
print('a:', a.size())  # a: torch.Size([32, 720, 1, 1])
torch.nn.Conv2d(720, 128, kernel_size=8, stride=1)(a)
But I’m getting the following error…
RuntimeError: Calculated padded input size per channel: (1 x 1). Kernel size: (8 x 8). Kernel size can’t be greater than actual input size at /pytorch/aten/src/THNN/generic/SpatialConvolutionMM.c:48
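From the error, my guess is that PyTorch is reading the (32, 720, 1, 1) tensor as 720 channels with a 1 x 1 spatial map, whereas Keras treats the trailing axis as channels. If that's right, I'd expect the translation to look roughly like this (again assuming kernel_size=(8, 1); note that padding='same' requires a fairly recent PyTorch):

import torch

a = torch.randn(32, 720, 1, 1)  # Keras layout: (batch, height, width, channels)
a = a.permute(0, 3, 1, 2)       # PyTorch layout: (batch, channels, height, width)
conv = torch.nn.Conv2d(1, 128, kernel_size=(8, 1), padding='same')
print(conv(a).shape)            # torch.Size([32, 128, 720, 1]), 1,152 params

But I'm not certain this is actually equivalent.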
Any ideas what I’m doing wrong and why this works in Keras but not in PyTorch?