Conv1d outputs a vector of size one less than the input... why?

I have been unable to solve the following mystery for hours:


import torch
import torch.nn as nn


class Convolution(nn.Module):
    def __init__(self, args):
        super(Convolution, self).__init__()
        self.windows = [1, 2, 3, 4, 5]
        # one Conv1d per window (kernel) size
        self.window_convolutions = \
            nn.ModuleList([nn.Conv1d(args.cov_dim, args.mem_dim, i) for i in self.windows])


    def forward(self, input, args):
        for window in self.windows:
            print("input shape", input.shape)
            print("params ", args.cov_dim, args.mem_dim, window)

            # add a batch dimension, then swap to (batch, channels, seq_len) as Conv1d expects
            input = input.view(1, input.size()[0], input.size()[1]).transpose(1, 2)

            print("input after view", input.shape)

            conv_model = self.window_convolutions[window]
            convolved = conv_model(input)[0].transpose(0, 1)

            print("convolved shape ", convolved.shape)

This gives the following output:

input shape torch.Size([97, 150])
params  150 150 1
input after view  torch.Size([1, 150, 97])
convolved shape  torch.Size([96, 150])

Notice that the convolved shape is one element shorter along dimension zero. Why?
It is a mystery to me…

You are dropping one dimension because you are indexing into the batch dimension in this line:

convolved = conv_model(input)[0].transpose(0, 1)
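
As a minimal sketch (shapes borrowed from the question, kernel_size=1 assumed for illustration), the [0] removes the batch dimension, turning the 3-D output into a 2-D tensor, but it does not shorten the sequence dimension:

import torch
import torch.nn as nn

conv = nn.Conv1d(150, 150, kernel_size=1)
x = torch.randn(1, 150, 97)          # (batch, channels, length)
out = conv(x)                        # torch.Size([1, 150, 97])
print(out[0].shape)                  # torch.Size([150, 97]) -- batch dim removed
print(out[0].transpose(0, 1).shape)  # torch.Size([97, 150]) -- length is still 97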

That is not the case. I removed [0].transpose(0, 1),
so now I have only
convolved = conv_model(input)

and my output is

input shape torch.Size([171, 150])
params  150 150 1
input after view  torch.Size([1, 150, 171])
convolved shape  torch.Size([1, 150, 170])

Any other ideas about what it might be?

The output width of a convolution with stride=1 is input_width - kernel_size + 1. This is a "valid" convolution, not a "same" convolution.
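
A minimal sketch (channel and length values assumed for illustration) that verifies output_length = input_length - kernel_size + 1:

import torch
import torch.nn as nn

x = torch.randn(1, 150, 97)                  # (batch, channels, length)
for k in [1, 2, 3]:
    conv = nn.Conv1d(150, 150, kernel_size=k)
    print(k, conv(x).shape[-1])              # prints 97, 96, 95 -> 97 - k + 1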

If you want the output width to be the same as the input width, you need to add additional implicit padding to the input, given by Conv1d's padding argument: https://pytorch.org/docs/stable/nn.html#conv1d
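
For example (a sketch, assuming an odd kernel size so the padding can be symmetric), setting padding = (kernel_size - 1) // 2 preserves the input width:

import torch
import torch.nn as nn

x = torch.randn(1, 150, 97)
conv = nn.Conv1d(150, 150, kernel_size=3, padding=1)   # padding = (3 - 1) // 2
print(conv(x).shape)                                   # torch.Size([1, 150, 97]) -- width preserved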