How is my Conv1d dimension reducing when I have padding?

My conv module is:

        return torch.nn.Sequential(
            torch.nn.Conv1d(
                in_channels=in_channels,
                out_channels=in_channels,
                kernel_size=2,
                stride=1,
                dilation=1,
                padding=1
            ),
            torch.nn.ReLU(),
            torch.nn.Conv1d(
                in_channels=in_channels,
                out_channels=in_channels,
                kernel_size=2,
                stride=1,
                dilation=2,
                padding=1
            ),
            torch.nn.ReLU(),
            torch.nn.Conv1d(
                in_channels=in_channels,
                out_channels=in_channels,
                kernel_size=2,
                stride=1,
                dilation=4,
                padding=1
            ),
            torch.nn.ReLU()
        )

And in forward, I have:

down_out = self.downscale_time_conv(inputs)

inputs has a size of torch.Size([8, 161, 24]). I'd expect down_out to have the same size, but instead it has torch.Size([8, 161, 23]).

Where did that last element go?

The output shape is given by the formula in the Conv1d docs.
Plugging your values into that formula shows the observed output is expected, as the layers return:

L_out = math.floor((L_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

# 1st layer
(24 + 2 * 1 - 1 * (2-1) - 1) / 1 + 1 = 25
# 2nd layer
(25 + 2 * 1 - 2 * (2-1) - 1) / 1 + 1 = 25
# 3rd layer
(25 + 2 * 1 - 4 * (2-1) - 1) / 1 + 1 = 23
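The arithmetic above can be checked with a small helper (the function name `conv1d_out_len` is mine, not from PyTorch) that implements the formula from the docs:

```python
import math

def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
    # L_out formula from the torch.nn.Conv1d documentation
    return math.floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

l = 24
lengths = []
for dilation in (1, 2, 4):  # the three conv layers in the Sequential
    l = conv1d_out_len(l, kernel_size=2, padding=1, dilation=dilation)
    lengths.append(l)

print(lengths)  # [25, 25, 23]
```

So the first two layers actually grow the sequence by one, and the third shrinks it by two, netting out at 23.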
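If the goal is for down_out to keep the input length exactly, one option (a sketch of my own, not part of the answer above; the `CausalConv1d` name is hypothetical) is to drop the built-in padding and instead left-pad each layer by `dilation * (kernel_size - 1)` with F.pad. With kernel_size=2 this makes every layer length-preserving:

```python
import torch
import torch.nn.functional as F

class CausalConv1d(torch.nn.Module):
    """Conv1d that left-pads by dilation * (kernel_size - 1),
    so the output length always equals the input length."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = dilation * (kernel_size - 1)
        self.conv = torch.nn.Conv1d(channels, channels, kernel_size,
                                    dilation=dilation, padding=0)

    def forward(self, x):
        # F.pad's last pair pads the final (time) dimension: (left, right)
        return self.conv(F.pad(x, (self.pad, 0)))

model = torch.nn.Sequential(
    CausalConv1d(161, 2, 1), torch.nn.ReLU(),
    CausalConv1d(161, 2, 2), torch.nn.ReLU(),
    CausalConv1d(161, 2, 4), torch.nn.ReLU(),
)
inputs = torch.randn(8, 161, 24)
print(model(inputs).shape)  # torch.Size([8, 161, 24])
```

This is the usual trick in dilated/causal architectures such as WaveNet-style stacks, where symmetric padding with an even kernel can never give exactly "same" output length.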