# MaxPool1D shape calculation

Hi,

I am trying to implement a 1D CNN for 1D signal processing. I managed to build a simple network that takes some input and produces an output after a Conv1d layer followed by a fully connected ReLU output layer.

However, when I add MaxPool1d I run into trouble with the size of its output, which I need in order to compute the input size of the fully connected output layer. It seems that if stride = kernel_size, then for odd input lengths my implementation of the formula from the docs does not compute the output size of MaxPool1d correctly.

The following code illustrates this:

```python
import numpy as np
import torch

# Some fake X input of dimension (100 samples, 1 channel, 1999 length)
X = torch.FloatTensor(np.ones((100, 1, 1999)))

# init parameters
input_channels = 1

# First conv layer parameters
out_channels_conv1 = 10  # number of kernels
Conv1_dilation = 1
Conv1_kernel_size = 3
Conv1_stride = 1
Conv1_padding = 0

# MaxPool parameters
MaxPool_kernel_size = 3
MaxPool_stride = 3
MaxPool_dilation = 1

input_length = X.shape[2]

# first conv layer
conv1 = torch.nn.Conv1d(input_channels, out_channels_conv1,
                        Conv1_kernel_size, stride=Conv1_stride,
                        padding=Conv1_padding, dilation=Conv1_dilation)
out1 = conv1(X)

# calculating the output length of conv1 with the formula from the PyTorch docs
L_out = ((input_length + 2 * Conv1_padding
          - Conv1_dilation * (Conv1_kernel_size - 1) - 1) // Conv1_stride) + 1
if L_out == out1.shape[2]:
    print("Length at output of conv1 equals calculated length, so everything looks good.\n"
          "Now pushing output of conv1 through MaxPool1d...")

# Now pushing data through MaxPool1d
MP1 = torch.nn.MaxPool1d(MaxPool_kernel_size, stride=MaxPool_stride)
out2 = MP1(out1)
print("Observed length is {}".format(out2.shape))
```
Ah thanks, I did not notice the round-down operation; I misread the floor brackets as ordinary parentheses.
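For completeness, here is a minimal sketch of the floored formula from the docs as a helper function (`maxpool1d_out_length` is just an illustrative name, not a PyTorch API), checked against an actual MaxPool1d for the odd length 1997 that comes out of the conv layer above:

```python
import math
import torch

def maxpool1d_out_length(L_in, kernel_size, stride, padding=0, dilation=1):
    # L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
    return math.floor((L_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

# Odd input length: 1999 shrinks to 1997 after a kernel-3, stride-1, no-padding conv
x = torch.ones(100, 1, 1997)
mp = torch.nn.MaxPool1d(kernel_size=3, stride=3)

print(maxpool1d_out_length(1997, kernel_size=3, stride=3))  # 665
print(mp(x).shape[2])                                       # 665
```

Without the floor, (1997 - 2 - 1) / 3 + 1 gives 665.67, which is where the mismatch for odd lengths came from.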