Equivalent of Keras GlobalMaxPooling1D

I’m trying to translate my Keras code to PyTorch. In my Keras code I use GlobalMaxPooling1D after the last 1D convolutional layer:

result = GlobalMaxPooling1D()(previous_result)

In PyTorch I’m trying to use MaxPool1d. I guess the stride should be 0, but I have no idea about the value of kernel_size. Should it be the same size as the kernel in the last convolutional layer?

It should be equivalent to torch.nn.AdaptiveMaxPool1d with output_size=1.
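To illustrate the equivalence, here is a minimal sketch (the tensor shapes are made up for illustration). PyTorch conv layers produce (batch, channels, length) tensors, and AdaptiveMaxPool1d(1) pools the length dimension down to 1, which matches Keras GlobalMaxPooling1D up to the trailing singleton dimension:

```python
import torch
import torch.nn as nn

x = torch.randn(4, 16, 50)  # hypothetical conv output: (batch, channels, length)

# Pool the length dimension down to exactly 1 element per channel.
pooled = nn.AdaptiveMaxPool1d(1)(x)   # shape: (4, 16, 1)
result = pooled.squeeze(-1)           # shape: (4, 16), like Keras GlobalMaxPooling1D

# Same result as taking a plain max over the length dimension:
assert torch.equal(result, x.max(dim=2).values)
```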


Thanks for the answer. So, in this case the output_size should match the input number of features?

The output_size is the target length of the pooled output along the sequence dimension; it has nothing to do with the number of channels or the batch size. For global max pooling you want output_size=1. You can check the example in the documentation: by printing the shapes of the input and output, you should see what you want.
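A quick shape check makes the meaning of output_size concrete (shapes here are made up for illustration); only the last dimension changes, while batch and channel dimensions pass through untouched:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 8, 30)  # (batch, channels, length)

# output_size controls only the output length, not channels or batch.
print(nn.AdaptiveMaxPool1d(1)(x).shape)  # torch.Size([2, 8, 1])
print(nn.AdaptiveMaxPool1d(5)(x).shape)  # torch.Size([2, 8, 5])
```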


A bit late to the party, but I just created this:

import torch
import torch.nn as nn

class GlobalMaxPooling1D(nn.Module):

    def __init__(self, data_format='channels_last'):
        super(GlobalMaxPooling1D, self).__init__()
        self.data_format = data_format
        # channels_last expects (batch, steps, channels) as in Keras;
        # channels_first expects (batch, channels, steps) as in PyTorch convs.
        self.step_axis = 1 if self.data_format == 'channels_last' else 2

    def forward(self, input):
        # torch.max over a dimension returns (values, indices); keep the values.
        return torch.max(input, dim=self.step_axis).values

Based on
https://github.com/keras-team/keras/blob/master/keras/layers/pooling.py#L557
