How to replicate F.adaptive_max_pool1d using F.max_pool1d


I am trying to replicate adaptive pooling with normal pooling, computing kernel_size and padding dynamically, but I can't get it to work.

The following code made me doubt that it is that straightforward at all:

> a = Variable(torch.FloatTensor([[[1, 1, 2, 8, 1, 1, 3]]]))
> F.adaptive_max_pool1d(a, output_size=2).data.numpy()
array([[[8., 8.]]], dtype=float32)

This seems to imply that the 8 lands in both pooling windows (!?), or that something completely different is happening under the hood.
I saw in the source that it's a THNN function; I'll try to read that later, but I don't know C.

Does anyone know more about the algorithm used here?
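
My current guess, inferred only from the observed output (I haven't read the THNN source): adaptive pooling divides the input into `output_size` windows that may overlap, roughly `start_i = floor(i * L / out)` and `end_i = ceil((i + 1) * L / out)`. For `L = 7, out = 2` that gives windows `[0:4]` and `[3:7]`, which both contain the 8 at index 3. A minimal pure-Python sketch of that assumption:

```python
import math

def adaptive_max_pool1d_ref(xs, out_size):
    """Sketch of adaptive max pooling over a 1-D list.
    Assumption (not verified against the C source): window i spans
    floor(i * L / out) .. ceil((i + 1) * L / out), so windows can overlap."""
    L = len(xs)
    return [
        max(xs[math.floor(i * L / out_size):math.ceil((i + 1) * L / out_size)])
        for i in range(out_size)
    ]

print(adaptive_max_pool1d_ref([1, 1, 2, 8, 1, 1, 3], 2))  # -> [8, 8]
```

This at least reproduces the `[8., 8.]` above, which would mean the windows genuinely overlap rather than tile the input.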

Cheers, Johannes


The code above confuses me, because no matter how I pad, the result differs from the adaptive pooling:

# right padding
> b = F.pad(a, pad=(0, 1)); b.data.numpy()
array([[[1., 1., 2., 8., 1., 1., 3., 0.]]], dtype=float32)
> F.max_pool1d(b, kernel_size=4).data.numpy()
array([[[8., 3.]]], dtype=float32)

# left padding
> b = F.pad(a, pad=(1, 0)); b.data.numpy()
array([[[0., 1., 1., 2., 8., 1., 1., 3.]]], dtype=float32)
> F.max_pool1d(b, kernel_size=4).data.numpy()
array([[[2., 8.]]], dtype=float32)
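
If the two adaptive windows really do overlap (both covering the 8 at index 3), then no amount of padding can help as long as the stride equals the kernel size; the windows would need a stride smaller than the kernel. A pure-Python sketch of strided max pooling illustrating this for my example (with `F.max_pool1d` this should correspond to `kernel_size=4, stride=3`, though that's my assumption for this one input, not a general recipe):

```python
def max_pool1d_ref(xs, kernel_size, stride):
    # Plain strided max pooling over a 1-D list, no padding:
    # window i covers xs[i*stride : i*stride + kernel_size].
    return [
        max(xs[i:i + kernel_size])
        for i in range(0, len(xs) - kernel_size + 1, stride)
    ]

a = [1, 1, 2, 8, 1, 1, 3]
print(max_pool1d_ref(a, kernel_size=4, stride=3))  # -> [8, 8]
```

With stride 3 and kernel 4 the windows are `[0:4]` and `[3:7]`, overlapping at index 3, which matches the adaptive result, whereas any stride-4 pooling over a padded input keeps the 8 in exactly one window.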