Pooling using indices from another max pooling

I need to implement a pooling layer, which will pool from a given tensor, based on the indices generated by the max pooling on another tensor. For example,

import torch
import torch.nn as nn

# Define a tensor
X = torch.rand(2, 3, 4, 4)
X[0, 0, :, :]
tensor([[0.9889, 0.2736, 0.6786, 0.8688],
        [0.5038, 0.4131, 0.8648, 0.4014],
        [0.1842, 0.8754, 0.5671, 0.6143],
        [0.1987, 0.8137, 0.6612, 0.5943]])

# Define another tensor
Y = torch.rand(2, 3, 4, 4)
Y[0, 0, :, :]
tensor([[0.7160, 0.2788, 0.1997, 0.4487],
        [0.0241, 0.2625, 0.3674, 0.5985],
        [0.1600, 0.4911, 0.1724, 0.0827],
        [0.6873, 0.8463, 0.5221, 0.8582]])

# Define a maximum pooling layer        
max_pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
        
# Apply maximum pooling on tensor `X`
X_p = max_pool(X)
X_p[0, 0, :, :]
tensor([[0.9889, 0.8688],
        [0.8754, 0.6612]])

Now, what I would like to do is pool from tensor Y using the indices of the maximum values of tensor X. The pooling result on tensor Y should then be the following:

Y_p[0, 0, :, :]
tensor([[0.7160, 0.4487],
        [0.4911, 0.5221]])

Thank you!


I suggest you use the functional API for pooling in the forward pass so that you don’t have to redefine the layers each time. Then you are flexible to just pass the size that you need during your forward pass. I.e.,

x = torch.nn.functional.max_pool2d(x, your_kernel_size, your_stride)
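A minimal runnable sketch of that suggestion (the shapes here are just for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 2, 4, 4)
# kernel size and stride are plain call arguments, so they can
# vary per forward pass without redefining any layer object
out = F.max_pool2d(x, kernel_size=2, stride=2)
print(out.shape)  # torch.Size([1, 2, 2, 2])
```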

Hi @rasbt, thanks for your answer, but I do not understand what you’re suggesting. What is the difference between torch.nn.functional's max_pool2d and torch.nn's MaxPool2d? To my understanding, what you wrote will do the maximum pooling on x, but how would I use the resulting indices in order to pool from another tensor y?

Inspired by @ptrblck’s answer in MaxPool2d indexing order :

import torch, torch.nn as nn

def retrieve_elements_from_indices(tensor, indices):
    # flatten the spatial dims, pick the values at the max-pool indices,
    # then restore the pooled output shape
    flattened_tensor = tensor.flatten(start_dim=2)
    output = flattened_tensor.gather(dim=2, index=indices.flatten(start_dim=2)).view_as(indices)
    return output


# define data variables
maxpool = nn.MaxPool2d(2, 2, return_indices=True)
data1 = torch.randn(1, 2, 4, 4)
data2 = torch.randn(1, 2, 4, 4)

# maxpool data1
output1, indices = maxpool(data1)

# retrieve corresponding elements from data2 according to indices
output2 = retrieve_elements_from_indices(data2, indices)
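As a quick sanity check on this approach: gathering data1 with its own indices should exactly reproduce the max-pool output, and output2 then has the same pooled shape. A self-contained version:

```python
import torch
import torch.nn as nn

def retrieve_elements_from_indices(tensor, indices):
    # flatten spatial dims, gather at the max-pool indices, restore pooled shape
    flattened = tensor.flatten(start_dim=2)
    return flattened.gather(dim=2, index=indices.flatten(start_dim=2)).view_as(indices)

maxpool = nn.MaxPool2d(2, 2, return_indices=True)
data1 = torch.randn(1, 2, 4, 4)
data2 = torch.randn(1, 2, 4, 4)

output1, indices = maxpool(data1)

# gathering data1 with its own indices reproduces the pooled output exactly
assert torch.equal(retrieve_elements_from_indices(data1, indices), output1)

# pooling data2 at data1's argmax positions yields the same pooled shape
output2 = retrieve_elements_from_indices(data2, indices)
assert output2.shape == output1.shape
```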

@InnovArul Never knew about the return_indices argument… Now everything makes sense! Thank you!


Oh, I misread your question. I somehow thought it was about how to dynamically change the pooling size based on the input. MaxPool2d and max_pool2d do the same thing, but with MaxPool2d you instantiate it as an object instance (of a class), so you can’t conveniently change the pooling size during the forward pass.

I see! Many thanks in any case!

In case anyone needs this for 1d pooling:
data2.gather(dim=2, index=indices)
is all you need.
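A minimal runnable sketch of the 1d case (assuming input of shape (N, C, L)): since MaxPool1d returns indices along the single length dim, a direct gather suffices with no flattening.

```python
import torch
import torch.nn as nn

maxpool = nn.MaxPool1d(2, stride=2, return_indices=True)
data1 = torch.randn(1, 3, 8)
data2 = torch.randn(1, 3, 8)

output1, indices = maxpool(data1)

# indices already has the pooled shape (N, C, L_out), so gather directly
output2 = data2.gather(dim=2, index=indices)

# sanity check: gathering data1 with its own indices reproduces the pooled output
assert torch.equal(data1.gather(dim=2, index=indices), output1)
assert output2.shape == output1.shape
```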