Adding a Custom Pooling Algorithm to PyTorch

Dear All,

I am very new to CNNs and want to use my own pooling function instead of traditional methods like max pooling and average pooling. Can you suggest a way to integrate my own pooling function into existing PyTorch CNN code?

Thanks
Achyut

Hello 🙂

You can make your own class that implements the pooling of your choice. It needs to inherit from the PyTorch Module class (torch.nn.Module). Here is an example:

import torch.nn.functional as F
from torch.nn import Module


class GeneralizedMeanPooling(Module):
    """Applies a 2D power-average adaptive pooling over an input signal composed of several input planes.
    The function computed is: :math:`f(X) = pow(sum(pow(X, p)), 1/p)`
        - At p = infinity, one gets Max Pooling
        - At p = 1, one gets Average Pooling
    The output is of size H x W, for any input size.
    The number of output features is equal to the number of input planes.
    Args:
        output_size: the target output size of the image of the form H x W.
                     Can be a tuple (H, W) or a single H for a square image H x H
                     H and W can be either an ``int`` or ``None``, which means the size will
                     be the same as that of the input.
    """

    def __init__(self, norm, output_size=1, eps=1e-6):
        super(GeneralizedMeanPooling, self).__init__()
        assert norm > 0
        self.p = float(norm)
        self.output_size = output_size
        self.eps = eps

    def forward(self, x):
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, self.output_size).pow(1. / self.p)

    def __repr__(self):
        return self.__class__.__name__ + '(' \
            + str(self.p) + ', ' \
            + 'output_size=' + str(self.output_size) + ')'
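
To connect this back to the original question about plugging custom pooling into an existing CNN: the module is used like any other layer. A rough sketch (the tiny CNN below is just made up for illustration, not from your code):

import torch
from torch import nn

# Hypothetical small CNN that uses the custom pooling layer in place of
# a standard max/average pooling head.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    GeneralizedMeanPooling(norm=3, output_size=1),  # custom pooling layer
    nn.Flatten(),
    nn.Linear(64, 10),
)

x = torch.rand(8, 3, 32, 32)   # batch of 8 RGB images
print(model(x).shape)          # torch.Size([8, 10])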

Thanks, Olof Harrysson!


If this implementation is used while training a model with backprop, will self.p be learned? If not, how should the implementation be changed so that self.p is learned, following the GeM and GrokNet papers?

Hi @hendryx,

I didn’t write that module and I don’t know how the original Gem module from the papers is implemented. In this one, the self.p attribute won’t be trainable since it’s just a float. I found another implementation that changes the self.p to be trainable, maybe you can have a look at that and tell us if it matches the original papers?

Code Source

import torch
import torch.nn.functional as F
from torch import nn


class GeM(nn.Module):
    def __init__(self, p=3, eps=1e-6):
        super(GeM, self).__init__()
        # p is registered as a learnable parameter, so it is updated by backprop
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):
        return self.gem(x, p=self.p, eps=self.eps)

    def gem(self, x, p=3, eps=1e-6):
        # Generalized mean: clamp for numerical stability, raise to the power p,
        # average over the full spatial extent, then take the 1/p power.
        return F.avg_pool2d(x.clamp(min=eps).pow(p), (x.size(-2), x.size(-1))).pow(1. / p)

    def __repr__(self):
        return (self.__class__.__name__
                + '(p=' + '{:.4f}'.format(self.p.data.tolist()[0])
                + ', eps=' + str(self.eps) + ')')
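
As a side note, and only as a sketch of how such a layer is typically dropped in (this is my own example, not from the papers): with a torchvision backbone you could replace the global average pooling, e.g.

import torchvision.models as models

# Hypothetical example: swap ResNet-18's global average pooling for the
# trainable GeM layer above. GeM also returns (N, C, 1, 1), so the rest
# of the network is unchanged.
backbone = models.resnet18()
backbone.avgpool = GeM(p=3)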

Thanks for the quick reply @Oli! The implementation you’ve shared looks correct.

I wrote a quick test to confirm that self.p is trained. Here we can see that p gets close to 1, which is the closed-form solution, since the target is plain average pooling (i.e. GeM with p = 1).

import torch

from mtm.util.util import GeM  # the GeM module shown above (imported from my own project)


t1 = torch.rand([1, 64, 8, 8])
t2 = t1.mean(dim=[-1, -2])
gem = GeM()
print(gem.p)

optimizer = torch.optim.SGD(gem.parameters(), lr=0.01, momentum=0.9)

for i in range(100):
    optimizer.zero_grad()

    outputs = gem(t1).squeeze(-1).squeeze(-1)
    loss = torch.norm(t2 - outputs, 2)
    loss.backward()
    optimizer.step()

print(gem.p)

which prints:

Parameter containing:
tensor([3.], requires_grad=True)
Parameter containing:
tensor([0.9920], requires_grad=True)
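
As an extra sanity check (my own addition, not part of the test above), the trained GeM output should now be close to the plain mean:

# Should print True if training converged roughly as shown above.
print(torch.allclose(gem(t1).squeeze(), t2.squeeze(), atol=1e-2))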