Pooling kernel size larger than the input size causes a crash

Here is the code that triggers it:

In [29]: m = nn.AvgPool2d(60, stride=60)

In [30]: input = autograd.Variable(torch.randn(1, 3, 49, 64))

In [31]: output = m(input)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-31-b822ecf3f6dc> in <module>()
----> 1 output = m(input)

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/pooling.pyc in forward(self, input)
    503     def forward(self, input):
    504         return F.avg_pool2d(input, self.kernel_size, self.stride,
--> 505                             self.padding, self.ceil_mode, self.count_include_pad)
    506 
    507     def __repr__(self):

/usr/local/lib/python2.7/dist-packages/torch/nn/functional.pyc in avg_pool2d(input, kernel_size, stride, padding, ceil_mode, count_include_pad)
    262     """
    263     return _functions.thnn.AvgPool2d.apply(input, kernel_size, stride, padding,
--> 264                                            ceil_mode, count_include_pad)
    265 
    266 

/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/thnn/pooling.pyc in forward(ctx, input, kernel_size, stride, padding, ceil_mode, count_include_pad)
    358             ctx.stride[1], ctx.stride[0],
    359             ctx.padding[1], ctx.padding[0],
--> 360             ctx.ceil_mode, ctx.count_include_pad)
    361         return output
    362 

RuntimeError: Given input size: (3x49x64). Calculated output size: (3x0x1). Output size is too small at /pytorch/torch/lib/THNN/generic/SpatialAveragePooling.c:64

So the pooling kernel size must not be larger than the input's size?

Pooling kernel size must be SMALLER than the input size.
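To see why the traceback reports an output size of `(3x0x1)`, you can evaluate the standard pooling output-size formula by hand. A minimal sketch (plain Python, mirroring the formula THNN's spatial pooling uses):

```python
import math

def pool_output_size(input_size, kernel_size, stride, padding=0, ceil_mode=False):
    # floor((input + 2*padding - kernel) / stride) + 1, or ceil(...) with ceil_mode
    size = (input_size + 2 * padding - kernel_size) / stride + 1
    return math.ceil(size) if ceil_mode else math.floor(size)

# Input height 49 with a 60x60 kernel and stride 60:
h_out = pool_output_size(49, kernel_size=60, stride=60)  # floor((49 - 60)/60) + 1 = 0
w_out = pool_output_size(64, kernel_size=60, stride=60)  # floor((64 - 60)/60) + 1 = 1
print(h_out, w_out)  # 0 1 -- matches the (3x0x1) in the error message
```

A zero (or negative) dimension is what triggers the "Output size is too small" RuntimeError.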

If you want global pooling, you can do something like:

`input.mean(dim=2).mean(dim=2)`, which takes the mean over the H and W dimensions.
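As a quick sanity check, the mean-based global pooling matches `nn.AdaptiveAvgPool2d(1)`, which works for any spatial input size. A small sketch using the current tensor API (no `Variable` wrapper needed in recent PyTorch):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 49, 64)

# Global average pooling by reducing the spatial dims in two steps:
# mean over H gives (1, 3, 64), then mean over W gives (1, 3).
gap = x.mean(dim=2).mean(dim=2)

# Equivalent built-in: adaptive pooling to a 1x1 output never hits the
# kernel-larger-than-input error.
gap2 = nn.AdaptiveAvgPool2d(1)(x).view(1, 3)
```

Both produce a `(N, C)` tensor containing the per-channel mean over all spatial positions.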

Hi.

I want to change the code

        for s in setting:
            self.features.append(nn.Sequential(
                nn.AdaptiveAvgPool2d(s),
                nn.Conv2d(in_dim, reduction_dim, kernel_size=1, bias=False),
                nn.BatchNorm2d(reduction_dim, momentum=.95),
                nn.ReLU(inplace=True)
            ))

to

        for s in setting:
            self.features.append(nn.Sequential(
                nn.AvgPool2d(60 // s, stride=60 // s),  # integer division so kernel_size stays an int under Python 3
                nn.Conv2d(in_dim, reduction_dim, kernel_size=1, bias=False),
                nn.BatchNorm2d(reduction_dim, momentum=.95),
                nn.ReLU(inplace=True)
            ))

so that the AvgPool2d can be converted to Caffe's pooling layer, since Caffe has no AdaptiveAvgPool2d.

Caffe can handle cases where the kernel size is larger than the feature map size, which I think is the reason why PyTorch has AdaptiveAvgPool2d.
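For what it's worth, the two variants agree exactly when the input size is divisible by the target output size. A sketch, assuming the 60x60 feature map from the code above and settings `s` that divide 60 evenly:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 60, 60)

for s in (1, 2, 3, 6):  # settings that divide 60 evenly
    adaptive = nn.AdaptiveAvgPool2d(s)(x)
    # With 60 % s == 0, adaptive pooling uses uniform non-overlapping
    # windows of size 60//s, which is exactly this fixed AvgPool2d:
    fixed = nn.AvgPool2d(60 // s, stride=60 // s)(x)
    assert torch.allclose(adaptive, fixed, atol=1e-5)
```

If `s` does not divide the input size, adaptive pooling uses unevenly sized windows and no single fixed `AvgPool2d` reproduces it, so the replacement above only holds for evenly divisible settings.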