F.avg_pool2d() gives different behavior in different PyTorch versions

Code:

import torch                                                      
import torch.nn.functional as F                                  
from torch.autograd import Variable                              
x = torch.arange(0,9).view(1,1,3,3)                                 
x = x.cuda().float()
o = F.avg_pool2d(Variable(x), kernel_size=3, stride=1, padding=1)
>>> x
(0 ,0 ,.,.) = 
  0  1  2
  3  4  5
  6  7  8
[torch.cuda.FloatTensor of size 1x1x3x3 (GPU 0)]

PyTorch '0.5.0a0+d365158' (GitHub latest source as of 19-Jun-2018, 8:57 IST) [WRONG]

o = 
tensor([[[[2.0000, 2.5000, 3.0000],
          [3.5000, 4.0000, 4.5000],
          [5.0000, 5.5000, 6.0000]]]], device='cuda:0')

PyTorch '0.1.12+6f6d70f': [CORRECT]

o = 
Variable containing:
(0 ,0 ,.,.) = 
  0.8889  1.6667  1.3333
  2.3333  4.0000  3.0000
  2.2222  3.6667  2.6667
[torch.cuda.FloatTensor of size 1x1x3x3 (GPU 0)]

It seems the old PyTorch version ('0.1.12+6f6d70f') is giving the right behaviour. Does this mean the bug has been re-introduced, or that the files are not synced?

Answer:

F.avg_pool2d() has a parameter that controls this behaviour: count_include_pad.

According to the documentation (https://pytorch.org/docs/stable/nn.html#torch.nn.AvgPool2d), the default is count_include_pad=True, but the actual behaviour corresponds to count_include_pad=False.
Either the documentation has to be modified or the default behaviour has to match count_include_pad=True.
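A quick way to see the two behaviours side by side is to pass count_include_pad explicitly. This is just an illustrative sketch (run on CPU for simplicity); the exact outputs quoted in the thread came from the specific builds mentioned above:

import torch
import torch.nn.functional as F

x = torch.arange(0, 9).view(1, 1, 3, 3).float()

# count_include_pad=True: every window is divided by the full kernel area (9),
# so the corner value is (0+1+3+4)/9 = 0.8889 -- the 0.1.12 output above.
with_pad = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1,
                        count_include_pad=True)

# count_include_pad=False: each window is divided only by the number of
# non-padded elements (4 at a corner), so the corner value is 8/4 = 2.0 --
# the output reported for the 0.5.0a0 build above.
without_pad = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1,
                           count_include_pad=False)

print(with_pad)
print(without_pad)

So whichever default is kept, passing count_include_pad explicitly makes the result independent of which behaviour a given build happens to default to.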

Hi,

thank you for researching this!

I'm not 100% certain what the default should be, but consistency between doc and implementation would certainly be good. :)

I submitted an issue for this on GitHub.

Best regards

Thomas
