I need to use nn.AdaptiveAvgPool2d so that the spatial dimensions are the same across all batches. I understand the case where it is used to decrease the dimensions. My question is about the case where a dimension is increased.
import torch
import torch.nn as nn

layer = nn.AdaptiveAvgPool2d((1, 64))
input = torch.randn(1, 3, 64, 16)  # B, C, H, W
output = layer(input)
print(output.shape)
# torch.Size([1, 3, 1, 64])
As seen in the code, the height is reduced to 1 while the width is increased from 16 to 64. Can someone explain how pooling achieves this? Is padding used internally?
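For context, here is a sketch of the index rule adaptive pooling is usually described with: output position i averages the input slice [floor(i * in / out), ceil((i + 1) * in / out)). When out > in, these windows shrink to size 1 and repeat input elements, so no padding is needed. The helper below is my own illustration, not PyTorch source, and I verify it against nn.AdaptiveAvgPool2d:

```python
import torch
import torch.nn as nn


def adaptive_window(i, in_size, out_size):
    # Window for output index i: [start, end), per the usual
    # floor/ceil rule for adaptive pooling.
    start = (i * in_size) // out_size
    end = ((i + 1) * in_size + out_size - 1) // out_size  # ceil division
    return start, end


# Reproduce AdaptiveAvgPool2d((1, 64)) on a (1, 3, 64, 16) input
# by averaging over the computed windows manually.
x = torch.randn(1, 3, 64, 16)
ref = nn.AdaptiveAvgPool2d((1, 64))(x)

out = torch.empty(1, 3, 1, 64)
for i in range(1):        # output height: one window covering all 64 rows
    hs, he = adaptive_window(i, 64, 1)
    for j in range(64):   # output width: size-1 windows, each column used 4x
        ws, we = adaptive_window(j, 16, 64)
        out[:, :, i, j] = x[:, :, hs:he, ws:we].mean(dim=(2, 3))

print(torch.allclose(out, ref, atol=1e-6))
```

With in=16 and out=64, output columns j=0..3 all get the window [0, 1), j=4..7 get [1, 2), and so on: each of the 16 input columns is simply repeated 4 times rather than padded.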