Quantized MaxPool2d and AdaptiveAvgPool2d

Hi all,

I have been experimenting with the post-training static quantization feature on VGG-16.
I know that torch.quantization.convert() automatically remaps every layer in the model to its quantized implementation.

However, I noticed that a few layer types are not converted:

nn.MaxPool2d(), nn.AdaptiveAvgPool2d(), and nn.Dropout()

I believe nn.Dropout() should not be an issue, whether it is quantized or not.
However, I am not sure whether leaving nn.MaxPool2d() and nn.AdaptiveAvgPool2d() unquantized makes any difference.

I have seen nn.quantized.MaxPool2d() mentioned here and tried remapping my layer to this module. But the layer still reports nn.modules.pooling.MaxPool2d() when I check its type after reassigning.

I have also seen nn.quantized.functional.max_pool2d() and nn.quantized.functional.adaptive_avg_pool2d() mentioned in the Quantization documentation. But I have read on the forum that it is not conventional to call the functional directly; its module or wrapper class should be used instead.
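For what it's worth, here is a small standalone check (not taken from the docs, just a sketch) showing that the quantized functional pooling ops, and even the plain functional ones, accept a quantized tensor directly:

```python
import torch
import torch.nn.quantized.functional as qF

# Build a quantized tensor to pool over (scale/zero_point chosen arbitrarily).
xq = torch.quantize_per_tensor(
    torch.randn(1, 3, 8, 8), scale=0.1, zero_point=0, dtype=torch.quint8
)

# Quantized functional pooling: output stays quantized.
yq = qF.max_pool2d(xq, kernel_size=2)           # shape (1, 3, 4, 4)
zq = qF.adaptive_avg_pool2d(xq, output_size=1)  # shape (1, 3, 1, 1)

# The plain functional also works on quantized tensors, which is why the
# unconverted nn.MaxPool2d() module keeps working after convert().
y2 = torch.nn.functional.max_pool2d(xq, kernel_size=2)
```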

So, I would like to ask: does leaving nn.MaxPool2d() and nn.AdaptiveAvgPool2d() unquantized affect my quantized model's performance?

Should I just leave nn.MaxPool2d() and nn.AdaptiveAvgPool2d() as they are?

Or, if I should change them to their quantized implementations, how should I do it?

Thanks.


You do not need to change MaxPool2d() and AdaptiveAvgPool2d() from nn to nn.quantized. These operations do not require calibration, so convert() leaves the modules in place; they work correctly on quantized inputs as-is. Under the hood, the modules dispatch to the appropriate quantized kernels when quantized tensors are passed in.
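To illustrate, here is a minimal eager-mode sketch (a hypothetical toy model standing in for VGG-16): after prepare/convert, the conv is remapped to its quantized counterpart while the pooling layer stays a plain nn.MaxPool2d, and the quantized model still runs end to end.

```python
import torch
import torch.nn as nn

# Toy model for post-training static quantization (not VGG-16).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2)   # left as the plain nn module
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.pool(self.relu(self.conv(x)))
        return self.dequant(x)

torch.backends.quantized.engine = "fbgemm"
model = TinyNet().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

prepared = torch.quantization.prepare(model)
with torch.no_grad():
    prepared(torch.randn(1, 3, 32, 32))   # calibration pass
quantized = torch.quantization.convert(prepared)

# conv is now a quantized module; pool is untouched but still works
# on the quantized activations flowing through it.
print(type(quantized.conv))
print(type(quantized.pool))
out = quantized(torch.randn(1, 3, 32, 32))
```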
