3D Pooling Layers in FP32 in Graph Mode

Hello. I am trying to post-training quantize a 3D ResNet using the new graph mode quantization. As far as I understand, both AdaptiveAvgPool3d and MaxPool3d have no quantized kernel and should be marked as "non traceable". From the (prototype) FX Graph Mode Quantization User Guide (PyTorch Tutorials 1.8.1+cu102 documentation), I thought the following should do the trick:

import torch.nn as nn
from torch.quantization.quantize_fx import prepare_fx

prep_config_dict = {
    "non_traceable_module_class": [nn.MaxPool3d, nn.AdaptiveAvgPool3d]
}
# model_to_quantize and qconfig_dict are defined earlier
prepared_model = prepare_fx(
    model_to_quantize, qconfig_dict, prepare_custom_config_dict=prep_config_dict)

Yet, it does not work. Neither did my experiments using "non_traceable_module_name".
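For reference, the name-based variant I tried looked roughly like the sketch below; the attribute names ("maxpool", "avgpool") are hypothetical and would have to match the fully-qualified submodule names in the actual model (as reported by model.named_modules()).

prep_config_dict = {
    # fully-qualified attribute names of the submodules to skip during tracing
    "non_traceable_module_name": ["maxpool", "avgpool"]
}
prepared_model = prepare_fx(
    model_to_quantize, qconfig_dict, prepare_custom_config_dict=prep_config_dict)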

Appreciate any help I can get. Thank you in advance.

@jerryzh168 I thought maxpool and avgpool were supported, can you comment on this, please?

Given that I am using a 3D ResNet, I also need the 3D versions of every layer. As far as I can tell, only AvgPool3d is supported; neither AdaptiveAvgPool3d nor MaxPool3d appears to be.

You don’t need to mark them as non-traceable modules, I think; they are leaf modules and will not be traced by default. Marking something as non-traceable is typically used for a submodule that cannot be traced (e.g. a submodule containing conv - linear - other_ops - etc.).
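For illustration, a minimal sketch of the kind of submodule "non_traceable_module_class" is meant for (the CustomBlock below is hypothetical, not something from your model): a composite module that FX symbolic tracing cannot handle, for example because of data-dependent control flow.

import torch.nn as nn

class CustomBlock(nn.Module):
    # hypothetical composite submodule that symbolic tracing cannot handle
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(8, 8, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.conv(x)
        # data-dependent control flow breaks FX symbolic tracing
        if x.mean() > 0:
            x = x * 2
        return x

prep_config_dict = {
    # mark the whole composite block as non-traceable; built-in leaf modules
    # like nn.MaxPool3d / nn.AdaptiveAvgPool3d need no such entry
    "non_traceable_module_class": [CustomBlock],
}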

AdaptiveAvgPool3d (pytorch/quantization_patterns.py at master · pytorch/pytorch · GitHub) and MaxPool3d (pytorch/quantization_patterns.py at master · pytorch/pytorch · GitHub) are supported; these ops work for both float and quantized inputs.
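A minimal sketch of that, assuming the PyTorch 1.8-style FX graph mode quantization API and the fbgemm backend; Tiny3DNet is a hypothetical stand-in for a 3D ResNet stem, just to show that the 3D pooling modules go through prepare_fx/convert_fx without any custom config:

import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

class Tiny3DNet(nn.Module):
    # hypothetical toy model exercising the 3D pooling ops
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(3, 8, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.maxpool = nn.MaxPool3d(kernel_size=2)
        self.avgpool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.maxpool(self.relu(self.conv(x)))
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        return self.fc(x)

model = Tiny3DNet().eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}

prepared = prepare_fx(model, qconfig_dict)   # no custom config needed for the pools
prepared(torch.randn(1, 3, 8, 16, 16))       # calibration pass with dummy data
quantized = convert_fx(prepared)
print(quantized)                              # the pooling modules are kept as-is in the quantized graph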