Place MinMaxObserver on activation layers

When I apply the default qconfig to my model, prepare() automatically puts a MinMaxObserver on the conv and batchnorm layers, but not on the activation layers. I need the min/max values of those layers too. Is there a simple way to get them?

import torch.nn as nn
import torch.ao.quantization as Q

testmodel = nn.Sequential(
    nn.Conv2d(10, 10, 3),
    nn.BatchNorm2d(10),
    nn.PReLU(10),
)
testmodel.qconfig = Q.default_qconfig
Q.prepare(testmodel, inplace=True)
print(testmodel)  # no observer on the PReLU

What do the activation layers refer to? nn.PReLU?

Yes. If it were ReLU or sigmoid I could infer the value range from the batchnorm output, but that's not the case for PReLU, whose negative slope is a learned parameter.
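
In the meantime I can grab the range by hand: a forward hook that feeds the PReLU output into a standalone MinMaxObserver. A rough sketch (obs and record_range are my own names; the hook only records the output, it doesn't change it):

import torch
import torch.nn as nn
import torch.ao.quantization as Q

prelu = nn.PReLU(10)
obs = Q.MinMaxObserver()  # tracks the running min/max of whatever passes through it

def record_range(mod, inp, out):
    obs(out)  # observe the activation output; returning None leaves it unchanged

prelu.register_forward_hook(record_range)

_ = prelu(torch.randn(4, 10, 8, 8))
print(obs.min_val, obs.max_val)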

I found that some activation modules (including nn.Hardtanh and nn.Hardswish) get an observer, but others don't. I'm not sure where the difference lies.
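
A quick way to see which modules got one is to check for the attached observer after prepare(); eager-mode prepare seems to attach it as a child module named activation_post_process, at least in the versions I looked at:

import torch.nn as nn
import torch.ao.quantization as Q

m = nn.Sequential(nn.Conv2d(10, 10, 3), nn.Hardswish(), nn.PReLU(10))
m.qconfig = Q.default_qconfig
Q.prepare(m, inplace=True)

for name, mod in m.named_modules():
    if not name or "activation_post_process" in name:
        continue  # skip the root container and the observers themselves
    print(name, type(mod).__name__, hasattr(mod, "activation_post_process"))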

It seems to have to do with the function
torch.ao.quantization.quantization_mappings.get_default_qconfig_propagation_list(),
which lists all the module types that get a qconfig (and hence an observer). I can override it by passing allow_list to torch.ao.quantization.prepare(); see the sketch below.
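
For the record, this is roughly what worked for me. On the version I tested, the function returns a plain set, so it can be extended before being passed in:

import torch.nn as nn
import torch.ao.quantization as Q
from torch.ao.quantization.quantization_mappings import get_default_qconfig_propagation_list

# Extend the default propagation list with the missing activation type.
allow_list = get_default_qconfig_propagation_list()
allow_list.add(nn.PReLU)

testmodel = nn.Sequential(
    nn.Conv2d(10, 10, 3),
    nn.BatchNorm2d(10),
    nn.PReLU(10),
)
testmodel.qconfig = Q.default_qconfig
Q.prepare(testmodel, inplace=True, allow_list=allow_list)
print(testmodel)  # the PReLU now has an observer attached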