When I apply the default qconfig to my model, it automatically puts a MinMaxObserver on the conv and batchnorm layers, but not on the activation layers. But I also need the min/max values of those layers. Is there a simple way to do that?
import torch.nn as nn
import torch.ao.quantization as Q
testmodel = nn.Sequential(
nn.Conv2d(10,10,3),
nn.BatchNorm2d(10),
nn.PReLU(10),
)
testmodel.qconfig = Q.default_qconfig
Q.prepare(testmodel, inplace=True)
print(testmodel) # no observer on prelu
It seems to have to do with the function
torch.ao.quantization.quantization_mappings.get_default_qconfig_propagation_list(),
which lists all quantizable module types. I can override it by passing the allow_list argument to torch.ao.quantization.prepare().
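For reference, here is a minimal sketch of that workaround in eager mode. It assumes allow_list accepts any set of module types: extending the default propagation list with nn.PReLU makes prepare() attach an activation observer to it, and after a calibration forward pass the observed range can be read off activation_post_process.

```python
import torch
import torch.nn as nn
import torch.ao.quantization as Q

testmodel = nn.Sequential(
    nn.Conv2d(10, 10, 3),
    nn.BatchNorm2d(10),
    nn.PReLU(10),
)
testmodel.qconfig = Q.default_qconfig

# Extend the default propagation list with nn.PReLU so that
# prepare() also attaches an observer to that module.
allow_list = Q.quantization_mappings.get_default_qconfig_propagation_list() | {nn.PReLU}
Q.prepare(testmodel, inplace=True, allow_list=allow_list)

# Calibrate with some data, then read the observed range off the PReLU.
testmodel(torch.randn(1, 10, 8, 8))
obs = testmodel[2].activation_post_process
print(obs.min_val, obs.max_val)
```

Note that the observer tracks the PReLU's output range only after at least one forward pass, so calibration data is required before min_val/max_val are meaningful.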