I want to quantize a model so that both weights and activations are quantized symmetrically. Here is my code:
import torch
from torchvision import models
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

rn18 = models.resnet18().eval()
data = torch.randn(1, 3, 224, 224)
qconfig = torch.ao.quantization.QConfig(
    activation=torch.ao.quantization.observer.HistogramObserver.with_args(
        qscheme=torch.per_tensor_symmetric, dtype=torch.qint8
    ),
    # per-channel symmetric observer for the weights
    weight=torch.ao.quantization.default_per_channel_weight_observer,
)
qconfig_mapping = torch.ao.quantization.QConfigMapping().set_global(qconfig)
# example_inputs is expected to be a tuple
prepared = prepare_fx(rn18, qconfig_mapping, (data,))
for _ in range(10):  # calibration passes
    prepared(data)
quantized_rn18 = convert_fx(prepared)
quantized_rn18.graph.print_tabular()
The printed graph contains only the original float ops, with no quantize/dequantize nodes, so the model is not actually quantized. How can I solve this problem?
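For reference, this is how I am checking whether quantization actually happened, beyond eyeballing the printed table. It is a minimal sketch on a toy model with the default fbgemm qconfig mapping (which does produce a quantized graph); `count_quant_nodes` is a hypothetical helper I wrote, not a PyTorch API. Running the same check on my symmetric-qint8 resnet18 above reports zero quantize nodes.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

def count_quant_nodes(gm):
    """Count quantize_per_tensor call_function nodes in an FX GraphModule."""
    n = 0
    for node in gm.graph.nodes:
        if node.op == "call_function" and "quantize_per_tensor" in str(node.target):
            n += 1
    return n

# Toy model so the check is quick to run.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
example = torch.randn(1, 3, 16, 16)

# Default fbgemm qconfig (quint8 activations) for comparison.
qconfig_mapping = get_default_qconfig_mapping("fbgemm")
prepared = prepare_fx(model, qconfig_mapping, (example,))
prepared(example)  # calibrate
quantized = convert_fx(prepared)

print(count_quant_nodes(quantized))  # > 0 when the graph was quantized
```

With this default mapping the count is positive; substituting my symmetric qconfig from the question gives 0, which is how I concluded the model stays in float.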