How to apply per_tensor_symmetric activation quantization?

I want to quantize a model so that both the weights and the activations are quantized symmetrically. Here is my code:

import torch
from torchvision import models
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

rn18 = models.resnet18().eval()
data = torch.randn(1, 3, 224, 224)

# Symmetric per-tensor activations, per-channel symmetric weights
qconfig = torch.ao.quantization.QConfig(
    activation=torch.ao.quantization.observer.HistogramObserver.with_args(
        qscheme=torch.per_tensor_symmetric, dtype=torch.qint8
    ),
    weight=torch.ao.quantization.default_per_channel_weight_observer,
)
qconfig_mapping = torch.ao.quantization.QConfigMapping().set_global(qconfig)

prepared = prepare_fx(rn18, qconfig_mapping, example_inputs=(data,))
# Calibrate the observers
for _ in range(10):
    prepared(data)
quantized_rn18 = convert_fx(prepared)
quantized_rn18.graph.print_tabular()

The printed graph shows that the model was not quantized.
How can I fix this?

You could use torch.ao.quantization.default_symmetric_qnnpack_qconfig, as defined in qconfig.py in pytorch/pytorch on GitHub (commit b8580b08976db89203f2ea7dda0f012520e9471a):

import torch
import torchvision
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

rn18 = torchvision.models.resnet18().eval()
data = torch.randn(1, 3, 224, 224)

qconfig = torch.ao.quantization.default_symmetric_qnnpack_qconfig
qconfig_mapping = torch.ao.quantization.QConfigMapping().set_global(qconfig)

prepared = prepare_fx(rn18, qconfig_mapping, example_inputs=(data,))
# Calibrate the observers
for _ in range(10):
    prepared(data)
quantized_rn18 = convert_fx(prepared)
quantized_rn18.graph.print_tabular()

I used default_symmetric_qnnpack_qconfig, but the printed graph shows that the model is still not quantized.
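
One way to confirm this beyond reading the print_tabular output by eye is to scan the converted graph for quantize/dequantize nodes and quantized submodules. Here is a minimal sketch of such a check (the helper name and the particular ops and module prefixes it looks for are my own choices, not part of any PyTorch API):

import torch

def graph_looks_quantized(gm: torch.fx.GraphModule) -> bool:
    for node in gm.graph.nodes:
        # Explicit quantize/dequantize calls inserted by convert_fx
        if node.op == "call_function" and node.target in (
            torch.quantize_per_tensor,
            torch.dequantize,
        ):
            return True
        if node.op == "call_method" and node.target == "dequantize":
            return True
        # Submodules swapped for their quantized counterparts
        if node.op == "call_module":
            mod_path = type(gm.get_submodule(node.target)).__module__
            if mod_path.startswith(("torch.ao.nn.quantized", "torch.nn.quantized")):
                return True
    return False

print(graph_looks_quantized(quantized_rn18))  # prints False here, matching print_tabular

This only inspects the graph produced above; it does not change the qconfig or the conversion itself.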