LayerNorm can't be quantized in QAT?

The structure of the model after QAT training and conversion is shown below. I noticed that the LayerNorm modules do not show any parameters such as scale or zero_point. Does this mean that LayerNorm has not been quantized? Can QAT be used to quantize LayerNorm at all? I am using PyTorch 1.13.0, and it seems that LayerNorm cannot be quantized.
(bn1): QuantizedLayerNorm((8, 32), eps=1e-05, elementwise_affine=True)
(prelu1): ReLU()
(bn2): QuantizedLayerNorm((16, 16), eps=1e-05, elementwise_affine=True)
(prelu2): QuantizedPReLU()
(bn3): QuantizedLayerNorm((32, 8), eps=1e-05, elementwise_affine=True)
(prelu3): QuantizedPReLU()
(dbn3): QuantizedLayerNorm((16, 16), eps=1e-05, elementwise_affine=True)
(dprelu3): QuantizedPReLU()
(dbn2): QuantizedLayerNorm((8, 32), eps=1e-05, elementwise_affine=True)
(dprelu2): QuantizedPReLU()
(dbn1): QuantizedLayerNorm((1, 65), eps=1e-05, elementwise_affine=True)
(dprelu1): QuantizedPReLU()
(quant): Quantize(scale=tensor([0.1354]), zero_point=tensor([0]), dtype=torch.quint8)
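
For reference, here is a minimal sketch of how I check this (a simplified stand-in, not my actual model): a single nn.LayerNorm between quant/dequant stubs, run through eager-mode QAT with the default fbgemm qconfig, then inspected after convert() for scale/zero_point attributes on the converted module.

import torch
import torch.nn as nn
import torch.ao.quantization as tq

# Placeholder module (not the real model): one LayerNorm between
# quant/dequant stubs, enough to see what convert() produces for it.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.bn1 = nn.LayerNorm((8, 32))
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.bn1(x)
        return self.dequant(x)

model = TinyNet().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)

# A few dummy forward passes so the fake-quant observers collect statistics.
for _ in range(10):
    model(torch.randn(4, 8, 32))

quantized = tq.convert(model.eval())

# Inspect the converted LayerNorm: the repr does not seem to print
# scale/zero_point, so check the module type and look up the attributes.
print(quantized.bn1)
print(type(quantized.bn1))
print(getattr(quantized.bn1, "scale", None), getattr(quantized.bn1, "zero_point", None))

If the lookups return tensors rather than None, I assume the layer does carry quantization parameters even though they are not shown in the printed repr, but I would like confirmation.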