Is the bias quantized during PyTorch static quantization?

Please also help with the issues observed in the thread "How to generate a fully-quantized model?", where the following configuration is used:

import torch
from torch.ao.quantization.backend_config import (
    BackendConfig,
    BackendPatternConfig,
    DTypeConfig,
)

# Request quint8 activations, qint8 weights, and a qint32 bias for Conv2d.
weighted_int8_dtype_config = DTypeConfig(
    input_dtype=torch.quint8,
    output_dtype=torch.quint8,
    weight_dtype=torch.qint8,
    bias_dtype=torch.qint32)

conv_config = BackendPatternConfig(torch.nn.Conv2d).add_dtype_config(weighted_int8_dtype_config)
backend_config = BackendConfig("full_int_backend").set_backend_pattern_config(conv_config)
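
For context, here is a minimal sketch of how such a backend_config would typically be wired into FX graph mode quantization. The toy model, qconfig choice, and calibration step below are illustrative assumptions, not from the linked thread:

from torch.ao.quantization import QConfigMapping, get_default_qconfig
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

# Placeholder model and example inputs.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3)).eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# The qconfig's dtypes must be compatible with the DTypeConfig above,
# otherwise the Conv2d pattern is silently left unquantized.
qconfig_mapping = QConfigMapping().set_global(get_default_qconfig("fbgemm"))

prepared = prepare_fx(model, qconfig_mapping, example_inputs,
                      backend_config=backend_config)
prepared(*example_inputs)  # calibrate with representative data
quantized = convert_fx(prepared, backend_config=backend_config)

Note that for convert_fx to actually swap in a quantized module, conv_config may also need .set_root_module(torch.nn.Conv2d) and a reference quantized module via .set_reference_quantized_module(), which the snippet above omits.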