Per-channel setting for QAT quantization

Environment: PyTorch 2.4.1 on Ubuntu 22.04.
I created quantizer instances with XNNPACKQuantizer and X86InductorQuantizer, using get_symmetric_quantization_config() for XNNPACK and the corresponding default config function for x86. In both cases I set is_qat=True and is_per_channel=True and applied the config to the quantizer with set_global().

Issue:
After running prepare_qat_pt2e and inspecting the returned model, the weights are still per-tensor; only the weights are affected this way. I'd like to understand why they aren't becoming per-channel and how to fix it. By contrast, when is_per_channel is set to False, everything, including the weights, is per-tensor as expected.