QAT for specific layers of a model

Hello,
I am working on NAS (neural architecture search) and want to apply quantization-aware training (QAT) to specific layers of the models inside the search space.
My idea is to attach a torch quantizer to the layers as soon as they are created, before they are passed to a sequential module, but I am not sure how to do it. One post suggested passing a qconfig by directly setting the qconfig field of the layer, but that field does not seem to exist. Please let me know how this can be done. Also, if anyone knows of other methods to deal with this, please share them.
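For what it's worth, in the eager-mode quantization workflow the `qconfig` field is not a pre-existing attribute on `nn.Module` — you create it yourself by assigning to it, and `prepare_qat` then only swaps/instruments the modules that have one set. Here is a minimal sketch of that idea on a hypothetical toy model (standing in for one candidate from a search space), quantizing only the first conv:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

# Hypothetical toy model standing in for one candidate architecture.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.ReLU(),
    nn.Conv2d(8, 8, 3),
    nn.ReLU(),
)

# qconfig is just an attribute you assign; modules without one are left in float.
model.qconfig = None                                      # default: no quantization
model[0].qconfig = tq.get_default_qat_qconfig("fbgemm")   # QAT only for the first conv

model.train()
prepared = tq.prepare_qat(model, inplace=False)

# The first conv is now a QAT module carrying a weight fake-quantizer;
# the second conv is untouched.
print(hasattr(prepared[0], "weight_fake_quant"))  # True
print(hasattr(prepared[2], "weight_fake_quant"))  # False
```

Since you build the layers before wrapping them in a `Sequential`, you could assign `layer.qconfig` at creation time and call `prepare_qat` once on the assembled model. Note this is the older eager-mode flow, not the PT2 export flow.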

Thanks in advance

The currently recommended flow is PT2 export quantization: (prototype) PyTorch 2 Export Quantization-Aware Training (QAT) — PyTorch Tutorials 2.2.1+cu121 documentation. For an example of setting the configuration by module name, see pytorch/test/quantization/pt2e/test_xnnpack_quantizer.py at main · pytorch/pytorch · GitHub.