Quantization in federated learning

Based on these tutorials:

Does PyTorch support quantization for federated learning?
If yes, how do I quantize during training without any discrepancy between the local and global models' weights?

We haven't tested this before, but quantization should work for any PyTorch model as long as some constraints are met, e.g. the model needs to be symbolically traceable to use FX graph mode quantization. This sounds like a question about federated learning itself; could you describe it in a bit more detail, preferably with some examples?
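For concreteness, here is a minimal sketch of the FX graph mode QAT flow on a symbolically traceable model, assuming a PyTorch 1.x release where prepare_qat_fx takes a qconfig_dict (newer releases moved these APIs under torch.ao.quantization and also require example inputs); TinyNet is a made-up toy model:

```python
import torch
import torch.nn as nn
from torch.quantization import get_default_qat_qconfig
from torch.quantization.quantize_fx import prepare_qat_fx, convert_fx

class TinyNet(nn.Module):
    """A small, symbolically traceable model (no control flow in forward)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.conv(x))

model = TinyNet().train()  # prepare_qat_fx expects a model in train mode
qconfig_dict = {"": get_default_qat_qconfig("fbgemm")}

# Insert fake-quantize observers; this traces the model into a GraphModule.
prepared = prepare_qat_fx(model, qconfig_dict)

# ... run the QAT training loop on `prepared` here ...

# Swap the fake-quant ops for real quantized kernels.
quantized = convert_fx(prepared.eval())
```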

When I test torch.quantization before applying federated learning, it works and quantizes the models before training (when applying get_default_qat_qconfig and prepare_qat_fx); the model weights are quantized from 128 down to 78. But when the weights are aggregated into the global model after training using convert_fx, an error appears (TypeError: 'GraphModule' object is not subscriptable). Please check this post for the code and the error: python - TypeError: 'GraphModule' object is not subscriptable in PyTorch quantization - Stack Overflow
Why are the model's weights changed before the last API (convert_fx) is applied? Does this API quantize further and create the discrepancy between the global and local models when they are aggregated?
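For reference, a hedged sketch of what the aggregation step can look like when done over state_dicts instead of indexing the models; average_state_dicts and local_models are hypothetical names, and the point is that a GraphModule is an nn.Module, not a dict, so global_model[key]-style indexing raises exactly this TypeError:

```python
import copy
import torch

def average_state_dicts(local_models):
    """FedAvg-style averaging over state_dicts (hypothetical helper).

    GraphModule instances are nn.Modules: state_dict() works on them,
    but subscripting (model[key]) does not, hence the TypeError.
    """
    dicts = [m.state_dict() for m in local_models]
    avg = copy.deepcopy(dicts[0])
    for key in avg:
        if torch.is_tensor(avg[key]) and avg[key].is_floating_point():
            # Average float tensors (weights, observer stats) across clients.
            avg[key] = torch.stack([d[key] for d in dicts]).mean(dim=0)
    return avg

# Aggregate *before* convert_fx, while the clients are still fake-quant
# GraphModules, so local and global weights stay in the same float format:
# global_model.load_state_dict(average_state_dicts(local_models))
```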

Basically, before aggregating I can check the model's device (e.g., with next(global_model.parameters()).device), but after quantization I can't; I get this error: StopIteration.
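That StopIteration is consistent with how the converted model stores its weights: after convert_fx, quantized modules keep weights as packed/buffer state rather than nn.Parameters, so parameters() can be an empty iterator. A hedged workaround is to read the device off the state_dict instead (quantized_model is a stand-in name):

```python
import torch

# next() on an empty iterator raises StopIteration:
# next(quantized_model.parameters())  # -> StopIteration after convert_fx

# Read the device from the first tensor in the state_dict instead; the
# isinstance check skips any non-tensor entries such as packed params.
device = next(t.device for t in quantized_model.state_dict().values()
              if isinstance(t, torch.Tensor))
print(device)
```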