How to set quantization aware training scaling factors?

When I use quantization aware training, the weight tensor scaling factors come out as arbitrary floating point numbers.
I want to deploy my model as 8-bit on an FPGA, so the weight tensor scaling factor must be a power of two (i.e. have an integer base-2 exponent). Is there such an option? What should I do?
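To illustrate what I mean, here is a minimal sketch (the helper name is hypothetical, this is not an existing PyTorch option): I would like the learned scale to be snapped to the nearest power of two, so the scaling multiply becomes a bit shift on the FPGA.

```python
import math

def to_power_of_two_scale(scale: float) -> float:
    """Round a float scale to the nearest power of two (hypothetical helper)."""
    exponent = round(math.log2(scale))
    return 2.0 ** exponent

# With a power-of-two scale, multiplying by `scale` can be done in hardware as a
# shift by |exponent| bits instead of a full fixed-point multiplication.
print(to_power_of_two_scale(0.02))  # 0.015625 == 2**-6
```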

It seems that the quantization scheme is a little bit different from what you need. You can see the details in the design proposal: https://github.com/pytorch/pytorch/wiki/torch_quantization_design_proposal
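If you do need power-of-two scales during QAT, one possible workaround (a sketch, not an official option; it assumes the `torch.quantization` observer/`FakeQuantize` APIs) is a custom observer that snaps the computed scale to the nearest power of two and is plugged into the weight fake-quant of your qconfig:

```python
import torch
from torch.quantization import MinMaxObserver, FakeQuantize, QConfig

class PowerOfTwoObserver(MinMaxObserver):
    """MinMaxObserver variant that snaps the computed scale to a power of two."""

    def calculate_qparams(self):
        scale, zero_point = super().calculate_qparams()
        # Replace the float scale with 2**round(log2(scale)) so the
        # scaling multiply can be implemented as a bit shift.
        exponent = torch.round(torch.log2(scale))
        return torch.pow(2.0, exponent), zero_point

# Hypothetical wiring into a QAT qconfig; adjust quant_min/quant_max and
# qscheme to match your 8-bit FPGA target.
qat_qconfig = QConfig(
    activation=torch.quantization.default_fake_quant,
    weight=FakeQuantize.with_args(
        observer=PowerOfTwoObserver,
        quant_min=-128,
        quant_max=127,
        dtype=torch.qint8,
        qscheme=torch.per_tensor_symmetric,
    ),
)
```

With a qconfig like this, the fake-quant noise seen during training should match the power-of-two scales you later use on the FPGA.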

Depending on the fixed-point arithmetic you use, you can convert the float multiplier into a quantized_multiplier (integer) and a right shift (integer). Please check out https://github.com/pytorch/FBGEMM/blob/master/src/QuantUtils.cc#L107-L157
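A minimal Python sketch of that decomposition (in the spirit of the gemmlowp-style routine linked above; the function names are hypothetical, and the exact rounding in the FBGEMM code may differ slightly):

```python
import math

def quantize_multiplier(real_multiplier: float):
    """Decompose a float multiplier in (0, 1) into an int32 fixed-point multiplier
    and a right shift, so that
        real_multiplier ~= quantized_multiplier * 2**-31 * 2**-right_shift
    """
    assert 0.0 < real_multiplier < 1.0
    # frexp gives real_multiplier = mantissa * 2**exponent with mantissa in [0.5, 1).
    mantissa, exponent = math.frexp(real_multiplier)
    quantized_multiplier = round(mantissa * (1 << 31))
    right_shift = -exponent
    if quantized_multiplier == (1 << 31):  # rounding overflowed the mantissa range
        quantized_multiplier //= 2
        right_shift -= 1
    return quantized_multiplier, right_shift

def requantize(acc: int, quantized_multiplier: int, right_shift: int) -> int:
    """Apply the fixed-point multiplier to an int32 accumulator with rounding."""
    total_shift = 31 + right_shift
    rounding = 1 << (total_shift - 1)
    return (acc * quantized_multiplier + rounding) >> total_shift

mult, shift = quantize_multiplier(0.3)  # e.g. weight_scale * act_scale / out_scale
print(requantize(1000, mult, shift))    # ~ round(1000 * 0.3) == 300
```

If the overall multiplier happens to be a power of two (as asked above), the whole requantization reduces to a single shift, which is why power-of-two scales are attractive on an FPGA.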