How to set quantization aware training scaling factors?

When I use quantization-aware training, the weight tensor scaling factors are standard floating-point numbers.
I want to deploy my model as 8-bit on an FPGA, so the weight tensor scaling factor must be a power-of-two value (an integer exponent). Is there such an option? What should I do?
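To show what I mean, here is a minimal sketch of forcing a scale to the nearest power of two so the hardware can use a pure bit shift (the function name is my own, not from any library):

```python
import math

def to_power_of_two(scale: float) -> float:
    """Round a positive float scale to the nearest power-of-two value,
    so dequantization becomes a bit shift on the FPGA."""
    exponent = round(math.log2(scale))
    return 2.0 ** exponent

# e.g. a learned scale of 0.0123 would be snapped to 2**-6 = 0.015625
snapped = to_power_of_two(0.0123)
```

Something like this applied to the learned scaling factors during or after training is what I am looking for.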

It seems that the quantization scheme is a little different. You can see that from this

Depending on the fixed-point arithmetic you use, you can convert the float multiplier into a quantized_multiplier (integer) and a right shift (integer). Please check out
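As a rough sketch of that decomposition (in the style gemmlowp uses; the function name and details here are my own, so treat it as illustrative, not a reference implementation):

```python
def quantize_multiplier(real_multiplier: float):
    """Decompose a real multiplier in (0, 1) into a 32-bit integer
    multiplier q and a right-shift amount, such that
        x * real_multiplier  ≈  (x * q) >> (31 + shift)
    using only integer arithmetic at inference time."""
    assert 0.0 < real_multiplier < 1.0
    shift = 0
    # Normalize into [0.5, 1.0) to keep maximum precision in q.
    while real_multiplier < 0.5:
        real_multiplier *= 2.0
        shift += 1
    q = int(round(real_multiplier * (1 << 31)))
    # Rounding can push q to exactly 2**31; renormalize if so.
    if q == (1 << 31):
        q //= 2
        shift -= 1
    return q, shift

q, shift = quantize_multiplier(0.0123)
# At runtime: result = (accumulator * q) >> (31 + shift)
```

The shift is essentially the power-of-two part, and q carries the remaining precision; if your FPGA only supports shifts, you would drop q and accept the accuracy loss.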

I am facing the same issue. Did you find a way to do that?