I want to use a round-ceil (ceiling) rounding strategy in quantization. Which options should I set?
We don’t have anything that does ceiling rounding by default. You could probably just add 0.5 to all the weights and then do normal quantization, or something along those lines?
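Since there is no built-in option, a workaround is to quantize manually with an explicit ceiling. Below is a minimal pure-Python sketch of affine int8 quantization that uses `math.ceil` instead of round-to-nearest; `quantize_ceil` and its parameters are hypothetical names for illustration, not a PyTorch API:

```python
import math

def quantize_ceil(x, scale, zero_point, qmin=-128, qmax=127):
    """Affine int8 quantization, but with ceiling rounding instead of
    round-to-nearest-even. Hypothetical helper, not a PyTorch API."""
    q = math.ceil(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # saturate to the int8 range

vals = [0.26, 1.0, -0.74]
print([quantize_ceil(v, scale=0.25, zero_point=0) for v in vals])
# -> [2, 4, -2]
```

Note that the "add 0.5 then round" trick only approximates this: it disagrees with a true ceiling when `x / scale` lands exactly on an integer.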
This is not configurable right now; the current rounding strategy (used in the quantize ops etc.) is the same as `torch.round` (see the PyTorch 2.0 documentation).
Thanks. I want to verify whether the results of my own Winograd convolution implementation match those of PyTorch's quantized convolution, and I found that I wasn't reproducing the rounding strategy correctly.
What is the specific source code for the rounding used by the FBGEMM backend for int8 operations? I will try to reimplement it in Python.
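While digging through FBGEMM, it may help to have a reference to compare against. Below is a simplified float-math sketch of int8 requantization (scale the int32 accumulator, round half to even, add the output zero point, saturate); the function and parameter names are illustrative, and the real FBGEMM kernels use fixed-point arithmetic rather than this float path:

```python
def requantize(acc_i32, act_scale, w_scale, out_scale, out_zero_point,
               qmin=-128, qmax=127):
    """Simplified sketch of int8 requantization. Hypothetical helper,
    not the actual FBGEMM API."""
    multiplier = act_scale * w_scale / out_scale
    # Python's round() rounds half to even, matching torch.round's
    # tie-breaking convention.
    q = round(acc_i32 * multiplier) + out_zero_point
    return max(qmin, min(qmax, q))  # saturate to the int8 range

print(requantize(1000, act_scale=0.02, w_scale=0.01,
                 out_scale=0.1, out_zero_point=0))
```

Comparing this sketch's output against PyTorch's quantized conv on a few accumulators should quickly show whether a rounding mismatch is the source of the Winograd discrepancy.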
I think it’s calling this https://github.com/pytorch/FBGEMM/blob/main/src/FbgemmConv.cc#L119, but you’ll need to dig through the code to see what it is actually doing