Casting to int of data min and max in HistogramObserver

Hello,
while taking a look at the HistogramObserver, I noticed that the min and max values of the incoming data are cast to int when the histogram is created or interpolated (see this line). I fail to see why that is necessary, or even a good idea.
I tried out just the histogram interpolation algorithm implemented in the observer, and for low-standard-deviation data it gives a pretty bad approximation of the combined histogram; if the casting to int is removed, the approximation is far better.
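
To make the effect concrete, here is a minimal sketch (not the observer's actual code) of how casting the data min/max to int can distort the histogram for low-standard-deviation data. The data, bin count, and use of `torch.histc` are illustrative assumptions, not taken from the observer itself:

```python
import torch

# Low-standard-deviation data centered at 2.0 (illustrative, made up for this sketch).
torch.manual_seed(0)
x = torch.randn(10_000) * 0.05 + 2.0
bins = 16

x_min, x_max = x.min().item(), x.max().item()

# Histogram over the true floating-point range of the data.
hist_float = torch.histc(x, bins=bins, min=x_min, max=x_max)

# Histogram over the int-cast range: int(~1.8) == 1 and int(~2.2) == 2, so the
# bins cover [1, 2], most of them stay empty, and samples above 2 are ignored.
hist_int = torch.histc(x, bins=bins, min=int(x_min), max=int(x_max))

print("fraction of samples kept with float range:", hist_float.sum().item() / x.numel())
print("fraction of samples kept with int range:  ", hist_int.sum().item() / x.numel())
```

With the int-cast range, roughly half the samples fall outside the histogram bounds and the remaining mass is squeezed into a couple of bins, which is the kind of poor approximation described above.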

Can anyone explain to me why the casting occurs?

Hi @MDzz, this sounds like a potential bug. I filed quantization: unexpected casting of tensor min and max to int in histogram observer · Issue #83672 · pytorch/pytorch · GitHub to track the investigation and resolution. Thanks for reporting this.

Hi Vasiliy, thank you for looking into this. Happy to help :)