It seems to me that the `HistogramObserver` implementation has a bug. At line 1181, the forward pass makes the following call:
```python
combined_histogram = self._combine_histograms(
    combined_histogram,
    self.histogram,
    self.upsample_rate,
    downsample_rate,
    start_idx,
    self.bins,
)
```
`_combine_histograms` is defined as:

```python
def _combine_histograms(
    self,
    orig_hist: torch.Tensor,
    new_hist: torch.Tensor,
    upsample_rate: int,
    downsample_rate: int,
    start_idx: int,
    Nbins: int,
) -> torch.Tensor:
```
So `combined_histogram` gets mapped to `orig_hist`, and `self.histogram` gets mapped to `new_hist`. Given the variable names, it seems to me like it should be the other way around.
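To make the mapping I'm describing concrete, here is a toy stand-in with the same parameter names (not the real implementation, just positional-argument binding), showing which argument lands on which name:

```python
# Hypothetical stub mirroring _combine_histograms' signature,
# used only to show how the positional arguments bind to parameter names.
def _combine_histograms(orig_hist, new_hist, upsample_rate,
                        downsample_rate, start_idx, Nbins):
    return {"orig_hist": orig_hist, "new_hist": new_hist}

# Mirrors the call in forward(): first positional -> orig_hist,
# second positional -> new_hist.
mapping = _combine_histograms("combined_histogram", "self.histogram",
                              1, 1, 0, 256)
print(mapping)
# {'orig_hist': 'combined_histogram', 'new_hist': 'self.histogram'}
```

That is, the freshly combined histogram is treated as the "original" and the stored `self.histogram` as the "new" one.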
Can someone clarify?