I have two functions for calculating image entropy. The first (ImageEntropy_gpu) uses torch.histc and runs on the GPU (NVIDIA RTX 2070); the second (ImageEntropy_cpu) uses NumPy and runs on the CPU (Intel i7-7800X).
import torch

def ImageEntropy_gpu(img):
    # normalized 256-bin histogram of the flattened image
    sz = img.view(-1).size()[0]
    hist_probability = torch.histc(img.view(-1), bins=256) / sz
    # drop empty bins so log2 never sees zero
    nonzero_probability = hist_probability[hist_probability > 0]
    entropy = -torch.sum(nonzero_probability * torch.log2(nonzero_probability)).item()
    return round(entropy, 4)
import numpy as np

def ImageEntropy_cpu(img):
    # normalized 256-bin histogram of the flattened image
    marg = np.histogramdd(np.ravel(img), bins=256)[0] / img.size
    # drop empty bins so log2 never sees zero
    marg = marg[marg > 0]
    entropy = -np.sum(marg * np.log2(marg))
    return round(entropy, 4)
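As a side note, np.histogramdd is general N-dimensional machinery; for a flat 1-D array, np.histogram computes the same counts with less overhead. A possible simplified CPU variant (the name entropy_histogram is mine, not from the original code) might look like this:

```python
import numpy as np

def entropy_histogram(img):
    # Same result as ImageEntropy_cpu, but np.histogram avoids the
    # generic N-dimensional machinery of np.histogramdd.
    p = np.histogram(np.ravel(img), bins=256)[0] / img.size
    p = p[p > 0]  # boolean mask instead of filter/lambda
    return round(-np.sum(p * np.log2(p)), 4)
```

This should produce the same rounded entropy value, since both functions use identical default bin edges over (min, max) of the data.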
The CPU version (ImageEntropy_cpu) runs about 20 times FASTER than the GPU version (ImageEntropy_gpu)!
Can someone explain why this happens? Is something wrong with these two functions?
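One thing worth checking is the timing methodology itself. A 256-bin histogram is a tiny workload, so per-call GPU overheads (kernel launches, any host-to-device copy of the image, and the device synchronization that .item() forces) can easily dominate the actual computation; also, the very first CUDA call pays one-time context-initialization cost. A benchmark sketch along these lines (bench and entropy_cpu are names I made up; it assumes a random float32 image rather than your real data) separates warm-up from steady-state timing:

```python
import time

import numpy as np

try:
    import torch  # optional: only needed if you also time the GPU function
except ImportError:
    torch = None

def bench(fn, arg, iters=50):
    """Return mean seconds per call of fn(arg), after one warm-up call."""
    fn(arg)  # warm-up: on CUDA, the first call pays context-init cost
    if torch is not None and torch.cuda.is_available():
        torch.cuda.synchronize()  # drain pending GPU work before timing
    t0 = time.perf_counter()
    for _ in range(iters):
        fn(arg)
    if torch is not None and torch.cuda.is_available():
        torch.cuda.synchronize()  # ensure all GPU work has finished
    return (time.perf_counter() - t0) / iters

def entropy_cpu(img):
    # standalone copy of the CPU entropy computation for benchmarking
    p = np.histogram(img.ravel(), bins=256)[0] / img.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

img = np.random.rand(512, 512).astype(np.float32)
print(f"CPU: {bench(entropy_cpu, img) * 1e3:.3f} ms/call")
```

Even with correct timing, a CPU win on this problem size would not be surprising: the arithmetic is far too small to amortize the GPU's fixed per-call costs.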