Maximise (Shannon) Entropy of Network Output


For some reason, I know that the output of my network (i.e. the pixel values) should follow a uniform random distribution. So I would like to aid convergence by enforcing the output not only to have the correct values but also to be uniformly distributed. My idea is to use a loss that combines, say, an L2 term between output and target with a regulariser / prior that pushes the output distribution towards uniform.
I am wondering whether this is possible, and whether it is easy or hard, since the prior does not care about the actual values but only about their distribution. Any ideas or further reading?
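One way this could work, I think, is to build a differentiable (soft) histogram of the output and penalise its negative Shannon entropy, since the uniform distribution maximises entropy. Below is a minimal NumPy sketch of the idea; the function names (`soft_histogram`, `entropy_penalty`) and the `bins` / `sigma` parameters are my own choices, and in an actual training loop you would implement the same operations with your framework's differentiable ops so gradients flow through the penalty:

```python
import numpy as np

def soft_histogram(x, bins=16, sigma=0.05):
    """Gaussian soft-assignment of values in [0, 1] to bin centres,
    so the histogram (and hence the entropy) is differentiable."""
    centers = (np.arange(bins) + 0.5) / bins
    weights = np.exp(-0.5 * ((x.reshape(-1, 1) - centers) / sigma) ** 2)
    hist = weights.sum(axis=0)
    return hist / hist.sum()

def entropy_penalty(x, bins=16):
    """Negative Shannon entropy of the soft histogram.
    Minimising this pushes the distribution towards uniform,
    whose entropy is the maximum, log(bins)."""
    p = soft_histogram(x, bins)
    return (p * np.log(p + 1e-12)).sum()

# Sketch of the combined objective: loss = l2 + lam * entropy_penalty(output)
rng = np.random.default_rng(0)
uniform = rng.uniform(0.0, 1.0, 10_000)
peaked = np.clip(rng.normal(0.5, 0.05, 10_000), 0.0, 1.0)
print(entropy_penalty(uniform) < entropy_penalty(peaked))
```

A uniform sample incurs a lower penalty than a concentrated one, which is the behaviour you want from the regulariser; the weight `lam` would then trade off the L2 fit against the uniformity prior. An equivalent view is minimising the KL divergence between the soft histogram and the uniform distribution, which differs from the negative entropy only by a constant.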