Fool the classifier: a loss function driving predictions towards a uniform distribution

For a classification task, I need a loss function that is high when the predicted distribution over the classes is far from the uniform distribution, and low when the predicted distribution for a sample is close to uniform.

So the aim is to force the model to generate features such that an already trained classifier assigns each sample roughly equal probability for every class.

One viable approach I thought of is to use the KL divergence function torch.nn.functional.kl_div() against the uniform distribution.
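For reference, here is a minimal sketch of what I mean (batch size, number of classes, and variable names are just placeholders). Note that F.kl_div expects log-probabilities as its first argument and probabilities as the target, so the classifier's logits are passed through log_softmax first:

```python
import torch
import torch.nn.functional as F

# Hypothetical logits from the frozen, already trained classifier
# for a batch of 8 samples over 10 classes.
logits = torch.randn(8, 10, requires_grad=True)
num_classes = logits.size(1)

# F.kl_div(input, target) expects log-probabilities as input and
# probabilities as target.
log_probs = F.log_softmax(logits, dim=1)
uniform = torch.full_like(log_probs, 1.0 / num_classes)

# This computes KL(uniform || predicted): it is zero when the predicted
# distribution is uniform and grows as the prediction concentrates on
# any single class.
loss = F.kl_div(log_probs, uniform, reduction='batchmean')
loss.backward()
```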

Is there a more straightforward way to do this in PyTorch?