Best Way to Calculate/Approximate the Diversity of a Class-Probability Population for Novelty Search

I am performing Novelty Search, so I need to calculate the diversity of a population of vectors in a batch. In the literature, diversity is generally measured as the average distance to the k nearest neighbours within the population. What would be the best and fastest way to calculate that in PyTorch? Is there a more efficient way to approximate the diversity, preferably one without the k parameter?

In my case, the population consists of class probabilities, so I believe that KL or JS divergence may be preferable to other types of distance when calculating the diversity. I would be more than glad if you could suggest methods or implementations accordingly. It would also be great to have a measure of diversity that is normalized across the population of class probabilities.

I guess an internal step could be to first calculate a similarity matrix using one of the divergence methods; then, calculating the average of the minimum pairwise divergence within the population could be meaningful. Thank you very much in advance for your help.
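For concreteness, this is roughly the kNN-based diversity computation I have in mind (a minimal sketch; knn_diversity is just a name I made up, and the Euclidean cdist is only a placeholder metric):

    import torch

    def knn_diversity(pop: torch.Tensor, k: int = 3) -> torch.Tensor:
        # pop: (N, D) population, one vector per row
        dists = torch.cdist(pop, pop)                     # (N, N) pairwise Euclidean distances
        # take the k + 1 smallest per row; the smallest is the zero self-distance
        knn, _ = dists.topk(k + 1, dim=1, largest=False)
        # average distance to the k nearest neighbours, averaged over the population
        return knn[:, 1:].mean()

With something like this, the open questions for me are the choice of metric and the choice of k.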

Edit: Currently, I am calculating the mean pairwise KL divergence as follows (summing over the class dimension first, then averaging over all N² ordered pairs, including the zero diagonal); using topk to get the kNN distances would also be possible, but I am not sure what advantage that would have for the novelty search.

    kl = (weights[:, None, :] * (weights[:, None, :] / weights[None, :, :]).log()).sum(-1)  # (N, N), KL(p_i || p_j)
    diversity = kl.mean()
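For the actual JS divergence, which is symmetric and bounded by ln 2 (so dividing by math.log(2) gives a normalized score in [0, 1]), I imagine something like the following sketch. Here js_divergence_matrix is just a name I made up, weights is the (N, C) tensor from above, and eps guards against log(0):

    import math
    import torch

    def js_divergence_matrix(pop: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
        # pop: (N, C), each row a class-probability vector
        p = pop.clamp_min(eps)[:, None, :]    # (N, 1, C)
        q = pop.clamp_min(eps)[None, :, :]    # (1, N, C)
        m = 0.5 * (p + q)                     # pairwise mixtures, (N, N, C)
        kl_pm = (p * (p / m).log()).sum(-1)   # KL(p_i || m_ij)
        kl_qm = (q * (q / m).log()).sum(-1)   # KL(p_j || m_ij)
        return 0.5 * (kl_pm + kl_qm)          # (N, N), symmetric, zero diagonal

    js = js_divergence_matrix(weights) / math.log(2)   # normalize to [0, 1]
    n = js.shape[0]
    off_diag = js[~torch.eye(n, dtype=torch.bool)]     # drop self-comparisons
    mean_pairwise = off_diag.mean()
    # kNN variant: k + 1 smallest per row, then drop the self column
    k = 3
    knn_vals, _ = js.topk(k + 1, dim=1, largest=False)
    knn_diversity = knn_vals[:, 1:].mean()

This would give both the mean pairwise divergence and the kNN-based version from the same (N, N) matrix, so the two could be compared directly.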