KL-divergence between two multivariate Gaussians

Hi,

Yes, this is the correct approach.
Just be aware that the input a should contain log-probabilities and the target b should contain probabilities.

https://pytorch.org/docs/stable/nn.functional.html?highlight=kl_div#kl-div
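Here is a minimal sketch of that usage, with made-up tensors just to illustrate the log-probability / probability convention:

```python
import torch
import torch.nn.functional as F

# Hypothetical inputs: a holds log-probabilities, b holds probabilities.
a = F.log_softmax(torch.randn(4, 10), dim=-1)  # input: log-probabilities
b = F.softmax(torch.randn(4, 10), dim=-1)      # target: probabilities

# 'batchmean' divides by the batch size, matching the usual KL definition.
loss = F.kl_div(a, b, reduction='batchmean')
print(loss)
```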

By the way, PyTorch also provides this approach:


https://pytorch.org/docs/stable/distributions.html?highlight=kl_div#torch.distributions.kl.kl_divergence
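For two multivariate Gaussians specifically, kl_divergence gives you the closed-form result directly from the distribution parameters. A small sketch with two arbitrary 3-dimensional Gaussians:

```python
import torch
from torch.distributions import MultivariateNormal, kl_divergence

# Two example Gaussians (means and covariances chosen just for illustration).
p = MultivariateNormal(torch.zeros(3), covariance_matrix=torch.eye(3))
q = MultivariateNormal(torch.ones(3), covariance_matrix=2.0 * torch.eye(3))

# Analytic KL(p || q) computed from the parameters, no sampling needed.
print(kl_divergence(p, q))
```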

Good luck
Nik