Dear all,

I am trying to compute the (symmetric) KL divergence between two multivariate Gaussians.

My code is as follows:

```
import torch
from torch.distributions.multivariate_normal import MultivariateNormal

# Build two Gaussians with diagonal covariance matrices
p = MultivariateNormal(mu1, torch.diag_embed(std1))
q = MultivariateNormal(mu2, torch.diag_embed(std2))

# Symmetrized KL divergence
kld = (torch.distributions.kl.kl_divergence(p, q)
       + torch.distributions.kl.kl_divergence(q, p))
```

where mu1, mu2, std1, std2 are positive vectors of the same length.

A priori everything seems correctly programmed, but the output of the KLD is sometimes negative, even though the KL divergence should always be non-negative.
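For reference, here is a minimal self-contained sketch of what I am doing, with hypothetical example values for mu1, mu2, std1, std2. Note that `MultivariateNormal`'s second positional argument is the covariance matrix, so `torch.diag_embed(std1)` treats the entries of std1 as variances; if std1 holds standard deviations, an equivalent formulation passes them via `scale_tril` instead:

```python
import torch
from torch.distributions.multivariate_normal import MultivariateNormal
from torch.distributions.kl import kl_divergence

# Hypothetical example vectors: mu1/mu2 are means, std1/std2 are
# standard deviations (all positive, same length).
mu1 = torch.tensor([0.5, 1.0, 1.5], dtype=torch.float64)
mu2 = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64)
std1 = torch.tensor([0.2, 0.3, 0.4], dtype=torch.float64)
std2 = torch.tensor([0.5, 0.6, 0.7], dtype=torch.float64)

# scale_tril expects a lower-triangular factor of the covariance,
# so a diagonal matrix of standard deviations is valid here (the
# implied covariance is diag(std**2)).
p = MultivariateNormal(mu1, scale_tril=torch.diag_embed(std1))
q = MultivariateNormal(mu2, scale_tril=torch.diag_embed(std2))

# Symmetrized (Jeffreys) divergence; each KL term is analytically >= 0.
kld = kl_divergence(p, q) + kl_divergence(q, p)
print(kld.item())
```

With double precision and well-separated parameters like these, the result comes out clearly positive, which is why I suspect the negative values in my real run come from how I construct the distributions or from numerical precision.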

Can you please help me with this issue?

Environment details:

Python: 3.8

PyTorch: 1.6.0