KL divergence too big?

Hello, I don’t get this. Why does the following code output a huge value (~e^15)?

import numpy as np
import torch
from torch.distributions.kl import kl_divergence
from torch.distributions.multivariate_normal import MultivariateNormal

# mean vector and covariance matrix for each 2-D Gaussian
gaus1 = [np.array([0.1, 0.1]), np.array([[0.5, 0.5], [0.5, 0.5]])]
gaus3 = [np.array([0.1, 1.]), np.array([[0.5, 1.5], [0.5, 0.5]])]

# wrap them as torch MultivariateNormal distributions
gaus1_t = MultivariateNormal(torch.from_numpy(gaus1[0]), torch.from_numpy(gaus1[1]))
gaus3_t = MultivariateNormal(torch.from_numpy(gaus3[0]), torch.from_numpy(gaus3[1]))

kl_divergence(gaus1_t, gaus3_t)
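
In case it matters, here is a quick sanity check of the covariance matrices I’m passing in (a minimal sketch; is_valid_covariance is just a helper name I made up, assuming the covariances need to be symmetric and positive definite):

import numpy as np

def is_valid_covariance(cov, tol=1e-10):
    """Hypothetical helper: check symmetry and strict positive definiteness."""
    cov = np.asarray(cov, dtype=float)
    symmetric = np.allclose(cov, cov.T)
    # eigenvalues of the symmetrized matrix; all must be strictly positive
    eigvals = np.linalg.eigvalsh((cov + cov.T) / 2)
    return symmetric and np.all(eigvals > tol), eigvals

for name, cov in [("gaus1", [[0.5, 0.5], [0.5, 0.5]]),
                  ("gaus3", [[0.5, 1.5], [0.5, 0.5]])]:
    ok, eigvals = is_valid_covariance(cov)
    print(name, "valid covariance:", ok, "eigenvalues:", eigvals)

With these inputs, the gaus1 covariance comes out singular (one zero eigenvalue) and the gaus3 one isn’t even symmetric, so maybe that’s related?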