# Different behavior of Normal and MultivariateNormal for 1D

Use torch.distributions.Normal

```python
import torch

torch.manual_seed(10)

mu = torch.randn(1)
sigma = torch.rand(1)

z = torch.distributions.Normal(mu, sigma)

z.log_prob(torch.Tensor([1.]))
```

Output is:

```
tensor([-27.3894])
```

Use torch.distributions.MultivariateNormal

```python
import torch

torch.manual_seed(10)

mu = torch.randn(1)
sigma = torch.rand(1)

z = torch.distributions.MultivariateNormal(mu, covariance_matrix=torch.diagflat(sigma))

z.log_prob(torch.Tensor([1.]))
```

Output is:

```
tensor(-6.1411)
```

I expected to get the same result for both cases. What’s wrong?

I’m no expert in distributions, but according to the docs, `covariance_matrix` should be a positive-definite covariance matrix, which would not be the case for:

```python
torch.diagflat(sigma)
# tensor([[-1.0122]])
```

Also, shouldn’t the covariance matrix be the *variance* in this case, i.e. `sigma**2` rather than `sigma` (which would also make it positive definite)? `Normal` is parameterized by the standard deviation (`scale`), while `MultivariateNormal` takes the covariance, so passing the same `sigma` to both means you are describing two different distributions.
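If it helps, the same relationship holds beyond 1D: a diagonal `MultivariateNormal` with covariance `diag(sigma**2)` factorizes into independent `Normal(mu, sigma)` marginals, so the per-dimension log-probs sum to the joint log-prob. A quick sketch (seed, dimensionality, and the `+ 0.1` offset to keep scales strictly positive are arbitrary choices of mine):

```python
import torch

torch.manual_seed(0)
mu = torch.randn(3)
sigma = torch.rand(3) + 0.1  # keep standard deviations strictly positive

# Independent 1D Normals, parameterized by the standard deviation.
normal = torch.distributions.Normal(mu, sigma)
# Diagonal MultivariateNormal, parameterized by the covariance (variance on the diagonal).
mvn = torch.distributions.MultivariateNormal(
    mu, covariance_matrix=torch.diagflat(sigma ** 2)
)

x = torch.randn(3)
print(normal.log_prob(x).sum())  # sum of per-dimension log-probs
print(mvn.log_prob(x))           # single joint log-prob; should match
```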

This should yield the same result:

```python
z = torch.distributions.MultivariateNormal(mu, covariance_matrix=torch.diagflat(sigma**2))
```
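To double-check, here is a self-contained comparison reusing the seed from the question. Note that the shapes also differ by design: `Normal.log_prob` returns one value per dimension, while `MultivariateNormal.log_prob` returns a single joint log-prob, hence the `.sum()`:

```python
import torch

torch.manual_seed(10)

mu = torch.randn(1)
sigma = torch.rand(1)

# Normal takes the standard deviation; MultivariateNormal takes the covariance,
# which for a 1D Gaussian is the variance sigma**2.
normal = torch.distributions.Normal(mu, sigma)
mvn = torch.distributions.MultivariateNormal(
    mu, covariance_matrix=torch.diagflat(sigma ** 2)
)

x = torch.tensor([1.0])
print(normal.log_prob(x).sum())  # per-dimension log-prob, summed to a scalar
print(mvn.log_prob(x))           # joint log-prob; should match the line above
```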