# Why torch.distributions.multivariate_normal.MultivariateNormal requires positive-definite covariance_matrix rather than positive-semidefinite

When I use torch.cov to compute the covariance matrix of a batch of vectors (where the batch size may be smaller than the vector length) and then use those statistics to construct a MultivariateNormal distribution, it raises a ValueError indicating that covariance_matrix is not a positive-definite matrix.

```python
import torch
from torch.distributions.multivariate_normal import MultivariateNormal

data = torch.rand(2, 4)       # N=2 samples, each of dimension D=4
data_mean = data.mean(0)
data_cov = torch.cov(data.T)  # 4x4, but rank-deficient since N < D
m = MultivariateNormal(data_mean, data_cov)  # raises ValueError: not positive definite
```

When data is N×D with N < D, this is expected: the sample covariance matrix has rank at most N−1, so it is only positive-semidefinite. I found that NumPy's multivariate normal sampler only requires a positive-semidefinite covariance matrix, whereas PyTorch requires a positive-definite one.

To achieve my goal, I currently have to convert the covariance matrix to a NumPy array, do the random sampling in NumPy, and then convert the sampled results back into a PyTorch tensor.
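For reference, the round-trip workaround described above can be sketched like this. np.random.multivariate_normal tolerates a positive-semidefinite covariance (with its default check_valid='warn' it only warns rather than raising), so the singular matrix is accepted; the sample count of 10 is arbitrary:

```python
import numpy as np
import torch

data = torch.rand(2, 4)       # N=2 samples of dimension D=4, so N < D
data_mean = data.mean(0)
data_cov = torch.cov(data.T)  # rank-deficient: only positive-semidefinite

# NumPy accepts a PSD (even singular) covariance matrix for sampling.
samples_np = np.random.multivariate_normal(
    data_mean.numpy(), data_cov.numpy(), size=10
)

# Convert the samples back into a PyTorch tensor.
samples = torch.from_numpy(samples_np).float()
print(samples.shape)  # torch.Size([10, 4])
```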

Hi @geng_David ,

A Gaussian with zero width along some direction is ill-defined (you are effectively dividing by zero in the density), hence positive definite is required and not positive semi-definite.
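Concretely, the multivariate normal density is

```latex
f(x) = \frac{1}{\sqrt{(2\pi)^{D}\,\det\Sigma}}
       \exp\!\left(-\tfrac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right),
```

so a singular (merely positive-semidefinite) $\Sigma$ gives $\det\Sigma = 0$ and has no inverse, and the density does not exist on all of $\mathbb{R}^{D}$; the distribution is supported only on a lower-dimensional subspace.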

Thanks for your reply. I understand that the normal distribution is ill-defined when the covariance matrix is only positive semi-definite. Still, I think providing an option, as SciPy does, could be better for cases where I only need an approximate solution.

You can always add a small finite value along the diagonal to make it positive definite in the meantime. Alternatively, you can open an issue and request that the feature be added.
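The diagonal-jitter trick can be sketched as follows; the jitter magnitude 1e-6 is an arbitrary choice (large enough to dominate float32 round-off here, small enough not to distort the distribution much):

```python
import torch
from torch.distributions.multivariate_normal import MultivariateNormal

data = torch.rand(2, 4)
data_mean = data.mean(0)
data_cov = torch.cov(data.T)  # only positive-semidefinite (rank-deficient)

# Add a small jitter term along the diagonal to make the matrix positive definite.
eps = 1e-6
data_cov_pd = data_cov + eps * torch.eye(data_cov.shape[0])

m = MultivariateNormal(data_mean, covariance_matrix=data_cov_pd)
samples = m.sample((10,))
print(samples.shape)  # torch.Size([10, 4])
```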

I see. Thank you so much.