Difference between `torch.linalg.eigvals` and `scipy.linalg.eigvals`

Hello, I am currently sampling data from multivariate normal distributions with different kernels (Gaussian, Laplacian, Matérn, etc.)

The issue is that `torch.distributions.multivariate_normal.MultivariateNormal` raises a `ValueError` saying that my kernel matrix fails the `PositiveDefinite()` constraint.

I checked my kernel using `torch.linalg.eigvals`, and some of the eigenvalues are in fact negative (and some are complex, too).

The weird thing is that the eigenvalues I compute with `scipy.linalg.eigvals` are all real and positive.

I just read a page here about floating-point rounding, but which result should I believe?
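For reference, here is a minimal sketch of the kind of comparison I ran. The Gaussian (RBF) kernel and the sizes are just placeholders for my actual setup; note that NumPy builds the kernel in float64 by default, while the torch tensor below is explicitly float32:

```python
import numpy as np
import scipy.linalg
import torch

rng = np.random.default_rng(0)

# Toy Gaussian (RBF) kernel matrix -- a stand-in for my actual kernel.
x = rng.standard_normal((150, 1))
K = np.exp(-((x - x.T) ** 2))  # NumPy default dtype: float64

# scipy sees the float64 matrix ...
scipy_min = scipy.linalg.eigvals(K).real.min()

# ... while a default single-precision torch tensor of the same data
# can show negative (and complex) eigenvalues due to rounding.
K32 = torch.tensor(K, dtype=torch.float32)
torch_min = torch.linalg.eigvals(K32).real.min().item()

print("scipy (float64) min eigenvalue:", scipy_min)
print("torch (float32) min eigenvalue:", torch_min)
```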

Thank you for your time and help in advance :smiley:

Hi shc!

Scipy is most likely computing the eigenvalues in double precision, while
pytorch is most likely doing so in single precision. Try performing your
pytorch computations in double precision and see if that resolves (or at
least reduces) your issue.
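Something along these lines (the RBF kernel here is a made-up example, not your actual setup):

```python
import torch

torch.manual_seed(0)

# Made-up RBF (Gaussian) kernel on random 1-d inputs, standing in for
# whatever kernel you are actually using.
x = torch.randn(200, 1)                    # float32 by default
K32 = torch.exp(-torch.cdist(x, x) ** 2)   # single-precision kernel

# Same kernel, recomputed end-to-end in double precision.
x64 = x.double()
K64 = torch.exp(-torch.cdist(x64, x64) ** 2)

print("float32 min eigenvalue:", torch.linalg.eigvals(K32).real.min().item())
print("float64 min eigenvalue:", torch.linalg.eigvals(K64).real.min().item())
```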

If not, please post a simple, fully-self-contained, runnable script that
reproduces this discrepancy, together with the output you get when you
run it.

Best.

K. Frank


Thank you so much.
I didn’t know about precision.
I actually solved this issue with Tikhonov regularization: adding `np.finfo(np.float32).eps` to the diagonal entries :smiley:
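In case it helps anyone else, a minimal sketch of the fix (the kernel here is a toy RBF example, not my actual one):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian (RBF) kernel matrix -- the kind of covariance that can
# lose positive definiteness to floating-point rounding.
x = rng.standard_normal((100, 1))
K = np.exp(-((x - x.T) ** 2))

# Tikhonov regularization ("jitter"): nudge the diagonal up by machine
# epsilon so every eigenvalue clears zero.
eps = np.finfo(np.float32).eps
K_reg = K + eps * np.eye(K.shape[0])

print("min eigenvalue before:", np.linalg.eigvalsh(K).min())
print("min eigenvalue after: ", np.linalg.eigvalsh(K_reg).min())
```

Adding `eps * I` shifts every eigenvalue up by `eps`, which is tiny relative to the kernel's scale but enough to push the near-zero eigenvalues safely positive.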