Getting different eigenvalues from numpy.linalg.eigh() and torch.symeig()

I am trying to understand why I am getting different eigenvalues from numpy.linalg.eigh() and torch.symeig().

An example is as below:

Code:

import numpy as np
import torch

arr_symmetric = np.array([[1.,2,3], [2,5,6], [3,6,9]])
arr_symmetric, arr_symmetric.dtype

Output:

(array([[1., 2., 3.],
        [2., 5., 6.],
        [3., 6., 9.]]), dtype('float64'))

Code:

tsr_symmetric = torch.tensor(arr_symmetric)
tsr_symmetric

Output:

tensor([[1., 2., 3.],
        [2., 5., 6.],
        [3., 6., 9.]], dtype=torch.float64)

Code:

w, v = np.linalg.eigh(arr_symmetric)
w, v

Output:

(array([4.05517871e-16, 6.99264746e-01, 1.43007353e+01]),
 array([[-9.48683298e-01,  1.77819106e-01, -2.61496397e-01],
        [ 2.22044605e-16, -8.26924214e-01, -5.62313386e-01],
        [ 3.16227766e-01,  5.33457318e-01, -7.84489190e-01]]))

Code:

e, v = torch.symeig(tsr_symmetric, eigenvectors=True)
e, v

Output:

(tensor([-2.6047e-16,  6.9926e-01,  1.4301e+01], dtype=torch.float64),
 tensor([[ 9.4868e-01, -1.7782e-01,  2.6150e-01],
         [ 8.6389e-16,  8.2692e-01,  5.6231e-01],
         [-3.1623e-01, -5.3346e-01,  7.8449e-01]], dtype=torch.float64))

As you can see, one of the eigenvalues is different, i.e. 4.05517871e-16 vs. -2.6047e-16.

Why is this happening?

Hi Leo!

In short, this is due to (double-precision) floating-point round-off, and
is to be expected.

The rows of your matrix are linearly dependent – specifically,
tsr_symmetric[2] = 3 * tsr_symmetric[0] – so your matrix
has zero determinant and (at least) one zero eigenvalue.
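
It's easy to verify this numerically. Here is a minimal check (plain numpy, on the same matrix as above):

import numpy as np

arr_symmetric = np.array([[1., 2, 3], [2, 5, 6], [3, 6, 9]])

# Row 2 is exactly 3 * row 0, so the rows are linearly dependent.
print(np.allclose(arr_symmetric[2], 3 * arr_symmetric[0]))   # True

# A rank-deficient matrix is singular: rank < 3 and determinant ~0.
print(np.linalg.matrix_rank(arr_symmetric))   # 2
print(np.linalg.det(arr_symmetric))           # 0, up to round-off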

The two eigenvalues you quote are within double-precision round-off
of zero (and of one another). That is, within round-off error, they are
equal to one another (and to zero), and not actually different.
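
To make that concrete, you can compare the two quoted values against
double-precision machine epsilon, scaled by the overall size of your
matrix. A minimal sketch (the factor of 10 is an illustrative choice
of tolerance, not a standard one):

import numpy as np

arr_symmetric = np.array([[1., 2, 3], [2, 5, 6], [3, 6, 9]])

eps = np.finfo(np.float64).eps            # ~2.22e-16
scale = np.linalg.norm(arr_symmetric)     # overall magnitude, ~14.3

# Both "zero" eigenvalues are a tiny multiple of eps * scale, so their
# exact values, and in particular their signs, are round-off noise.
for lam in (4.05517871e-16, -2.6047e-16):
    print(abs(lam) <= 10 * eps * scale)   # True for both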

Best.

K. Frank

Hi K. Frank, many thanks for your reply and it makes sense.

But what concerns me is that the eigenvalue which differs is +ve in one result and -ve in the other. This affects an algorithm I am working on that groups the corresponding eigenvectors based on the +ve or -ve sign of the eigenvalues.

Hi Leo!

Yes, but they’re both actually (within round-off error of being) zero.

You have to think through what it means for your algorithm if an
eigenvalue is mathematically zero, even when the numerical computation
gives you a non-zero but small value.

If your matrices have entries of order one and there is no reason that
they should have especially small eigenvalues, then it might be as
simple as deeming any eigenvalue within some multiple of round-off
error of zero to be zero, and including it in neither your
+ve-eigenvalue nor your -ve-eigenvalue group.
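
Here is a minimal sketch of that idea; the helper name
group_eigenvalues and the choice of tolerance are illustrative, not
from any library:

import numpy as np

def group_eigenvalues(w, tol):
    # Split eigenvalue indices into -ve, (numerically) zero, and +ve groups.
    neg = np.where(w < -tol)[0]
    zero = np.where(np.abs(w) <= tol)[0]
    pos = np.where(w > tol)[0]
    return neg, zero, pos

arr_symmetric = np.array([[1., 2, 3], [2, 5, 6], [3, 6, 9]])
w, v = np.linalg.eigh(arr_symmetric)

tol = 10 * np.finfo(w.dtype).eps * np.abs(w).max()   # illustrative tolerance
neg, zero, pos = group_eigenvalues(w, tol)
print(neg, zero, pos)   # [] [0] [1 2]

With a threshold like this, both the numpy and the torch results put
the near-zero eigenvalue in the zero group, regardless of its sign.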

Welcome to the world of floating-point programming!

Best.

K. Frank

Yeah, I definitely understand and agree with your point about the world of floating-point programming!

The +ve/-ve sign discrepancy doesn’t seem to happen with numpy.linalg.eig() and torch.eig(); i.e., the eigenvalue signs are consistent across numpy.linalg.eigh(), numpy.linalg.eig() and torch.eig(). It would be great if torch.symeig() could be changed to match, since torch.symeig() is written specifically for symmetric matrices, mirroring numpy.linalg.eigh(). More importantly, it would be nice to have some consistency across the board.
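
For what it's worth, a minimal check of this (assuming a PyTorch
version recent enough to have torch.linalg.eigh, which is the
documented replacement for the deprecated torch.symeig() and mirrors
numpy.linalg.eigh):

import numpy as np
import torch

arr = np.array([[1., 2, 3], [2, 5, 6], [3, 6, 9]])

# Both routines return eigenvalues in ascending order.
w_np = np.linalg.eigh(arr)[0]
w_pt = torch.linalg.eigh(torch.tensor(arr)).eigenvalues

# The near-zero eigenvalue may still differ in sign between the two;
# both values are round-off noise, so neither sign is "more correct".
print(w_np)
print(w_pt)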
