Hi all,

Even though torch.symeig() seems to give accurate results most of the time, I ran a test case today and noticed that the returned eigenvalues and eigenvectors don't make sense. My PyTorch version is 1.8.0; see the following minimal reproducible example:

import torch

a = torch.tensor([[1, 3, 4], [2, 2, 4], [8, 7, 6]]).float()

d, q = torch.symeig(a, eigenvectors=True)

print("matrix is", a)
# element-wise ratio (A v) / v: every entry should equal the eigenvalue if v were an eigenvector
print("smallest eigenvalue looks like", torch.matmul(a, q[:, 0]) / q[:, 0])
print("matrix times smallest eigenvector", torch.matmul(a, q[:, 0]))
print("smallest eigenvalue times eigenvector", q[:, 0] * d[0])
print("smallest eigenvector looks like", q[:, 0])
print("smallest eigenvalue looks like", d[0])

The results look like this:

matrix is tensor([[1., 3., 4.],
        [2., 2., 4.],
        [8., 7., 6.]])
smallest eigenvalue looks like tensor([ -1.6292, 0.0677, -11.9950])
matrix times smallest eigenvector tensor([-1.3801, -0.0338, 2.1880])
smallest eigenvalue times eigenvector tensor([-1.3801, 0.8133, 0.2972])
smallest eigenvector looks like tensor([ 0.8471, -0.4992, -0.1824])
smallest eigenvalue looks like tensor(-1.6292)
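
The discrepancy is visible right away: "matrix times smallest eigenvector" and "smallest eigenvalue times eigenvector" should be the same vector, but they clearly are not. Working out the characteristic polynomial by hand, det(A - λI) = -(λ + 1)(λ² - 10λ - 36), so the eigenvalues of a should be -1 and 5 ± √61 ≈ -2.81 and 12.81, none of which is -1.6292. For comparison, here is a quick sketch of an independent cross-check using torch.eig, the general (non-symmetric) eigensolver that is still available in 1.8; if I read the docs correctly, it returns the eigenvalues as (real, imaginary) pairs, and for a real eigenvalue j the matching eigenvector should be column j of the eigenvector matrix:

import torch

a = torch.tensor([[1, 3, 4], [2, 2, 4], [8, 7, 6]]).float()

# torch.eig handles general (non-symmetric) matrices; eigenvalues are
# returned as an n x 2 tensor of (real, imaginary) pairs
e, v = torch.eig(a, eigenvectors=True)
print("general eigenvalues (real, imag):", e)

# sanity check on the first eigenpair, assuming its eigenvalue is real:
# A @ v and lambda * v should print the same vector
lam, vec = e[0, 0], v[:, 0]
print("A @ v:", torch.matmul(a, vec))
print("lambda * v:", lam * vec)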

One can easily calculate the eigenvalues and eigenvectors with online calculators such as (Eigenvalues and Eigenvectors), and the mismatch doesn't look like a numerical issue to me. Is there something I am missing here? Thanks!