Error when using torch.lobpcg

I am having trouble using torch.lobpcg with PyTorch 1.9.1.
If I run the following script:

import torch

torch.manual_seed(0)
a = torch.randn((100, 100), requires_grad=True)
a = a + a.T
torch.lobpcg(a, largest=False)

It raises the following error:

---------------------------------------------------------------------------

      1 a = torch.randn((100, 100), requires_grad=True)
      2 a = a + a.T
----> 3 torch.lobpcg(a, largest=False)
      4

~/Software/miniconda3/envs/python38/lib/python3.8/site-packages/torch/_lobpcg.py in lobpcg(A, k, B, X, n, iK, niter, tol, largest, method, tracker, ortho_iparams, ortho_fparams, ortho_bparams)
    518             B_sym = (B + B.transpose(-2, -1)) / 2 if (B is not None) else None
    519
--> 520             return LOBPCGAutogradFunction.apply(
    521                 A_sym, k, B_sym, X, n, iK, niter, tol, largest,
    522                 method, tracker, ortho_iparams, ortho_fparams, ortho_bparams

TypeError: save_for_backward can only save variables, but argument 4 is of type bool

It seems that I cannot use a bool for largest (the doc here: torch.lobpcg — PyTorch 1.9.1 documentation, which says largest is a bool, seems to be wrong).

After reading the source code of torch.lobpcg, I changed the code to the following:

e1, v1 = torch.lobpcg(a, largest=torch.tensor(0))

Then it works. But I am not sure whether this is correct, because the results are different from those of torch.eig:

import torch

torch.manual_seed(0)
a = torch.randn((100, 100), requires_grad=True)
a = a + a.T
e1, v1 = torch.lobpcg(a, largest=torch.tensor(0), k=10)
e2, v2 = torch.lobpcg(a, largest=torch.tensor(1), k=10)
e3, v3 = torch.eig(a)
print('largest:')
print('lobpcg: ', e2)
print('eig:    ', torch.sort(e3[:, 0], descending=True)[0][:10])
print('smallest:')
print('lobpcg: ', e1)
print('eig:    ', torch.sort(e3[:, 0])[0][:10])

The largest eigenvalues agree, but the smallest do not. The results are:

largest:
lobpcg:  tensor([26.9840, 25.8924, 25.0941, 24.4001, 23.1643, 21.8327, 20.7435, 20.5844,
        19.2607, 18.5788], grad_fn=<LOBPCGAutogradFunctionBackward>)
eig:     tensor([26.9839, 25.8924, 25.0942, 24.4000, 23.1643, 21.8327, 20.7435, 20.5845,
        19.2606, 18.5788], grad_fn=<SliceBackward>)
smallest:
lobpcg:  tensor([-20.1911, -19.0881, -18.0355, -15.9095, -14.1135, -18.3233, -17.1983,
        -15.4659, -17.6157, -15.5931],
       grad_fn=<LOBPCGAutogradFunctionBackward>)
eig:     tensor([-27.2953, -25.4611, -25.2526, -24.5712, -23.7227, -22.9677, -22.6331,
        -22.0488, -20.8266, -20.4495], grad_fn=<SliceBackward>)

Any advice?

The first is clearly a bug: it seems that the autograd test forgets to pass “largest” (pytorch/test_autograd.py at 99c7a9f09d2a89506c661defb61610ddff8859a1 · pytorch/pytorch · GitHub). Your workaround seems legitimate; with torch.tensor(True) you would get additional beauty points. It would be nice to file a bug for this.
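A minimal sketch of that workaround (same setup as above, just with the flag wrapped in a tensor until the bug is fixed):

import torch

torch.manual_seed(0)
a = torch.randn((100, 100), requires_grad=True)
a = a + a.T

# Wrapping the bool in a tensor lets save_for_backward accept it.
e, v = torch.lobpcg(a, k=10, largest=torch.tensor(True))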

Edit after the first :tea: : On the second observation: a matrix with negative eigenvalues is not spd (symmetric positive definite), but the algorithm is only meant for spd matrices, so the smallest eigenvalues it returns here cannot be trusted.
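If you do need the smallest eigenvalues of an indefinite symmetric matrix, a minimal sketch of one standard trick (not something torch.lobpcg does for you, and the Gershgorin-style bound below is my own choice) is to shift the spectrum until the matrix is spd and undo the shift afterwards:

import torch

torch.manual_seed(0)
a = torch.randn((100, 100))  # no requires_grad here, so a plain bool works for largest
a = a + a.T

# The largest absolute row sum bounds the spectral radius (Gershgorin),
# so adding it (plus a margin) to the diagonal makes the matrix spd.
shift = a.abs().sum(dim=1).max() + 1.0
a_spd = a + shift * torch.eye(a.shape[0])

# The smallest eigenvalues of a_spd are those of a, shifted up by `shift`.
e, v = torch.lobpcg(a_spd, k=10, largest=False)
print(e - shift)                                   # ~10 smallest eigenvalues of a
print(torch.sort(torch.eig(a)[0][:, 0])[0][:10])   # reference from torch.eig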

Best regards

Thomas