Improving symeig() function

Hi fellows,

I need to improve torch.symeig() in the following directions and would like some hints on where to start and how to approach them:

  1. I need to make it more stable, like torch.svd(). I don’t know why the svd function is more stable (maybe it is related to Issue #440).
  2. I need to implement a batch version of it on GPU. That means the input will be of size (B x D x D) where B is the batch size.
  3. I need to be able to compute a top-k decomposition, as scipy.linalg.eigh can. For example, I should be able to compute only the top (dominant) eigenvalue and eigenvector.
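For point 3, here is a minimal sketch of what I mean by a top-1 decomposition: power iteration on a symmetric matrix returns only the dominant eigenpair, without doing the full decomposition. (`dominant_eigpair` and `n_iter` are just names I made up for illustration, not an existing API.)

```python
import numpy as np

# Sketch: compute only the dominant (largest-magnitude) eigenpair of a
# symmetric matrix by power iteration, instead of a full decomposition.
# n_iter is an assumed fixed iteration budget for simplicity.
def dominant_eigpair(A, n_iter=200):
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])  # arbitrary start vector
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)      # renormalize each step
    lam = v @ A @ v                    # Rayleigh quotient estimate
    return lam, v

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues are 1 and 3
lam, v = dominant_eigpair(A)
assert abs(lam - 3.0) < 1e-6
```

A batched GPU version would ideally expose something like this with a `k` parameter, the way scipy.linalg.eigh lets you request a subset of eigenvalues.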

Any hint about where to start is appreciated.


Do you have examples of instability in symeig?

I don’t have a concrete example yet. I have a network implemented both ways, and the one using svd is much more stable. I will try to construct an example, though.

Also, do you have an example of custom autograd function? Is there any guide available for this?

Please let me know when you do find a good example of unstable symeig!
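One candidate source of instability I am aware of (hedged, not necessarily what the network above is hitting): the symeig backward involves 1/(λ_j − λ_i) factors, so gradients through the eigenvectors blow up when eigenvalues are (nearly) repeated. A sketch using torch.linalg.eigh, the modern replacement for torch.symeig:

```python
import torch

# Assumed instability example: degenerate eigenvalues make the
# eigenvector gradient of a symmetric eigendecomposition non-finite,
# because the backward formula contains 1/(lambda_j - lambda_i) terms.
x = torch.eye(2, requires_grad=True)  # both eigenvalues equal 1
w, V = torch.linalg.eigh(x)
V.sum().backward()                    # gradient flows through eigenvectors
print(x.grad)                         # contains non-finite (inf/nan) entries
```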

Here’s an example of a custom autograd function: http://pytorch.org/docs/master/autograd.html#torch.autograd.Function

And here’s a guide for it:
http://pytorch.org/docs/master/notes/extending.html
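To make the pattern from those docs concrete, here is a minimal custom autograd Function (wrapping exp purely as an illustration; the class name is made up):

```python
import torch

# Minimal custom autograd Function, following the extending-autograd
# pattern: a static forward that saves what backward needs via ctx,
# and a static backward returning grad w.r.t. each forward input.
class MyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        y = x.exp()
        ctx.save_for_backward(y)   # save the output for reuse in backward
        return y

    @staticmethod
    def backward(ctx, grad_output):
        (y,) = ctx.saved_tensors
        return grad_output * y     # d/dx exp(x) = exp(x)

x = torch.randn(4, dtype=torch.double, requires_grad=True)
y = MyExp.apply(x)                 # note: call .apply, not the class itself
y.sum().backward()
assert torch.allclose(x.grad, x.exp())
```

A custom symeig backward would follow the same shape: save the eigenvalues and eigenvectors in forward, then assemble the gradient in backward.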

@richard Thank you for the answers.
Regarding the batch version, is there a simple way to efficiently run a batch of symeig() on a single GPU? My matrices are relatively small.

I don’t think so, no. Please feel free to open a feature request for this on the GitHub repo.
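In the meantime, one CPU-side workaround (not the GPU solution you asked for): numpy.linalg.eigh broadcasts over leading dimensions, so a (B x D x D) stack of symmetric matrices can be decomposed in a single call. A sketch:

```python
import numpy as np

# numpy.linalg.eigh broadcasts over leading batch dimensions:
# input (B, D, D) -> eigenvalues (B, D), eigenvectors (B, D, D).
B, D = 8, 3
rng = np.random.default_rng(0)
M = rng.standard_normal((B, D, D))
S = M + M.transpose(0, 2, 1)          # symmetrize every matrix in the batch
w, V = np.linalg.eigh(S)
# sanity check: reconstruct each matrix as V @ diag(w) @ V^T
recon = (V * w[:, None, :]) @ V.transpose(0, 2, 1)
assert np.allclose(recon, S)
```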


@richard I filed it as Issue #5354. Please edit it if anything is inaccurate.