I need to improve torch.symeig() in the following directions and would like hints on where to start and how to approach them:
I need to make it more stable, like torch.svd(). I don't know why the svd function is more stable (it may be related to Issue #440).
I need to implement a batch version of it on GPU. That means the input will be of size (B x D x D) where B is the batch size.
I need to be able to implement top-k decomposition like scipy.linalg.eigh. For example, I should be able to compute only the top (dominant) eigenvalue and eigenvector.
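For reference, this is the kind of top-k behavior I mean. A minimal sketch using scipy.linalg.eigh's `subset_by_index` parameter (available in SciPy >= 1.5), which selects an inclusive range of eigenvalue indices in ascending order:

```python
import numpy as np
from scipy.linalg import eigh

# Build a random symmetric test matrix.
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 5))
a = a + a.T

n = a.shape[0]
# subset_by_index=[n - 1, n - 1] requests only the largest eigenpair.
w, v = eigh(a, subset_by_index=[n - 1, n - 1])
print(w.shape, v.shape)  # (1,) (5, 1)
```

Older SciPy versions expose the same selection through the (now deprecated) `eigvals` keyword.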
I don’t have a concrete example yet. I have a network implemented with each of the two functions, and the one using svd is much more stable. I will try to construct an example, though.
Also, do you have an example of a custom autograd function? Is there a guide available for this?
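(For anyone else reading later: the general pattern is a subclass of torch.autograd.Function with static `forward` and `backward` methods; the "Extending PyTorch" section of the docs covers it. A minimal sketch with a toy function, not the symeig backward itself:)

```python
import torch


class Square(torch.autograd.Function):
    """Toy custom autograd Function: forward computes x**2,
    backward applies the chain rule d(x^2)/dx = 2x."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash inputs needed by backward
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * 2 * x


x = torch.randn(3, dtype=torch.float64, requires_grad=True)
y = Square.apply(x).sum()
y.backward()
print(torch.allclose(x.grad, 2 * x.detach()))  # True
```

For double-precision inputs, torch.autograd.gradcheck is handy for verifying that a custom backward matches numerical gradients.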
@richard Thank you for the answers.
Regarding the batch version, is there a simple way to efficiently run a batch of symeig() calls on a single GPU? My matrices are relatively small.
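In case it helps, here is a sketch of what I have in mind for the (B x D x D) case. In recent PyTorch versions torch.linalg.eigh accepts batched input directly (and runs on the GPU if the tensor lives there); on older versions the only simple option I know of is a Python loop over torch.symeig, which is what the fallback below does:

```python
import torch


def batched_symeig(x):
    """Eigendecomposition of a batch of symmetric matrices (B x D x D).

    Uses torch.linalg.eigh when available (handles batches natively);
    otherwise falls back to looping torch.symeig over the batch.
    """
    if hasattr(torch, "linalg") and hasattr(torch.linalg, "eigh"):
        return torch.linalg.eigh(x)
    results = [torch.symeig(m, eigenvectors=True) for m in x]
    ws, vs = zip(*results)
    return torch.stack(ws), torch.stack(vs)


b = torch.randn(4, 3, 3, dtype=torch.float64)
b = b + b.transpose(-1, -2)  # symmetrize each matrix in the batch
w, v = batched_symeig(b)
print(w.shape, v.shape)  # torch.Size([4, 3]) torch.Size([4, 3, 3])
```

The loop fallback won't saturate a GPU for small matrices, of course, which is why a truly batched kernel would be nice.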