I am looking to find only the principal eigenvector of each matrix in a batch of real, symmetric positive-definite matrices, stored as a 20x2000x2000 tensor:
K = torch.randn(20, 2000, 2000)
K_sym = (K + K.transpose(1, 2)) / 2
eig, ev = torch.symeig(K_sym, eigenvectors=True)
# symeig sorts eigenvalues in ascending order and returns eigenvectors
# as columns, so the top eigenvector is the last column
ev = ev[:, :, -1]
ev.shape
> torch.Size([20, 2000])
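As an aside, on newer PyTorch versions torch.symeig is deprecated in favor of torch.linalg.eigh, which likewise returns eigenvalues in ascending order with eigenvectors as columns. A minimal sketch on smaller matrices (sizes shrunk for illustration):

```python
import torch

torch.manual_seed(0)
K = torch.randn(4, 50, 50)            # small batch for illustration
K_sym = (K + K.transpose(1, 2)) / 2

# torch.linalg.eigh returns eigenvalues in ascending order and
# eigenvectors as the columns of the second output
eigvals, eigvecs = torch.linalg.eigh(K_sym)
top = eigvecs[:, :, -1]               # principal eigenvector, shape (4, 50)
```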
This implementation is very slow, even on the GPU. Can anyone suggest a faster way of finding only the principal eigenvector?
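One route worth trying (assuming a reasonably recent PyTorch) is torch.lobpcg, which computes only the k largest eigenpairs of a batch of symmetric matrices instead of the full decomposition. A sketch on smaller matrices; the identity shift is my own addition to make the random demo matrices positive definite, and it does not change the eigenvectors:

```python
import torch

torch.manual_seed(0)
K = torch.randn(4, 200, 200)          # smaller sizes for illustration
K_sym = (K + K.transpose(1, 2)) / 2
# shift the spectrum so the batch is positive definite, as in the question;
# adding a multiple of the identity leaves the eigenvectors unchanged
K_pd = K_sym + 250 * torch.eye(200)

# torch.lobpcg returns only the k largest eigenpairs:
# eigvals has shape (4, 1), eigvecs has shape (4, 200, 1)
eigvals, eigvecs = torch.lobpcg(K_pd, k=1)
```

Whether this is actually faster at 2000x2000 is worth benchmarking; LOBPCG tends to win when k is much smaller than the matrix size.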
For example, power iteration may be a solution. I have based the following on the implementation found in torch.nn.utils.spectral_norm:
def TopEigenvector(self, K, n_power_iterations=400, dim=1, eps=1e-12):
    # normalize here is torch.nn.functional.normalize
    v = torch.ones(K.shape[0], K.shape[1], 1).to(self.device)
    with torch.no_grad():
        for _ in range(n_power_iterations):
            v = normalize(torch.bmm(K, v), dim=dim, eps=eps, out=v)
        if n_power_iterations > 0:
            v = v.clone(memory_format=torch.contiguous_format)
    return v
However, this relies on the dreaded for-loop, and because of the no_grad block it produces no gradient with respect to K. I have tested backprop using this function and, so far, autograd does not complain, but no gradient actually flows back. Is there, however, a way that is gradient-friendly?
EDIT:
The function is differentiable using the following:
def TopEigenvector(self, K, n_power_iterations=400, dim=1):
    v = torch.ones(K.shape[0], K.shape[1], 1).to(self.device)
    for _ in range(n_power_iterations):
        m = torch.bmm(K, v)
        n = torch.norm(m, dim=1).unsqueeze(1)
        v = m / n
    return v
Slow, but way faster than the alternative.
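If backpropagating through all 400 iterations turns out to be too expensive, another option (not from the question; it mirrors what spectral_norm itself does) is to run all but the last step under no_grad and differentiate only the final multiply, giving an approximate gradient. A sketch, written as a free function rather than a method:

```python
import torch

def top_eigenvector(K, n_power_iterations=400, eps=1e-12):
    """Power iteration for a batch of symmetric matrices K of shape (b, n, n).

    All but the last step run without autograd; only the final K @ v is
    tracked, so the gradient with respect to K is approximate.
    """
    v = torch.ones(K.shape[0], K.shape[1], 1, dtype=K.dtype, device=K.device)
    with torch.no_grad():
        for _ in range(n_power_iterations - 1):
            m = torch.bmm(K, v)
            v = m / (torch.norm(m, dim=1, keepdim=True) + eps)
    m = torch.bmm(K, v)                     # this step is tracked by autograd
    return m / (torch.norm(m, dim=1, keepdim=True) + eps)
```

This keeps the memory cost of the backward pass at a single matrix multiply instead of 400 of them.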