Logdet of a big matrix: Autograd is too slow

Hi,

in this case autograd is too slow: the backward below takes about 0.5 s on my Tesla V100-SXM2 GPU. Any suggestions to accelerate it?

import torch
import time

device = "cuda" if torch.cuda.is_available() else "cpu"
w = torch.ones(8000, 8000, requires_grad=True, device=device)
loss = torch.logdet(w)
time_start = time.time()
loss.backward()
print(time.time() - time_start)

Thanks!

If you have algebraic structure, such as an s.p.d. matrix, it might be worth doing the Cholesky decomposition and taking (twice) the sum of the logs of the diagonal. This can be more efficient than the somewhat elaborate logdet backward.
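A minimal sketch of what I mean, assuming the matrix is s.p.d. (the random construction of a here is just for illustration and is not from the original post):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Build a well-conditioned s.p.d. matrix for illustration.
b = torch.randn(8000, 8000, device=device)
a = (b @ b.t() + 8000 * torch.eye(8000, device=device)).requires_grad_()

# For a = L @ L.t(), logdet(a) = 2 * sum(log(diag(L)))
l = torch.cholesky(a)
loss = 2 * l.diagonal().log().sum()
loss.backward()

Whether this beats torch.logdet on your setup depends on the relative cost of the Cholesky backward versus the logdet backward.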

Best regards

Thomas

Right, that also works when the matrix is triangular. Unfortunately, this is not my case :expressionless:

How does backward work exactly? Is logdet executed several times?

Looking at the source, backward does not run logdet again: mathematically, the gradient of logdet(A) is the transposed inverse A^{-T}, so the cost is dominated by a matrix inverse (with, if I read it correctly, a separate path for singular inputs).

The original PR introducing logdet has some discussion around the reasoning, but I must admit I don’t fully follow it (and maybe things have changed since).
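As a small sanity check (my own illustration, not from the thread), the gradient autograd produces should match the transposed inverse for a non-singular input:

import torch

a = torch.randn(5, 5, dtype=torch.float64)
a = (a @ a.t() + 5 * torch.eye(5, dtype=torch.float64)).requires_grad_()

torch.logdet(a).backward()
print(torch.allclose(a.grad, a.inverse().t()))  # expected: True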

Note that your benchmarking is wrong for CUDA: you need to call torch.cuda.synchronize() before taking the times (both start and end).
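For example, a corrected version of the timing from the first post could look like this (same input, only the synchronization added):

import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
w = torch.ones(8000, 8000, requires_grad=True, device=device)
loss = torch.logdet(w)

if device == "cuda":
    torch.cuda.synchronize()  # make sure the forward has actually finished
time_start = time.time()
loss.backward()
if device == "cuda":
    torch.cuda.synchronize()  # wait for the backward kernels to finish
print(time.time() - time_start)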


I see! Thank you, Thomas! Can I, for example, accelerate torch.cholesky or torch.logdet by using many GPUs/CPUs simultaneously?