Difference in implementation between torch.svd and torch.linalg.svd

Hello, can I ask about the difference between the implementations of torch.svd and torch.linalg.svd?
I have a laptop with 8 GB of RAM.

If I do:

X=torch.rand([600, 154875, 3])
X=X.reshape(-1,1)
U,S,V=torch.svd(X)

all the commands compute fine.

Instead if I try

X=torch.rand([600, 154875, 3])
X=X.reshape(-1,1)
U,S,V=torch.linalg.svd(X)

PyTorch tries to allocate too much RAM and the command fails.

Hi Guglielmo!

From the documentation for the two versions of svd(), torch.svd() and
torch.linalg.svd(), torch.svd() computes, by default, the “reduced” SVD:

If some is True (default), the method returns the reduced singular value decomposition.

while torch.linalg.svd() computes, by default, the full SVD:

The parameter full_matrices chooses between the full (default) and reduced SVD.

Given that your X is very large and exceedingly non-square, torch.svd()'s
default reduced SVD could well require much less memory. You might try
U, S, V = torch.linalg.svd(X, full_matrices=False) and see if
that works for you.
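To see why the default full SVD blows up for a tall-and-thin matrix, here is a small sketch (a 1000 x 1 matrix stands in for the roughly 278,775,000 x 1 matrix in your example; with the full SVD, U alone would have 278,775,000² entries):

```python
# Compare full vs. reduced SVD output shapes on a small tall-and-thin matrix.
import torch

X = torch.rand(1000, 1)

# Full SVD: U is n x n, so its size grows quadratically with the row count.
U_full, S_full, Vh_full = torch.linalg.svd(X, full_matrices=True)
print(U_full.shape)  # torch.Size([1000, 1000])

# Reduced SVD: U is n x k with k = min(n, m) -- here only n x 1.
U_red, S_red, Vh_red = torch.linalg.svd(X, full_matrices=False)
print(U_red.shape)  # torch.Size([1000, 1])
```

The singular values and the first min(n, m) columns of U are the same in both cases; the full version just pads U out to a complete orthonormal basis.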

As an aside, torch.svd() is deprecated in favor of torch.linalg.svd(),
so if you can get torch.linalg.svd() working, you should probably use it.
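One migration detail worth knowing: per the documentation, torch.svd() returns V, while torch.linalg.svd() returns Vh (the conjugate transpose of V). A minimal migration sketch:

```python
# Migrating from the deprecated torch.svd to torch.linalg.svd.
import torch

X = torch.rand(8, 3)

# Old API: reduced SVD by default, returns V.
U_old, S_old, V_old = torch.svd(X)

# New API: pass full_matrices=False for the reduced SVD; returns Vh, not V.
U_new, S_new, Vh_new = torch.linalg.svd(X, full_matrices=False)

# Recover V to match the old convention (real input here, so .T is enough;
# for complex inputs you would use .mH instead).
V_new = Vh_new.T
```

Note that individual singular vectors are only determined up to sign, so U_old and U_new need not match column-for-column, but the singular values and the reconstruction U @ diag(S) @ Vh agree.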

Best.

K. Frank

Thank you, I should have read the documentation better. Using

U, S, V = torch.linalg.svd(X, full_matrices=False)

indeed gets the work done.