# PyTorch torch.linalg.svd returning U and V^T which are not orthogonal

Using `U, S, VT = torch.linalg.svd(M)`, where the matrix `M` is large, I am getting matrices `U` and `VT` that are not orthogonal. When I compute `torch.norm(torch.mm(matrix, matrix.t()) - identity_matrix)` it is ~0.004, and when I print M·M^T the diagonal entries are not 1 but around 0.2 or 0.4, and the off-diagonal entries are not 0 but around 0.0023. Is there a way to get an SVD with orthogonal U and V^T?

However, the singular values, i.e. the diagonal elements of S, are close to 1.

```
import torch

matrix = torch.randn(4096, 4096)
u, s, vh = torch.linalg.svd(matrix)
matrix = torch.matmul(u, vh)  # product of the two orthogonal factors
print('norm ||M M^T - I||: ', torch.norm(torch.mm(matrix, matrix.t()) - torch.eye(matrix.shape[0])))
print(matrix)
```

Changing the precision to float64 helped bring ||M·M^T - I|| down to the order of 1e-13. I am also using the "gesvd" driver, which uses a QR-based algorithm and is more stable:

```
# 'gesvd' selects the QR-based cuSOLVER routine (the driver argument applies to CUDA inputs)
u, s, vh = torch.linalg.svd(matrix, driver='gesvd')
```
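Roughly, the float64 check looks like this (a minimal sketch, assuming a fresh random matrix, so the exact numbers will vary):

```
import torch

# sketch of the float64 orthogonality check; ~1e-13 is what I observe,
# the exact value depends on the matrix
matrix = torch.randn(4096, 4096, dtype=torch.float64)
u, s, vh = torch.linalg.svd(matrix)

eye = torch.eye(matrix.shape[0], dtype=torch.float64)
print('||U U^T - I||:  ', torch.norm(u @ u.t() - eye))
print('||Vh Vh^T - I||:', torch.norm(vh @ vh.t() - eye))
```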

However, I am not sure whether this will always work, or whether an ill-conditioned matrix would break it. Is there a way to get a stable SVD in PyTorch?
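For example, something along these lines could probe the ill-conditioned case (a sketch, not what I actually ran; the spectrum here is artificial):

```
import torch

# sketch: build a deliberately ill-conditioned matrix with a prescribed spectrum
# (singular values spanning ~12 orders of magnitude) and recheck orthogonality
n = 1024
q1, _ = torch.linalg.qr(torch.randn(n, n, dtype=torch.float64))
q2, _ = torch.linalg.qr(torch.randn(n, n, dtype=torch.float64))
spectrum = torch.logspace(0, -12, n, dtype=torch.float64)  # condition number ~1e12
matrix = q1 @ torch.diag(spectrum) @ q2.t()

u, s, vh = torch.linalg.svd(matrix)
eye = torch.eye(n, dtype=torch.float64)
print('||U U^T - I||:  ', torch.norm(u @ u.t() - eye))
print('||Vh Vh^T - I||:', torch.norm(vh @ vh.t() - eye))
```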

Hi Bhartendu!

You are testing the orthogonality of your original random matrix, `matrix`.
You should be testing the orthogonality of the result of the decomposition,
`u` and `vh`, instead.

Best.

K. Frank

@KFrank Thanks, that was a typo from copy-pasting the code. I checked the orthogonality of `matrix = u @ vh`, and of `u` and `vh` individually, but I am not getting them as orthogonal.

Hi Bhartendu!

It works for me:

```
>>> import torch
>>> print (torch.__version__)
1.13.0
>>>
>>> _ = torch.manual_seed (2023)
>>>
>>> matrix = torch.randn (4096, 4096)
>>> u, s, vh = torch.linalg.svd (matrix)
>>> uut = torch.matmul (u, u.t())
>>> print ('norm ||uut - I||: ', torch.norm (uut - torch.eye (uut.shape[0])))
norm ||uut - I||:  tensor(0.0003)
>>> vvt = torch.matmul (vh, vh.t())
>>> print ('norm ||vvt - I||: ', torch.norm (vvt - torch.eye (vvt.shape[0])))
norm ||vvt - I||:  tensor(0.0003)
```

Best.

K. Frank