Using U, S, VT = torch.linalg.svd(M) on a large matrix M, the returned U and VT are not exactly orthogonal. When I compute torch.norm(torch.mm(matrix, matrix.t()) - identity_matrix) it is ~0.004, and when I print M.M^T the diagonal entries are not 1 but ~0.2 or 0.4, and the off-diagonal entries are not 0 but ~0.0023. Is there a way to get an SVD with orthogonal U and V^T?
But the singular values (the diagonal elements of S) are close to 1.
import torch

# Project a random matrix onto the nearest orthogonal matrix via U @ Vh
matrix = torch.randn(4096, 4096)
u, s, vh = torch.linalg.svd(matrix)
matrix = torch.matmul(u, vh)
print('norm ||WTW - I||: ', torch.norm(torch.mm(matrix, matrix.t()) - torch.eye(matrix.shape[0])))
print(matrix)
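As a quick sanity check, the factors returned by torch.linalg.svd can be tested for orthogonality directly. A minimal sketch (the size and seed here are my own choices, smaller than the 4096 in the post, but the behavior is the same):

```python
import torch

# Check how orthogonal the SVD factors themselves are in float32.
torch.manual_seed(0)
m = torch.randn(1024, 1024)
u, s, vh = torch.linalg.svd(m)
eye = torch.eye(1024)
err_u = torch.linalg.norm(u @ u.T - eye).item()
err_v = torch.linalg.norm(vh @ vh.T - eye).item()
print('||U U^T - I||:  ', err_u)
print('||Vh Vh^T - I||:', err_v)
```

Both residuals come out small but nonzero, at roughly single-precision round-off level for this size, so U and Vh are orthogonal only up to float32 precision, never exactly.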
Changing the precision to float64 helped: ||M·M^T - I|| dropped to the order of 1e-13. I am also using the "gesvd" driver, which uses a QR-based algorithm and is more stable (note that the driver argument is only supported for CUDA inputs):
u, s, vh = torch.linalg.svd(matrix, driver='gesvd')
But I am cautious: will this work reliably, or could an ill-conditioned matrix still fail? Is there a way to get a stable SVD in PyTorch?
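The float64 improvement can be verified side by side. A minimal sketch (size and seed are my own, not from the post) comparing the orthogonality error of U in float32 vs float64 on the same matrix:

```python
import torch

torch.manual_seed(0)
m32 = torch.randn(512, 512)          # float32 by default
m64 = m32.to(torch.float64)          # same matrix in double precision

def orth_error(m):
    """Frobenius norm of U U^T - I for the SVD of m."""
    u, s, vh = torch.linalg.svd(m)
    eye = torch.eye(m.shape[0], dtype=m.dtype)
    return torch.linalg.norm(u @ u.T - eye).item()

err32 = orth_error(m32)
err64 = orth_error(m64)
print(f'float32 ||U U^T - I||: {err32:.2e}')
print(f'float64 ||U U^T - I||: {err64:.2e}')
```

The float64 residual lands several orders of magnitude below the float32 one, consistent with the ~1e-13 figure above; much of the apparent non-orthogonality in single precision is just round-off, both in the SVD and in the check itself.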