elvira29
(Elvira Akhiyarova)
June 27, 2017, 1:09pm
1
Hi all,
I need to perform the SVD operation on a batch of matrices. Is it possible to do this in a simple way with PyTorch? It is a standard function in NumPy (https://docs.scipy.org/doc/numpy-1.12.0/reference/generated/numpy.linalg.svd.html) and I would like to know if this can be reproduced in PyTorch. Thanks
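For reference, NumPy's `numpy.linalg.svd` accepts a stack of matrices directly and decomposes along the last two axes; a minimal sketch:

```python
import numpy as np

# a batch of 4 random 3x2 matrices; svd operates on the last two axes
x = np.random.rand(4, 3, 2)
u, s, vt = np.linalg.svd(x, full_matrices=False)

# shapes: u is (4, 3, 2), s is (4, 2), vt is (4, 2, 2)
recon = u @ (s[..., None] * vt)  # reconstruct each matrix from its factors
print(np.allclose(recon, x))  # True
```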
fmassa
(Francisco Massa)
June 27, 2017, 2:21pm
2
These notes seem relevant for you.
But there is no support for batched SVD in PyTorch yet. It seems that NumPy just does a for loop over the batches, which would be easy to add on the CPU side, but on the GPU side it would be trickier.
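Until batched support lands, a plain Python loop over the leading dimension is a straightforward workaround; a sketch (the function name is my own):

```python
import torch

def svd_loop(x):
    # x: (batch, m, n) -> per-slice SVD, results stacked back along dim 0
    results = [torch.svd(mat) for mat in x.unbind(0)]
    U = torch.stack([r[0] for r in results])
    D = torch.stack([r[1] for r in results])
    V = torch.stack([r[2] for r in results])
    return U, D, V

x = torch.randn(4, 5, 3)
U, D, V = svd_loop(x)
print(U.shape, D.shape, V.shape)
# torch.Size([4, 5, 3]) torch.Size([4, 3]) torch.Size([4, 3, 3])
```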
ferrine
(Maxim Kochurov)
November 7, 2018, 6:08pm
3
You can use this code until it is implemented officially (I do not have it on 0.4.1): https://gist.github.com/ferrine/0c0e03bd21323a048baab8dadc83cdcc
UPD: after experiments with TorchScript, I failed to make it faster than the loop implementation.
ferrine
(Maxim Kochurov)
February 5, 2019, 12:09am
4
Hi again, guys!
I have some updates on this. I’ve implemented all this stuff on top of TorchScript, but got stuck with a for loop. This works (both CPU and GPU), but could be further optimized if only there were a parallel_for loop. Any ideas or plans to implement one?
import torch

def batch_svd(x):
    # prolonged here:
    if x.dim() == 2:
        # 17 milliseconds on my mac to check that condition, that is low overhead
        return torch.svd(x)
    else:
        batches = x.shape[:-2]
        other = x.shape[-2:]
        flat = x.view((-1,) + other)
        slices = flat.unbind(0)
        U, D, V = [], [], []
        # I wish I had a parallel_for
        for i in range(flat.shape[0]):
            u, d, v = torch.svd(slices[i])
            U.append(u)
            D.append(d)
            V.append(v)
        U = torch.stack(U).view(batches + U[0].shape)
        D = torch.stack(D).view(batches + D[0].shape)
        V = torch.stack(V).view(batches + V[0].shape)
        return U, D, V
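One way to sanity-check any batched SVD along these lines is to reconstruct each slice from its factors; a self-contained loop-based check (my own sketch, not part of the snippet above):

```python
import torch

x = torch.randn(2, 3, 4, 5)    # arbitrary leading batch dimensions
flat = x.reshape(-1, 4, 5)     # flatten the batch dims, as in the snippet
for mat in flat.unbind(0):
    u, d, v = torch.svd(mat)
    # U diag(S) V^T should recover the original slice
    assert torch.allclose(u @ torch.diag(d) @ v.t(), mat, atol=1e-5)
print("all slices reconstructed")
```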