Why are SVD, inverse, and other matrix operations only defined for 2-D tensors?

I noticed that torch.svd and torch.inverse are only defined over 2-dimensional tensors. Why is this the case? Is there no efficient way to implement a batch SVD or batch inverse for tensors of shape (*, M, N) or (*, M, M), respectively? Or better yet, some way to specify which two axes to use?
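For concreteness, here’s a minimal sketch of what I mean (shapes are arbitrary, chosen only for illustration):

```python
import torch

A = torch.randn(5, 3)          # a single 2-D matrix
U, S, V = torch.svd(A)         # fine: SVD of one matrix

A_sq = torch.randn(3, 3)
A_inv = torch.inverse(A_sq)    # fine: inverse of one square matrix

batch = torch.randn(10, 5, 3)  # a batch of ten 5x3 matrices
# torch.svd(batch)             # no batched path: only 2-D input is accepted
```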

Perhaps there’s some reason these operations don’t parallelize well?


How would you compute a batch SVD? Anyway, can’t you just iterate over the batch dimension?
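Something like this, I mean (a rough sketch; shapes chosen for illustration):

```python
import torch

batch = torch.randn(8, 4, 4)  # eight 4x4 matrices

# Python-level loop over the batch dimension, stacking the
# per-matrix results back into single tensors:
Us, Ss, Vs = zip(*(torch.svd(m) for m in batch))
U = torch.stack(Us)  # shape (8, 4, 4)
S = torch.stack(Ss)  # shape (8, 4)
V = torch.stack(Vs)  # shape (8, 4, 4)

inverses = torch.stack([torch.inverse(m) for m in batch])  # shape (8, 4, 4)
```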

I’m not sure how I’d implement it; that’s why I asked the question. On a GPU, I imagine a block would run the necessary SVD or inversion for each problem in the set. Is there something inherent to these matrix operations that prohibits this kind of implementation?

As for why you wouldn’t want to just iterate over the batch: suppose you have an image with a matrix at each pixel, giving a tensor of shape BxHxWxKxK. You might then want to invert the KxK matrix at each pixel for some reason, or run SVD over it. Iterating over each pixel would be absurdly slow, but the operations at each pixel are completely independent of one another, which lends itself to parallelization (see the sketch below).
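A sketch of that per-pixel case, to make the cost of looping concrete (the shapes here are hypothetical):

```python
import torch

B, H, W, K = 4, 64, 64, 3
pixels = torch.randn(B, H, W, K, K)  # one KxK matrix per pixel

# Flatten the leading dimensions, loop over every pixel's matrix,
# then restore the original layout. The loop body runs B*H*W
# (here 16384) times, even though each inversion is independent.
flat = pixels.reshape(-1, K, K)
inv = torch.stack([torch.inverse(m) for m in flat]).reshape(B, H, W, K, K)
```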
