Batched LU factorization of sparse tensors on GPU

Hi,

I need to solve many sparse linear systems using LU factorization (ideally on GPU). Is this possible with the current implementation of PyTorch? Thanks

You can use the torch.lu function for that.
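For dense inputs torch.lu supports batching out of the box, so a minimal sketch of the batched GPU path would look like this (on recent PyTorch releases the same functionality is exposed as torch.linalg.lu_factor / torch.linalg.lu_solve):

import torch

A = torch.randn(128, 3, 3, device='cuda')  # batch of 128 dense 3x3 systems
b = torch.randn(128, 3, 1, device='cuda')

A_LU, pivots = torch.lu(A)           # one batched LU factorization on the GPU
x = torch.lu_solve(b, A_LU, pivots)  # solve all 128 systems at once

Note that this is the dense code path; sparse tensors are a separate question.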

Thanks. It seems that it does not work with sparse tensors (trying without batching):

import torch

# COO format: one (row, col) index pair per nonzero value
i = torch.LongTensor([[0, 2], [2, 0], [1, 1]])
v = torch.FloatTensor([3, 4, 5])
A = torch.sparse.FloatTensor(i.t(), v, torch.Size([3, 3])).cuda()

A_LU = torch.lu(A)

RuntimeError: Didn't find kernel to dispatch to for operator 'aten::_lu_with_info'. Tried to look up kernel for dispatch key 'SparseCUDATensorId'. Registered dispatch keys are: [CUDATensorId, CPUTensorId, VariableTensorId]

I'm getting the same error, even without CUDA.

i = torch.LongTensor([[0, 1, 2], [0, 1, 2]])
v = torch.FloatTensor([1, 1, 1])
t = torch.sparse.FloatTensor(i, v, torch.Size([3, 3]))
t.to_dense()

tensor([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])

t.lu()

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-25-9f1f2db7f67d> in <module>
----> 1 t.lu()

~/anaconda3/envs/audio-built/lib/python3.7/site-packages/torch/tensor.py in lu(self, pivot, get_infos)
    294         r"""See :func:`torch.lu`"""
    295         # If get_infos is True, then we don't need to check for errors and vice versa
--> 296         LU, pivots, infos = torch._lu_with_info(self, pivot=pivot, check_errors=(not get_infos))
    297         if get_infos:
    298             return LU, pivots, infos

RuntimeError: Didn't find kernel to dispatch to for operator 'aten::_lu_with_info'. Tried to look up kernel for dispatch key 'SparseCPUTensorId'. Registered dispatch keys are: [CPUTensorId, VariableTensorId]

In your case, you will need to convert to a dense tensor first:

i = torch.LongTensor([[0, 1, 2], [0, 1, 2]])
v = torch.FloatTensor([1, 1, 1])
t = torch.sparse.FloatTensor(i, v, torch.Size([3, 3]))
a = t.to_dense()  # necessary: torch.lu has no sparse kernel

a.lu()
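If the individual matrices are small enough that densifying is acceptable, the same trick recovers the batched GPU path from the original question. A sketch (with a hypothetical batch of 64 identical systems; this will not scale to genuinely large sparse matrices):

import torch

# build a small sparse system and replicate it into a dense batch
i = torch.LongTensor([[0, 1, 2], [0, 1, 2]])
v = torch.FloatTensor([1, 1, 1])
t = torch.sparse.FloatTensor(i, v, torch.Size([3, 3]))

A = torch.stack([t.to_dense() for _ in range(64)]).cuda()  # shape (64, 3, 3)
b = torch.randn(64, 3, 1, device='cuda')

A_LU, pivots = torch.lu(A)           # batched factorization
x = torch.lu_solve(b, A_LU, pivots)  # batched solve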

I just meant to confirm that I also get this error :slight_smile:

@bamos: any thoughts? I noticed you posted something similar in the past. Thanks.

I think the answer is that no, we don't have an LU factorization for sparse tensors available. You will have to convert to a dense tensor.
If you are interested in contributing an implementation of LU factorization for sparse tensors, we would be happy to help and accept it :slight_smile:

Thanks for your reply. I will look into it again once I have the bandwidth.