Dense_tensor.mm(sparse_tensor)

Hi there,

When I run the code

out = dense_tensor.mm(sparse_tensor)

it gives the following error:

RuntimeError: Expected object of type Variable[torch.cuda.FloatTensor] but found type Variable[torch.cuda.sparse.FloatTensor] for argument #1 'mat2'

If I first convert sparse_tensor to a dense tensor with sparse_tensor.to_dense() and then run torch.mm, it tells me to buy new RAM:

RuntimeError: $ Torch: not enough memory: you tried to allocate 600GB. Buy new RAM! at /pytorch/torch/lib/TH/THGeneral.c:246
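For reference, to_dense() materializes every entry, so the allocation is rows * cols * 4 bytes for float32 regardless of how few nonzeros there are. A minimal sketch with made-up shapes (my real sizes differ); a 400k x 400k float32 matrix alone is about 640 GB dense, the same order as the error:

import torch

i = torch.LongTensor([[0, 1], [2, 0]])   # COO indices, shape (2, nnz)
v = torch.FloatTensor([3.0, 4.0])        # nonzero values
s = torch.sparse.FloatTensor(i, v, torch.Size([400000, 400000]))
# s.to_dense() would try to allocate 400000 * 400000 * 4 bytes, about 640 GB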

How can I solve this problem? PyTorch version: 0.3.0.post4
Thanks.

PyTorch doesn’t support dense * sparse matrix multiplication right now. Out of curiosity, what are you using this for?

Thanks, and what about sparse * dense?

At least one of sparse_tensor.mm(dense_tensor) or torch.mm(sparse_tensor, dense_tensor) should work.
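For example, a minimal sketch (the shapes here are made up):

import torch

i = torch.LongTensor([[0, 0, 1], [0, 2, 1]])   # COO indices, shape (2, nnz)
v = torch.FloatTensor([1.0, 2.0, 3.0])         # nonzero values
sparse_tensor = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
dense_tensor = torch.randn(3, 4)

out = torch.mm(sparse_tensor, dense_tensor)    # sparse * dense -> dense, shape (2, 4)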

Thanks. I think I’m stuck on the problem in this issue: torch.autograd.Variable does not support sparse operations.

Yeah, you won’t be able to do a .backward() if you use torch.mm(sparse, dense) because dense * sparse matrix multiplication isn’t implemented.

I changed the code from dense.mm(sparse) to sparse.mm(dense) as you suggested. Then I hit the same problem described in that issue, with the error

RuntimeError: Expected object of type Variable[torch.cuda.sparse.FloatTensor] but found type Variable[torch.cuda.FloatTensor] for argument #1 'mat2'

sad~
The trouble is that torch.mm(sparse, dense) works with tensors but not with Variables, because sparse ops don’t support autograd.

Oh, my bad. torch.mm(sparse_variable, dense_variable) is implemented on the master branch but not in 0.3. You can try building from source (instructions here) to use it.
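Something like this should then work (a sketch, assuming the same sparse_tensor and dense_tensor setup as above):

from torch.autograd import Variable

sparse_var = Variable(sparse_tensor)
dense_var = Variable(dense_tensor, requires_grad=True)
out = torch.mm(sparse_var, dense_var)
out.sum().backward()                           # gradient accumulates in dense_var.grad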

Thank you very much, I will try that. Another question: will sparse_variable.mm(dense_variable) avoid the memory error?

RuntimeError: $ Torch: not enough memory: you tried to allocate 600GB. Buy new RAM! at /pytorch/torch/lib/TH/THGeneral.c:246

The dimension of the dense matrix is about 100k; I know this is quite large.

sparse_variable * dense_variable gives you a dense_variable as the output. If the output is too large to fit into memory, then I would say no, it won’t fix it. It depends on the output size :slight_smile:

Sparse torch.Size([2, 100k]) * dense torch.Size([100k, 1000]): the output size is small. So this case should work well, right?

Your dense tensor has 10^8 elements; assuming a minimum of 4 bytes per element, that’s 400 MB for it.
So it should be fine. I’m not sure why a dense * dense multiplication wouldn’t work for you.
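Quick back-of-the-envelope check:

rows, cols = 100000, 1000
dense_bytes = rows * cols * 4    # float32 takes 4 bytes per element
print(dense_bytes / 1e6)         # 400.0, i.e. about 400 MB for the dense input
out_bytes = 2 * 1000 * 4         # the (2, 1000) output is only about 8 KB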

On the GPU, I observe that it costs about 6 GB of GPU memory just to do the torch.mm().

Hi Richard, does PyTorch support loss.backward() now when loss = torch.mm(sparse, dense)? In v0.4.1 or v1.0?

PyTorch version 0.4.1:
Weirdly, torch.mm(dense, sparse) does not work, but torch.mm(sparse, dense) works just fine, and backward() also works. I’ve managed to work around this by taking a transpose; thankfully my sparse matrix is symmetric.
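For anyone hitting the same thing, the transpose workaround looks like this (a sketch; sparse and dense are illustrative names):

# dense * sparse is not implemented, but (sparse^T * dense^T)^T is the same product
out = torch.mm(sparse.t(), dense.t()).t()

# if sparse is symmetric, sparse.t() equals sparse, so it simplifies to
out = torch.mm(sparse, dense.t()).t()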