# Where to find doc for sparse tensor operation?

Hi,

I have a big sparse matrix in CUDA and I need to multiply it by a dense vector, so I am looking for a sparse x dense -> dense operation, ideally running on the GPU.

I have been searching for a while, but I can’t find any documentation for a sparse x dense -> dense operation; I don’t know the difference between torch.mm, torch.spmm, torch.hsmm, torch.sspmm and the other functions detailed here: http://pytorch.org/docs/master/sparse.html#torch.sparse.FloatTensor.hspmm

What do these functions do? Which one should I use in my situation?

In master, we reorganized the sparse code to remove all of these extra sparse operations, so you can just use a plain old matrix multiply to go sparse x dense -> dense.

```python
i = torch.LongTensor([[0, 1, 1],
                      [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
s = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
d = s.to_dense()
s.matmul(d.t())
```
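For completeness, here is a minimal sketch of the sparse x dense -> dense product itself, where only the right-hand operand is dense, so nothing is densified beforehand. It uses the newer `torch.sparse_coo_tensor` constructor available in current PyTorch (the `torch.sparse.FloatTensor` spelling above is the legacy form):

```python
import torch

# Same sparse 2x3 matrix as above: entries (0,2)=3, (1,0)=4, (1,2)=5
i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, (2, 3))

# A dense 3x2 matrix; torch.mm(sparse, dense) returns a dense 2x2 result
d = torch.ones(3, 2)
out = torch.mm(s, d)
print(out)  # tensor([[3., 3.], [9., 9.]])
```

The sparse matrix never goes through `to_dense()` here; `mm` dispatches to a sparse kernel and only the result is dense.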

Thank you! Are these changes already available in version 0.1.12_2?

But in your example, wouldn’t calling .to_dense() defeat the purpose of using sparse matrices, by converting the tensor to a full dense one?

If I have a sparse matrix and a dense vector, I want to do something like

```python
res_vector = sparse_matrix.mm(dense_vector)
```

Will that use sparse operations?

The call to `to_dense()` was just for illustrative purposes, to get a dense tensor to multiply against the sparse one. I don’t think these changes are in 0.1.12; you’ll probably need master.

Assuming that mm accepts the dimensions of your arguments, mm will use sparse operations.
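To illustrate the dimension requirement: `mm` expects 2-D arguments on both sides, so a length-n vector can be carried as an n×1 column matrix. A sketch with a hypothetical 3×3 diagonal sparse matrix (names are illustrative):

```python
import torch

# Hypothetical 3x3 sparse matrix with entries (0,0)=1, (1,1)=2, (2,2)=3
i = torch.tensor([[0, 1, 2],
                  [0, 1, 2]])
v = torch.tensor([1.0, 2.0, 3.0])
sparse_matrix = torch.sparse_coo_tensor(i, v, (3, 3))

# mm is strictly 2-D x 2-D, so reshape the vector into a 3x1 column...
dense_vector = torch.tensor([1.0, 1.0, 1.0])
res = sparse_matrix.mm(dense_vector.unsqueeze(1))  # dense, shape (3, 1)

# ...and squeeze back to a 1-D vector if needed
res_vector = res.squeeze(1)
print(res_vector)  # tensor([1., 2., 3.])
```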

Alright, thank you! So in 0.1.12 torch.mm() will not take advantage of sparse operations?

Well, if it works, it is definitely doing a sparse operation (we didn’t put in any performance-cliff operators).


Great! Thank you for your answer 