antspy
(Ant)
September 28, 2017, 2:29pm
1
Hi,
I have a big sparse matrix in CUDA and I need to multiply it by a dense vector, so I am looking for a sparse x dense -> dense operation, ideally running on the GPU.
I have been searching for a while, but I can’t find any documentation for a sparse x dense -> dense operation. I don’t know the difference between torch.mm, torch.spmm, torch.hsmm, torch.sspmm and all the functions detailed here: http://pytorch.org/docs/master/sparse.html#torch.sparse.FloatTensor.hspmm
What are these functions doing? Which function should I use in my situation?
ezyang
(Edward Z Yang)
September 28, 2017, 2:49pm
2
In master, we reorganized the sparse code to remove all of these extra sparse operations, so you can just use a plain old matrix multiply to go sparse x dense -> dense:
i = torch.LongTensor([[0, 1, 1],
                      [2, 0, 2]])
v = torch.FloatTensor([3, 4, 5])
s = torch.sparse.FloatTensor(i, v, torch.Size([2,3]))
d = s.to_dense()
s.matmul(d.t())
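For the original question (sparse matrix times dense vector), a minimal sketch of the same idea, assuming a current PyTorch build where `torch.sparse_coo_tensor` is available (in older versions the `torch.sparse.FloatTensor` constructor shown above plays the same role):

```python
import torch

# Build a small 2x3 sparse COO matrix, as in the example above.
i = torch.tensor([[0, 1, 1],
                  [2, 0, 2]])
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, (2, 3))

# Multiply by a dense vector, shaped as a (3, 1) matrix for mm.
x = torch.ones(3, 1)
res = torch.mm(s, x)  # sparse x dense -> dense, no densification of s

print(res)  # dense (2, 1) result
```

The dense vector is given a trailing dimension of 1 so that `mm` sees two 2-D operands; the result is an ordinary dense tensor.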
antspy
(Ant)
September 28, 2017, 7:27pm
3
Thank you! Are these changes already available in version 0.1.12_2?
But in your example, wouldn’t calling .to_dense() defeat the purpose of using sparse matrices, by converting them to full tensors?
If I have a sparse matrix and a dense vector, I want to do something like
res_vector = sparse_matrix.mm(dense_vector)
Will that use sparse operations?
ezyang
(Edward Z Yang)
September 28, 2017, 8:08pm
4
The call to to_dense() was just for illustrative purposes, to get a dense tensor to multiply against the sparse one.
I don’t think these changes are in 0.1.12. You’ll probably need master.
Assuming that mm accepts the dimensions of your arguments, mm will use sparse operations.
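A sketch of the pattern from the question above, with hypothetical sizes and random data, checking that the sparse path agrees with the fully dense computation (shapes and the 1% density are illustrative assumptions, not from the thread):

```python
import torch

# Hypothetical problem size: a 200x200 matrix with ~2% nonzeros.
n, nnz = 200, 800
idx = torch.randint(0, n, (2, nnz))
val = torch.randn(nnz)
# coalesce() sums any duplicate indices into a canonical sparse tensor.
sparse_matrix = torch.sparse_coo_tensor(idx, val, (n, n)).coalesce()
dense_vector = torch.randn(n, 1)  # mm needs a 2-D right-hand side

res_vector = sparse_matrix.mm(dense_vector)  # sparse kernel, dense result

# Reference: densify first, then multiply.
ref = sparse_matrix.to_dense().mm(dense_vector)
print(torch.allclose(res_vector, ref, atol=1e-4))
```

On a CUDA build, moving both tensors to the GPU with `.cuda()` before the `mm` call keeps the whole operation on the device.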
antspy
(Ant)
September 28, 2017, 8:20pm
5
Alright, thank you!
So in 0.1.12, torch.mm() will not take advantage of sparse operations?
ezyang
(Edward Z Yang)
September 28, 2017, 8:35pm
6
Well, if it works, it is definitely doing a sparse operation (we didn’t put in any performance-cliff operators).
antspy
(Ant)
September 28, 2017, 8:35pm
7
Great! Thank you for your answer