torch.matmul works for a @ b where a is a sparse tensor and b is a dense tensor with more than one dimension.
As of 0.2, a sparse tensor can't be the second operand; putting it there raises an error like
"invalid combination of arguments", because the only accepted signature is
(torch.SparseDoubleTensor source, torch.DoubleTensor mat2) — sparse first, dense second.
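A minimal sketch of the restriction and a workaround, using the modern COO API (`torch.sparse_coo_tensor` / `torch.sparse.mm` rather than the legacy `torch.SparseDoubleTensor` shown above): since only sparse-first products are supported, a dense @ sparse product can be computed by transposing both operands and transposing the result.

```python
import torch

# Build a small sparse 2x2 matrix in COO format.
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([2.0, 3.0])
s = torch.sparse_coo_tensor(indices, values, (2, 2))  # dense: [[0, 2], [3, 0]]
d = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# Sparse as the FIRST operand: supported.
ok = torch.sparse.mm(s, d)

# Sparse as the SECOND operand is not supported directly, but
# d @ s can be rewritten as (s.t() @ d.t()).t(), keeping sparse first.
got = torch.sparse.mm(s.t(), d.t()).t()
want = d @ s.to_dense()
```

The transpose trick relies on the identity (A B)^T = B^T A^T, so no densification of `s` is needed along the computation path.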
To get autograd support, you can define your own dot product by subclassing
torch.autograd.Function, squeezing or unsqueezing dimensions where necessary.
I don't know when this limitation will be resolved.
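A sketch of the custom-Function approach described above, under a couple of assumptions: the class name `SparseMM` is made up for illustration, it uses the modern static-method `torch.autograd.Function` style, and it only propagates a gradient to the dense operand (the sparse matrix is treated as a constant).

```python
import torch

class SparseMM(torch.autograd.Function):
    """Hypothetical sketch: sparse @ dense with a gradient for the dense side."""

    @staticmethod
    def forward(ctx, sparse, dense):
        ctx.save_for_backward(sparse)
        return torch.sparse.mm(sparse, dense)

    @staticmethod
    def backward(ctx, grad_output):
        sparse, = ctx.saved_tensors
        # d(out)/d(dense) = sparse^T @ grad_output; no gradient for the
        # sparse operand in this sketch, hence None in the first slot.
        return None, torch.sparse.mm(sparse.t(), grad_output)

# Usage: backprop through the sparse-dense product.
i = torch.tensor([[0, 1], [1, 0]])
v = torch.tensor([2.0, 3.0])
s = torch.sparse_coo_tensor(i, v, (2, 2))  # dense: [[0, 2], [3, 0]]
x = torch.eye(2, requires_grad=True)
out = SparseMM.apply(s, x)
out.sum().backward()  # x.grad = s.t() @ ones(2, 2)
```

With 1-D inputs you would unsqueeze them to matrices before calling `apply` and squeeze the result afterwards, which is the squeeze/unsqueeze step mentioned above.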