I am a beginner trying to learn PyTorch and there is one question bugging me. I know PyTorch supports the sparse x dense -> dense operation in torch.mm. However, I don't think it currently supports autograd on sparse variables (say, a sparse matrix). For example:
import torch

x = torch.sparse.FloatTensor(2, 10)   # empty sparse 2x10 matrix
y = torch.FloatTensor(10, 5)          # dense 10x5 matrix
sx = torch.autograd.Variable(x)
sy = torch.autograd.Variable(y)
torch.mm(sx, sy)  # fails
Error: TypeError: Type torch.sparse.FloatTensor doesn't implement stateless method addmm
I tried torch.mm and torch.matmul for sparse x dense matrix multiplication, but I got the "torch.sparse.FloatTensor doesn't implement stateless method addmm" error from both.
In my tests, torch.matmul works for a x b where a is a sparse tensor and b is dense with more than one dimension. As of 0.2, however, the sparse tensor can't be the second operand; otherwise you get an error like "invalid combination of arguments", telling you that (torch.SparseDoubleTensor source, torch.DoubleTensor mat2) is what is expected.
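To illustrate, here is a minimal sketch using the old 0.x tensor API (the indices and values are made up just for this example, and the behavior follows what is described above):

import torch

i = torch.LongTensor([[0, 1], [2, 4]])  # 2 x nnz index matrix
v = torch.FloatTensor([1.0, 2.0])       # nnz values
a = torch.sparse.FloatTensor(i, v, torch.Size([2, 10]))  # sparse 2x10
b = torch.FloatTensor(10, 5).normal_()                   # dense 10x5, dim > 1

out = torch.matmul(a, b)   # sparse first operand: works
# torch.matmul(b.t(), a)   # sparse second operand: "invalid combination of arguments"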
To get autograd working, you can define your own product as a subclass of torch.autograd.Function, and squeeze or unsqueeze dimensions where necessary.
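Something along these lines, for example (a rough sketch in the old pre-0.4 Function style; the name SparseMM is made up, and gradients only flow to the dense operand here):

import torch

class SparseMM(torch.autograd.Function):
    # Hypothetical sparse x dense product with autograd support.
    # The sparse matrix is treated as a constant; only the dense
    # operand receives gradients.
    def __init__(self, sparse):
        super(SparseMM, self).__init__()
        self.sparse = sparse

    def forward(self, dense):
        return torch.mm(self.sparse, dense)

    def backward(self, grad_output):
        # d(sparse @ dense) / d(dense) = sparse^T @ grad_output
        grad_input = None
        if self.needs_input_grad[0]:
            grad_input = torch.mm(self.sparse.t(), grad_output)
        return grad_input

# usage: out = SparseMM(a)(sy), where a is a sparse tensor
# and sy is a dense Variable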
This workaround was working fine with SGD. Today I tried Adadelta and RMSprop but got the following error: 'torch.cuda.sparse.FloatTensor' object has no attribute 'addcmul_' (presumably because those optimizers' running-average updates call addcmul_, which sparse tensors don't implement).
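One stopgap that might work (just a sketch; `model` and `optimizer` here are assumed to be your own module and optimizer) is to densify any sparse gradients before the optimizer step:

# Hypothetical stopgap: convert sparse gradients to dense so that
# optimizers relying on addcmul_ (Adadelta, RMSprop) can update them.
for p in model.parameters():
    if p.grad is not None and p.grad.data.is_sparse:
        p.grad.data = p.grad.data.to_dense()
optimizer.step()

Of course, this throws away the memory benefit of sparse gradients, so it is only practical when the matrices involved are small.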
I am curious whether the PR @smth mentioned has landed yet?
So far I am on version 0.2.0+5de7f9e, and it seems torch.mm still doesn't support sparse x dense -> dense?
I have also raised this in https://github.com/pytorch/pytorch/issues/2389#issuecomment-342119147
Just curious: what is the fastest way to make this work?
Or have I just picked the wrong function to use?