Sparse Tensors in PyTorch

What is the current state of sparse tensors in PyTorch?


Right now, the description/summary of this PR that was merged 11 hours ago gives a good idea of the current state of things:


But we’re not documenting them on purpose, because they might undergo some more changes in the future. The first step was to implement sparse updates for Embedding. Can I ask what your use case is?


I just need basic sparse matrix multiplication in order to implement a Graph ConvNet model, where each layer multiplies the graph Laplacian by a dense activation matrix. The Laplacian matrix is extremely sparse in this case.
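For concreteness, a typical layer in such a model has the form (a hedged sketch of the usual graph-convolution propagation rule; the exact equations from the post aren’t reproduced here):

$$H^{(l+1)} = \sigma\left(L \, H^{(l)} \, W^{(l)}\right)$$

where $L$ is the sparse (normalized) graph Laplacian, $H^{(l)}$ is a dense matrix of node features, and $W^{(l)}$ is a dense weight matrix, so the expensive product is exactly sparse x dense.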

You need sparse x sparse -> sparse multiplication, right? Right now we only have sparse x dense -> dense and sparse x dense -> sparse, because that’s what we needed for sparse Embedding updates. You can open a feature request if you want.

I need sparse x dense -> dense, so I can use PyTorch in this case. Thanks a lot!

torch.mm and torch.spmm should work, then.
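For example, a minimal sketch of sparse x dense -> dense, assuming a PyTorch version with the torch.sparse_coo_tensor constructor (older versions spelled this torch.sparse.FloatTensor):

```python
import torch

# Build a small sparse matrix in COO format, standing in for a graph Laplacian.
indices = torch.tensor([[0, 1, 2],
                        [2, 0, 1]])          # row / column indices of nonzeros
values = torch.tensor([1.0, 2.0, 3.0])
L = torch.sparse_coo_tensor(indices, values, size=(3, 3))

X = torch.randn(3, 4)                        # dense feature matrix

Y = torch.mm(L, X)                           # sparse x dense -> dense
print(Y.shape)                               # torch.Size([3, 4])
```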

But it’s not in autograd yet?

Is there a fast way to add it?

A fast and local way is to write an autograd Function yourself.
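A minimal sketch of such a Function, assuming the static-method torch.autograd.Function API and that only the dense operand needs a gradient (as in the Laplacian-times-features use case above):

```python
import torch

class SparseMM(torch.autograd.Function):
    """Sparse x dense -> dense mm with a gradient for the dense input only."""

    @staticmethod
    def forward(ctx, sparse, dense):
        ctx.save_for_backward(sparse)
        return torch.mm(sparse, dense)

    @staticmethod
    def backward(ctx, grad_output):
        sparse, = ctx.saved_tensors
        # d(S @ D)/dD = S^T, so the gradient w.r.t. the dense input
        # is S^T @ grad_output (again a sparse x dense product).
        grad_dense = torch.mm(sparse.t(), grad_output)
        return None, grad_dense  # no gradient for the sparse matrix itself

# Usage:
indices = torch.tensor([[0, 1], [1, 0]])
S = torch.sparse_coo_tensor(indices, torch.tensor([1.0, 2.0]), (2, 2))
D = torch.randn(2, 3, requires_grad=True)
out = SparseMM.apply(S, D)
out.sum().backward()
print(D.grad.shape)  # torch.Size([2, 3])
```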


That’s my question too (now on 21st Sept). Can anyone comment on the current state of sparse tensors in PyTorch?

Thank you


I would like to update a variable with sparse gradients, which is a common need. I know that wasn’t supported by TensorFlow, so how about PyTorch? Thank you!
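For what it’s worth, a short sketch of how sparse gradient updates look in PyTorch, assuming nn.Embedding(sparse=True) and the torch.optim.SparseAdam optimizer (optim.SGD also accepts sparse gradients):

```python
import torch
import torch.nn as nn

# sparse=True makes the embedding produce sparse gradients,
# so the optimizer only touches rows that were actually used.
emb = nn.Embedding(10000, 64, sparse=True)
opt = torch.optim.SparseAdam(emb.parameters(), lr=1e-3)

ids = torch.tensor([3, 17, 42])       # only these rows get gradients
loss = emb(ids).sum()
loss.backward()
print(emb.weight.grad.is_sparse)      # True
opt.step()
```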


What is the status of sparse support in PyTorch? I’d like to contribute to the module. What are the current design strategies in place? There seems to be no detailed or comprehensive discussion of this aspect.

https://pytorch.org/docs/stable/sparse.html#sparse-coo-tensors
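For a quick taste of the linked API, here is a short sketch (assuming a current PyTorch release) of building, coalescing, and inspecting a sparse COO tensor:

```python
import torch

i = torch.tensor([[0, 0, 1],
                  [2, 2, 0]])          # note the duplicate (0, 2) entry
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, size=(2, 3))

s = s.coalesce()       # sums duplicates; the (0, 2) entry becomes 7.0
print(s.indices())     # indices of the nonzero entries
print(s.values())      # their values
print(s.to_dense())    # convert back to a dense tensor
```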