Hi,
I’ve seen some recent progress on SparseTensors, and I have some questions:
Do you have a roadmap for the whole sparse module? I'm curious whether you're planning to use cuSPARSE, and whether there is any way to contribute to this module.
I'd need a backward pass for sparse_mat @ dense_mat. What are my options? Can I write a Function that uses the available spmm for the forward pass and some quick-and-dirty Cython code for the backward? That would mean returning a sparse grad; is this supported?
Thank you for your time; I'm really enjoying PyTorch so far.
torch.smm implements sparse * dense -> sparse, and it assumes the sparsity is strong enough to pay off (you'll need a really sparse tensor in the forward op). Sparse gradients are supported, and are implemented in the Embedding module.
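For concreteness, a minimal sketch of what that looks like (the sparse operand is built with torch.sparse_coo_tensor; shapes and values here are made up):

```python
import torch

# Made-up shapes: a very sparse 1000x1000 matrix times a dense 1000x4 one.
i = torch.tensor([[0, 2, 5], [1, 4, 3]])          # COO indices (2 x nnz)
v = torch.tensor([1.0, 2.0, 3.0])                 # nonzero values
s = torch.sparse_coo_tensor(i, v, (1000, 1000))   # sparse operand
d = torch.randn(1000, 4)                          # dense operand

out = torch.smm(s, d)   # sparse * dense -> sparse
print(out.is_sparse)    # True
```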
For now, sparse tensor support basically evolves with the needs of our respective projects. Basic GPU support is being worked on; it will rely on cuSPARSE for some operations.
Returning sparse gradients from backward should work.
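For example, something along these lines should do it. This is only a sketch, not library code: the SparseDenseMM name and the mask-to-the-input-pattern strategy are illustrative. The backward restricts the dense gradient to the input's nonzero positions, so the returned grad stays sparse (the same way Embedding only touches the rows it saw):

```python
import torch

class SparseDenseMM(torch.autograd.Function):
    # Sketch of sparse_mat @ dense_mat with a sparse gradient for the
    # sparse operand; not the library's own implementation.

    @staticmethod
    def forward(ctx, sparse, dense):
        sparse = sparse.coalesce()
        ctx.save_for_backward(sparse, dense)
        return torch.mm(sparse, dense)  # dispatches to spmm for a sparse lhs

    @staticmethod
    def backward(ctx, grad_output):
        sparse, dense = ctx.saved_tensors
        grad_sparse = grad_dense = None
        if ctx.needs_input_grad[0]:
            # The full gradient w.r.t. the sparse operand would be
            # grad_output @ dense.t(); keeping only the entries at the
            # input's nonzero positions keeps the gradient sparse and
            # treats the zeros as structural.
            idx = sparse._indices()
            full = grad_output.mm(dense.t())
            grad_sparse = torch.sparse_coo_tensor(
                idx, full[idx[0], idx[1]], sparse.shape)
        if ctx.needs_input_grad[1]:
            grad_dense = torch.mm(sparse.t(), grad_output)
        return grad_sparse, grad_dense
```

You'd call it as out = SparseDenseMM.apply(s, d). Masking to the input's pattern is usually what you want when the sparsity pattern is fixed.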
In addition to the thread @ebetica pointed to: since PyTorch supports hybrid tensors (i.e., tensors with both sparse and dense dimensions), we may add a sparse * dense -> hybrid function in the future (where "hybrid" here means one sparse dimension and one dense dimension). That would probably be more efficient in this case.
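To make "hybrid" concrete, here is a sketch of such a tensor: one sparse dimension indexing rows, with each stored value being a whole dense row (constructor and shapes are illustrative):

```python
import torch

i = torch.tensor([[0, 3]])                    # 1 sparse dim, 2 nonzero rows
v = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])           # each value is a dense row
h = torch.sparse_coo_tensor(i, v, (5, 3))     # shape (5, 3), hybrid layout
print(h.sparse_dim(), h.dense_dim())          # 1 1
```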
I've just noticed that there is also an implementation of SparseLinear here. Can I use those C functions to create a Function that simply calls self._backend.SparseLinear_accGradParameters, just like EmbeddingFunction does (link)? Or is there some catch here?
I know that you guys probably have a SparseLinear module on your roadmap, but I wanted to play around with it over the weekend, hence the question.