Sparse Multiplication (sparse x sparse -> sparse)

Does PyTorch currently support the sparse x sparse -> sparse operation? If not, can someone please suggest the best workaround to do it on the GPU?


Are you looking for matrix multiplication, or the Hadamard product? Sparse × sparse matrix multiplication isn’t implemented yet (https://github.com/pytorch/pytorch/issues/5262), but the Hadamard product should work.

I don’t know of a good workaround for that operation on the GPU.

Sorry, I should have specified: I am looking for the Hadamard product. If I use x * y (where x and y are sparse tensors) it works, but somehow the computation time increases when I use the GPU. So I was wondering, is there some function I should use instead of just writing it as x * y?

You’re writing it in the correct way, yes. For small inputs the CPU will probably be faster than the GPU but the GPU should be faster for large inputs. How large are your inputs?
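For reference, a minimal sketch of what this looks like with sparse COO tensors. The shapes, indices, and values below are made up for illustration; the point is just that `x * y` on two sparse tensors computes the elementwise (Hadamard) product and returns a sparse result:

```python
import torch

# Two small sparse COO tensors with partially overlapping nonzeros
# (illustrative values only).
x = torch.sparse_coo_tensor(
    indices=torch.tensor([[0, 0, 1], [0, 2, 1]]),
    values=torch.tensor([1.0, 2.0, 3.0]),
    size=(2, 3),
)
y = torch.sparse_coo_tensor(
    indices=torch.tensor([[0, 1, 1], [0, 1, 2]]),
    values=torch.tensor([4.0, 5.0, 6.0]),
    size=(2, 3),
)

# Hadamard product: only positions where BOTH tensors are nonzero survive.
z = x * y

# Sanity check against the dense computation.
assert torch.equal(z.to_dense(), x.to_dense() * y.to_dense())
```

To run the same operation on the GPU, move both operands with `.cuda()` (or `.to("cuda")`) before multiplying; the result stays sparse and on the same device.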

Currently I was testing it on small matrices only, i.e. 128 × 4000, but I am aiming for something like 128 × 10M. Hopefully the GPU will be faster for those.
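To check this concretely, one can time the same Hadamard product on CPU and GPU. Below is a rough benchmarking sketch; the sizes and nonzero counts are illustrative, and `torch.cuda.synchronize()` is needed because CUDA kernels launch asynchronously, so wall-clock timing without it would be misleading:

```python
import time
import torch

def rand_sparse(rows, cols, nnz, device):
    # Random sparse COO tensor with up to nnz nonzeros
    # (duplicate indices are merged by coalesce()).
    idx = torch.stack([torch.randint(rows, (nnz,)),
                       torch.randint(cols, (nnz,))])
    vals = torch.rand(nnz)
    return torch.sparse_coo_tensor(idx, vals, (rows, cols)).coalesce().to(device)

def time_hadamard(device, rows=128, cols=100_000, nnz=50_000, iters=10):
    x = rand_sparse(rows, cols, nnz, device)
    y = rand_sparse(rows, cols, nnz, device)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for transfers before starting the clock
    t0 = time.perf_counter()
    for _ in range(iters):
        z = x * y  # sparse Hadamard product
    if device == "cuda":
        torch.cuda.synchronize()  # wait for kernels before stopping the clock
    return (time.perf_counter() - t0) / iters

cpu_time = time_hadamard("cpu")
if torch.cuda.is_available():
    gpu_time = time_hadamard("cuda")
```

For small tensors the per-launch overhead tends to dominate on the GPU, which would explain the slowdown observed above; the crossover point depends on the hardware and the number of nonzeros.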