I am trying to multiply a sparse COO PyTorch tensor (which requires grad) by a dense matrix using torch.sparse.mm, and I am running into an error with the following lines of code:
# sparse is a PyTorch sparse_coo tensor,
# dense is a dense matrix. Both have requires_grad
# set to True. This line of code works normally.
output = torch.sparse.mm(sparse, dense).diagonal().unsqueeze(1)
# both the dense and sparse matrices are functions of the
# w tensor; grad is torch.autograd.grad. This line errors.
output_w = grad(output, w, grad_outputs=torch.ones_like(output),
create_graph=True)[0]
Running this, I get the following error:
RuntimeError: The backward pass for this operation requires the 'self' tensor to be strided, but a sparse tensor was given instead. Please either use a strided tensor or set requires_grad=False for 'self'
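For reference, here is a minimal self-contained sketch of the setup (the shapes and the particular way sparse and dense depend on w are made up for illustration; my real code builds them differently):

import torch
from torch.autograd import grad

w = torch.randn(3, requires_grad=True)

# a 3 x 3 sparse COO matrix whose values depend on w
indices = torch.tensor([[0, 1, 2],
                        [1, 2, 0]])
sparse = torch.sparse_coo_tensor(indices, w, (3, 3))

# a 3 x 3 dense matrix that also depends on w
dense = torch.randn(3, 3) * w

# this line works
output = torch.sparse.mm(sparse, dense).diagonal().unsqueeze(1)

# this line raises the RuntimeError quoted above
output_w = grad(output, w, grad_outputs=torch.ones_like(output),
                create_graph=True)[0]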
PyTorch support for sparse-tensor operations is, forgive me, sparse.
Per the documentation, you can backward() (or grad()) through torch.sparse.mm(), but, per various tests, you cannot do so with create_graph=True; the sketch below illustrates the distinction.
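A minimal sketch of that behaviour (the shapes and values are arbitrary, made up for illustration):

import torch

indices = torch.tensor([[0, 1, 2],
                        [1, 2, 0]])
values = torch.tensor([1.0, 2.0, 3.0])
sparse = torch.sparse_coo_tensor(indices, values, (3, 3)).requires_grad_()
dense = torch.randn(3, 3, requires_grad=True)

out = torch.sparse.mm(sparse, dense).sum()

# plain first-order backward(): works
out.backward()
print(dense.grad)    # strided gradient for the dense factor
print(sparse.grad)   # sparse gradient for the sparse factor

# but asking autograd to keep the graph, e.g.
#   torch.autograd.grad(out, dense, create_graph=True)
# trips the "strided" RuntimeError quoted in the question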
Poking around the internet, I find a GitHub repository that claims to be a “PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations” and that contains a suggestively-named torch_sparse.spmm() function.
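Here is a sketch of what using it might look like, assuming the spmm(index, value, m, n, dense) call shown in that repository's README; whether its backward actually supports create_graph=True is something you would have to verify for your versions:

import torch
from torch.autograd import grad
from torch_sparse import spmm

w = torch.randn(3, requires_grad=True)

# the sparse matrix is passed as (index, value) pairs instead of a
# sparse_coo tensor; here its values depend on w
index = torch.tensor([[0, 1, 2],
                      [1, 2, 0]])
value = w

# a 3 x 3 dense matrix that also depends on w
dense = torch.randn(3, 3) * w

# (3 x 3 sparse) @ (3 x 3 dense)
output = spmm(index, value, 3, 3, dense).diagonal().unsqueeze(1)

output_w = grad(output, w, grad_outputs=torch.ones_like(output),
                create_graph=True)[0]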