Backpropagation through a sparse matrix seems problematic (Colab provided)

Hi, everyone,

I find that when I index a sparse matrix, the forward pass works fine, but something goes wrong in the backpropagation. I guess indexing in the sparse setting is problematic: the gradients for the sparse matrix come back empty.

To reproduce, I put together a toy case in Colab here: colab toy case
But the key part is:

# construct sparse mat
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

ind = torch.LongTensor([[0, 1, 1, 3],
                        [2, 1, 2, 3]])
vals = torch.FloatTensor([3, 4, 5, 9]).requires_grad_(True)
sp1 = torch.sparse.FloatTensor(ind, vals, torch.Size([5, 5])).to(device)

sp1 = sp1.detach().requires_grad_(True)

# check the gradients of sp1 (hook registered before backward)
sp1.register_hook(lambda grad: print("==grad== sp1 :\n", grad))  # sp1's gradients are all zeros, which is problematic

losses = []
for i in range(sp1.shape[0]):
    loss = sp1[i].to_dense().sum()
    losses.append(loss)
l = sum(losses)
l.backward()


Printed output:

==grad== sp1 :
 tensor(indices=tensor([], size=(2, 0)),
       values=tensor([], size=(0,)),
       device='cuda:0', size=(5, 5), nnz=0, layout=torch.sparse_coo)
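For comparison, here is a minimal sketch (my own, not from the Colab) suggesting the problem is specific to integer indexing: when the whole sparse tensor is densified at once, gradients do flow back to the values tensor. I am assuming `torch.sparse_coo_tensor` here, which is the non-deprecated constructor; the behavior should match `torch.sparse.FloatTensor`.

```python
import torch

# same toy data as above
ind = torch.LongTensor([[0, 1, 1, 3],
                        [2, 1, 2, 3]])
vals = torch.FloatTensor([3, 4, 5, 9]).requires_grad_(True)

# construct the sparse tensor directly from the differentiable values
sp = torch.sparse_coo_tensor(ind, vals, (5, 5))

# densify the whole tensor (no integer indexing) and sum
loss = sp.to_dense().sum()
loss.backward()

print(vals.grad)  # gradients are present: one per stored value
```

So the values themselves are differentiable through `to_dense()`; it is only the row-indexing path in my original snippet that produces an empty gradient.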

Thanks for your help! If you need more info, let me know.