How to compute the backward pass with a sparse tensor


(yixian liu) #1

I want to add a sparse tensor and a dense tensor. But when I run backward(), I get the error "Function AddBackward0 returned an invalid gradient at index 1 - expected type torch.cuda.sparse.FloatTensor but got torch.cuda.FloatTensor". This error only happens on the GPU; when I run with CPU variables, everything works. So how can I combine a sparse tensor with a dense tensor in a graph and still compute the gradient?

The following is the test code.

one_hot_sparse_rep = torch.FloatTensor([[0, 0, 0], [9, 0, 10]]).cuda(3).to_sparse()
one_hot_sparse_rep.requires_grad = True
vocab_dist = torch.ones(one_hot_sparse_rep.size()).cuda(3)
vocab_dist.requires_grad = True
out = one_hot_sparse_rep + vocab_dist  # sparse + dense add triggers the error on backward()
out.norm().backward()
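One workaround (a sketch, not from the original post) is to keep the sparse tensor as the leaf but call `.to_dense()` on it before mixing it with dense tensors. `to_dense()` is differentiable, so gradients still flow back to the sparse leaf, and the add itself only ever sees dense operands. The CPU version below illustrates the idea; the same pattern should apply on the GPU:

```python
import torch

# Sparse leaf tensor, same values as in the original reproduction
# (CPU here so the sketch runs anywhere; add .cuda(3) for the GPU case).
one_hot_sparse_rep = torch.FloatTensor([[0, 0, 0], [9, 0, 10]]).to_sparse()
one_hot_sparse_rep.requires_grad = True

# Dense leaf tensor.
vocab_dist = torch.ones(one_hot_sparse_rep.size())
vocab_dist.requires_grad = True

# Densify before the add: autograd now only has to produce a dense
# gradient for AddBackward0, avoiding the sparse/dense mismatch.
out = one_hot_sparse_rep.to_dense() + vocab_dist
out.norm().backward()

print(vocab_dist.grad.shape)        # dense gradient for the dense leaf
print(one_hot_sparse_rep.grad)      # gradient flows back to the sparse leaf
```

The trade-off is that the intermediate tensor is materialized densely, which may cost memory if the sparse tensor is large.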