Handling of Sparse Tensors

Does PyTorch take sparsity into account when performing operations on sparse tensors?

I wrote a PyTorch script that creates two torch.sparse.FloatTensor objects at sparsity levels from 0% to 100% in 10% increments and multiplies them together using torch.sparse.mm().

import torch

cuda = torch.device("cuda")
# out1 stays sparse; out2 is densified, since torch.sparse.mm expects a
# sparse first operand and a dense second operand
out1 = torch.sparse.FloatTensor(indexTensor, valuesTensor).to(device=cuda)
out2 = torch.sparse.FloatTensor(indexTensor, valuesTensor).to_dense().to(device=cuda)
out2 = torch.transpose(out2, 0, 1)
result = torch.sparse.mm(out1, out2)
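
For reference, here is a minimal sketch of how such a sparsity sweep might be set up. The make_sparse helper, the 1024x1024 size, and the random index generation are illustrative choices of mine, not from the original script:

import torch

# Hypothetical helper: build a square COO tensor where roughly `sparsity`
# of the entries are zero. Duplicate random indices get merged by
# coalesce(), so the true density can come out slightly lower.
def make_sparse(size, sparsity, device):
    nnz = int(size * size * (1 - sparsity))
    indexTensor = torch.randint(0, size, (2, nnz), device=device)
    valuesTensor = torch.randn(nnz, device=device)
    return torch.sparse_coo_tensor(indexTensor, valuesTensor,
                                   (size, size)).coalesce()

cuda = torch.device("cuda")
for pct in range(0, 101, 10):            # 0%, 10%, ..., 100% sparse
    out1 = make_sparse(1024, pct / 100, cuda)
    out2 = make_sparse(1024, pct / 100, cuda).to_dense()
    out2 = torch.transpose(out2, 0, 1)
    result = torch.sparse.mm(out1, out2)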

According to Nvidia's profiler, the multiplication executed the same number of floating-point operations at every sparsity level, which suggests torch.sparse.mm performs no differently on sparse inputs than on non-sparse ones.

I also tried this with a regular (dense) tensor and got the same results.
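
As a cross-check on the profiler's FLOP counts, one could also time the sparse and dense multiplies directly. This is only a sketch: it assumes a CUDA device and reuses the hypothetical make_sparse helper from above.

import torch

def time_mm(fn, *args, iters=50):
    # GPU-side timing via CUDA events (milliseconds per call)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    fn(*args)                      # warm-up
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn(*args)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

cuda = torch.device("cuda")
sparse = make_sparse(1024, 0.9, cuda)      # 90% sparse
other = torch.randn(1024, 1024, device=cuda)

print("torch.sparse.mm:", time_mm(torch.sparse.mm, sparse, other), "ms")
print("torch.mm:       ", time_mm(torch.mm, sparse.to_dense(), other), "ms")

If torch.sparse.mm were exploiting sparsity, the first timing should shrink as the sparsity level rises while the second stays flat.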