Sparse multiplication slow?

Hello,
I’m working on a project where I have sparse x dense multiplications, where the sparse matrix is fixed (same case as in the post: Autograd for sparse matmul: getting either cuda memory leak or ‘buffers have already been freed’ error).

I used the hack from that post to be able to compute the backward pass.
However, when I ran a test to see how much time I would gain, the outcome was pretty disappointing.

Inputs: identity matrix (5484 x 5484), which is a very sparse matrix, and a random matrix (5484, 256), looping 100 times:
sparse: 1.2 sec
dense: 1.6 sec

Is this normal? It would mean that the sparse representation is mostly useful for storage.

Here is the entire code:
import torch
import torch.nn as nn
from torch.autograd import Variable
import time

class LeftMMFixed(torch.autograd.Function):
    """
    Implementation of matrix multiplication of a Sparse Variable with a Dense Variable, returning a Dense one.
    This is added because there's no autograd for sparse yet. No gradient computed on the sparse weights.
    """

    def __init__(self):
        super(LeftMMFixed, self).__init__()
        self.sparse_weights = None

    def forward(self, sparse_weights, x):
        if self.sparse_weights is None:
            self.sparse_weights = sparse_weights
        return torch.mm(self.sparse_weights, x)

    def backward(self, grad_output):
        sparse_weights = self.sparse_weights
        return None, torch.mm(sparse_weights.t(), grad_output)

class FixedSparseLinMod(nn.Module):
    """
    A module that takes a sparse matrix and does the left matrix multiplication."""
    def __init__(self, sparse_mat):
        super(FixedSparseLinMod, self).__init__()
        inds, vals, dims = sparse_mat
        i = torch.LongTensor(inds)
        v = torch.FloatTensor(vals)
        s = torch.Size(dims)
        self.sparse_mat = nn.Parameter(torch.sparse.FloatTensor(i, v, s).cuda(), requires_grad=False)
        self.mm = LeftMMFixed()

    def forward(self, x):
        return self.mm(self.sparse_mat, x.t()).t()

if __name__=="__main__":
    N_sensor=5484
    N=100

    indices = [  list(range(N_sensor)), list(range(N_sensor)) ]
    values = [1]*N_sensor
    size = (N_sensor, N_sensor)

    adj_dense=Variable(torch.eye(N_sensor)).cuda()
    adj_sbmm = FixedSparseLinMod([indices, values, size])

    x = Variable(torch.randn(256, N_sensor)).cuda()
    y = x.t()

    torch.cuda.synchronize()
    t1=time.time()

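    # time the sparse multiplication (LeftMMFixed hack)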
    for i in range(N):
        out = adj_sbmm(x)

    torch.cuda.synchronize()
    t2=time.time()

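    # time the plain dense multiplication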
    for i in range(N):
        out = torch.matmul(adj_dense, y)

    torch.cuda.synchronize()
    t3=time.time()

    print("sparse :{}".format(t2-t1))
    print("dense : {}".format(t3-t2))

Sparse Tensors are useful when the dimensions you are dealing with are much larger than say 5k. More like 50k would be useful.

Hi, thanks for your answer.
To avoid any misunderstanding, are you talking about 50k x 50k matrices, i.e. 2.5 billion entries?
Because here, the sparse matrix is 5k x 5k = 25 million entries.

Only one of the dimensions is sparse, so I mean one of the dimensions, i.e. for example 50k x 256.

With the identity matrix, for instance, both dimensions are sparse.
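
For anyone who wants to test this at a larger scale, here is a minimal sketch of the comparison being suggested (the sizes n, nnz and batch, the bench helper and the ~0.025% density are illustrative choices, not from the thread; it reuses the same torch.sparse.FloatTensor and torch.mm calls as the code above). At 50k x 50k the dense copy alone would need about 10 GB, so the sketch starts at 20000 and you can raise n as far as GPU memory allows:

import time
import torch

n = 20000        # one large dimension; push toward 50k if the dense copy still fits in memory
nnz = 5 * n      # ~5 non-zeros per row on average (~0.025% density)
batch = 256

# random sparse matrix in COO format (coalesce merges any duplicate indices)
i = torch.stack([torch.randint(0, n, (nnz,)), torch.randint(0, n, (nnz,))])
v = torch.randn(nnz)
sparse_mat = torch.sparse.FloatTensor(i, v, torch.Size((n, n))).coalesce().cuda()
dense_mat = sparse_mat.to_dense()    # ~1.6 GB at n = 20000
x = torch.randn(n, batch).cuda()

def bench(fn, reps=100):
    fn()                             # warm-up so allocation is not included in the timing
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(reps):
        fn()
    torch.cuda.synchronize()
    return time.time() - t0

print("sparse: {:.3f} s".format(bench(lambda: torch.mm(sparse_mat, x))))
print("dense : {:.3f} s".format(bench(lambda: torch.mm(dense_mat, x))))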