Sparse x Dense -> Dense matrix multiplication

(Oleksandr Shchur) #1

Hi everyone,

I am trying to implement a graph convolutional layer (as described in Semi-Supervised Classification with Graph Convolutional Networks) in PyTorch.

For this I need to perform multiplication of the dense feature matrix X by a sparse adjacency matrix A (sparse x dense -> dense). I don’t need to compute the gradients with respect to the sparse matrix A.
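For concreteness, here is a minimal sketch of the operation in question, with a made-up toy graph and sizes, using the newer torch.sparse_coo_tensor constructor for the COO-format adjacency matrix:

```python
import torch

N, size_in = 4, 3

# Hypothetical tiny graph with edges 0->1 and 2->3, as a COO sparse matrix
idx = torch.LongTensor([[0, 2], [1, 3]])
val = torch.FloatTensor([1.0, 1.0])
A = torch.sparse_coo_tensor(idx, val, (N, N))  # sparse adjacency, [N x N]
X = torch.randn(N, size_in)                    # dense features,   [N x size_in]

AX = torch.mm(A, X)  # sparse x dense -> dense, shape [N x size_in]
```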

As mentioned in this thread, torch.mm should work in this case; however, I get the following error:
TypeError: Type torch.sparse.FloatTensor doesn't implement stateless method addmm

class GraphConv(nn.Module):
    def __init__(self, size_in, size_out):
        super(GraphConv, self).__init__()
        self.W = nn.parameter.Parameter(torch.Tensor(size_in, size_out))
        self.b = nn.parameter.Parameter(torch.Tensor(size_out))

    def forward(self, X, A):
        return torch.mm(torch.mm(A, X), self.W) + self.b

A  # torch.sparse.FloatTensor, size [N x N]
X  # torch.FloatTensor, size [N x size_in]
A = torch.autograd.Variable(A, requires_grad=False) 
X = torch.autograd.Variable(X, requires_grad=False)
# If I omit the two lines above, I get an invalid argument error (torch.FloatTensor, Parameter) for torch.mm

gcn = GraphConv(X.size()[1], size_hidden)
gcn(X, A)  # error here

Is there something I am doing wrong, or is this functionality simply not present in PyTorch yet?

(Trevor Killeen) #2

Hi Oleksandr - it's a little hard to tell from your post which mm is actually triggering the error. Could you possibly provide a script that triggers the issue? We do support mm for sparse x dense, I believe.

(Oleksandr Shchur) #3

Here is a simple script that reproduces the error:

I get the following error when running this code

(Oleksandr Shchur) #4

Here is an even clearer example

(Trevor Killeen) #5

I think there are a few things here:

  1. I believe the issue with the first snippet is that, internally, calling mm on Variables dispatches to torch.addmm, and we don't have the proper function defined on sparse tensors. (We actually have this implemented, but I think it's named incorrectly.)
  2. I'm less certain about the nn.Parameter error; I'll have to let someone else answer that.

I will file an issue for #1.
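A common interim workaround, sketched here with a hypothetical SparseMM name and assuming gradients are only needed with respect to the dense input, is to route the product through a custom autograd Function:

```python
import torch

class SparseMM(torch.autograd.Function):
    """Sparse x dense matmul with a gradient only for the dense input."""

    @staticmethod
    def forward(ctx, sparse, dense):
        # Stash the sparse matrix for backward; we never differentiate w.r.t. it
        ctx.sparse = sparse
        return torch.mm(sparse, dense)

    @staticmethod
    def backward(ctx, grad_output):
        # d(A @ X)/dX contracted with grad_output is A^T @ grad_output;
        # None means no gradient for the sparse argument
        return None, torch.mm(ctx.sparse.t(), grad_output)

# Usage sketch inside a layer's forward: AX = SparseMM.apply(A, X)
```

With this, self.W and X receive gradients as usual while A is treated as a constant.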

(Oleksandr Shchur) #6

Thanks a lot, Trevor!

(Nitish Gupta) #7

Any updates on this?

(Oleksandr Shchur) #8

Here is the GitHub issue on this topic. Apparently, this functionality is not supported at the moment, so we should wait for the feature to be added.

(Nitish Gupta) #9

There is already a patch for this. You can use it in the meantime. I am using it currently and it is working fine.

(Shyam Upadhyay) #10

Have there been any updates on doing sparse x dense operations with Variables?

I can do the following,

import torch
from torch.autograd import Variable as V

i = torch.LongTensor([[0, 1, 1], [1, 1, 1]])
v = torch.FloatTensor([3, 4, 5])
m = torch.sparse.FloatTensor(i, v, torch.Size([2, 3]))
m2 = torch.randn(3, 2)
print(torch.mm(m, m2))

But adding this breaks it,

M = V(m)
M2 = V(m2)
print(torch.mm(M, M2))

Traceback (most recent call last):
  File "", line 11, in <module>
    print(torch.mm(M, M2))
RuntimeError: Expected object of type Variable[torch.sparse.FloatTensor] but found type Variable[torch.FloatTensor] for argument #1 'mat2'
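For readers on newer PyTorch versions: the Variable wrapper has since been merged into Tensor, and torch.sparse.mm supports sparse x dense with autograd on the dense side. A sketch reusing the i, v values from the example above, built with the newer torch.sparse_coo_tensor constructor:

```python
import torch

i = torch.LongTensor([[0, 1, 1], [1, 1, 1]])
v = torch.FloatTensor([3, 4, 5])
# .coalesce() sums the duplicate (1, 1) entries into a single value of 9
m = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()
m2 = torch.randn(3, 2, requires_grad=True)

out = torch.sparse.mm(m, m2)  # dense result, differentiable w.r.t. m2
out.sum().backward()          # m2.grad is m.t() @ ones_like(out)
```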