RuntimeError: Could not run 'aten::gt.Scalar' with arguments from the 'SparseCPUTensorId' backend. 'aten::gt.Scalar' is only available for these backends: [CPUTensorId, QuantizedCPUTensorId, VariableTensorId]

I have the following problem when using the Graph Attention Networks (GAT) framework.
RuntimeError: Could not run 'aten::gt.Scalar' with arguments from the 'SparseCPUTensorId' backend. 'aten::gt.Scalar' is only available for these backends: [CPUTensorId, QuantizedCPUTensorId, VariableTensorId].

This is the code.
def forward(self, input, adj):
    # linear transform of node features: (N, in_features) -> (N, out_features)
    h = torch.mm(input, self.W)
    N = h.size()[0]
    # all N*N pairwise concatenations [h_i || h_j] for the attention mechanism
    a_input = torch.cat([h.repeat(1, N).view(N * N, -1), h.repeat(N, 1)], dim=1).view(N, -1, 2 * self.out_features)
    # unnormalized attention coefficients e_ij
    e = self.leakyrelu(torch.matmul(a_input, self.a).squeeze(2))
    # mask non-edges with a large negative value so softmax drives them to ~0
    zero_vec = -9e15 * torch.ones_like(e)
    attention = torch.where(adj > 0, e, zero_vec)

This is the error shown in PyCharm:
attention = torch.where(adj > 0, e, zero_vec)
File "/home/Scse09064/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/tensor.py", line 28, in wrapped
return f(*args, **kwargs)
RuntimeError: Could not run 'aten::gt.Scalar' with arguments from the 'SparseCPUTensorId' backend. 'aten::gt.Scalar' is only available for these backends: [CPUTensorId, QuantizedCPUTensorId, VariableTensorId].

I hope someone can help me solve this problem; I would be very grateful!

Hi,

As mentioned in your other post, you cannot do adj > 0 when adj is a sparse tensor, as comparison ops are not implemented for the sparse backend, I'm afraid :confused:
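
For reference, a minimal repro of the error with a hypothetical toy COO tensor (on the PyTorch version shown in the traceback above):

import torch

# hypothetical 2 x 2 sparse adjacency in COO format
indices = torch.tensor([[0, 1], [0, 1]])
values = torch.tensor([0.5, 0.2])
adj = torch.sparse_coo_tensor(indices, values, (2, 2))

adj > 0  # raises: Could not run 'aten::gt.Scalar' ... from the 'SparseCPUTensorId' backend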

Hi, thank you. It's really a problem with adj.
This is what I need:
adj=tensor([[0.1667, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.0000],
[0.0000, 0.5000, 0.0000, …, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.2000, …, 0.0000, 0.0000, 0.0000],
…,
[0.0000, 0.0000, 0.0000, …, 0.2000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, …, 0.0000, 0.2000, 0.0000],
[0.0000, 0.0000, 0.0000, …, 0.0000, 0.0000, 0.2500]])

This is what I have at present:
adj=tensor(indices=tensor([[ 0, 8, 20, …, 2983, 2991, 3024],
[ 0, 0, 0, …, 3024, 3024, 3024]]),
values=tensor([0.0556, 0.0500, 0.0556, …, 0.0588, 0.1250, 0.1111]),
size=(3025, 3025), nnz=29281, layout=torch.sparse_coo)

Now I'm trying to figure out how to convert the latter into the former.

You can call .to_dense() on the sparse Tensor to convert it into a full dense Tensor.
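
For example, in the forward method from the original post, the comparison line would become (a sketch, assuming adj fits in memory once densified):

attention = torch.where(adj.to_dense() > 0, e, zero_vec)

This converts the sparse COO adjacency into a regular dense tensor, on which the > operator (aten::gt.Scalar) is implemented.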

It was a problem with the sparse tensor, and after I used .to_dense(), the problem was solved. Thank you very much! :stuck_out_tongue:

Where is .to_dense() added? I have the same problem as you :sob:

PyTorch 1.0.0 already seems to support the to_dense operation, as seen in the docs.

What if you need the sparse tensor because of memory constraints? Isn't that the whole point of sparse Tensors?

I got the same problem. I'm facing a memory constraint when using the dense version.
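
One way to avoid densifying adj at all is to work with its stored entries directly. This is a sketch, not the fix used above, and it assumes adj is a COO sparse tensor; note that in the GAT snippet earlier in the thread, e is already a dense N x N tensor, so this only removes the dense copy of adj:

import torch

# assumes adj is a torch.sparse_coo_tensor and e is the dense (N, N) logits matrix
adj = adj.coalesce()        # deduplicate indices/values
idx = adj.indices()         # shape (2, nnz): row/col of each stored edge
edge_e = e[idx[0], idx[1]]  # gather logits only at existing edges, O(nnz) memory
attention = torch.sparse_coo_tensor(idx, edge_e, adj.size())

In recent PyTorch versions, torch.sparse.softmax(attention, dim=1) can then normalize the stored entries per row without ever materializing the full N x N matrix.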