Multiply sparse tensor with dense tensor on GPU

I have the following implementation of page rank using sparse tensors:

# `idx`, `self.d`, and `self.num_iter` are defined elsewhere
i = torch.LongTensor(idx)
values = torch.FloatTensor([1] * len(idx))
M = torch.sparse.FloatTensor(i.t(), values, torch.Size([4847571, 4847571]))
N = M.shape[1]
v = torch.rand(N, 1).float()
values = torch.FloatTensor([(1 - self.d) / N] * len(idx))
temp = torch.sparse.FloatTensor(i.t(), values, torch.Size([4847571, 4847571]))
if torch.cuda.is_available():
    v = v.cuda()
    M = M.cuda()
    temp = temp.cuda()

v = v / torch.norm(v, 1)
M_hat = self.d * M + temp
for _ in range(self.num_iter):  # don't shadow the index tensor `i`
    v = torch.mm(M_hat, v)

On CPU everything runs fine. On GPU I am getting the following error:

    v = torch.mm(M_hat, v)
RuntimeError: sub_iter.strides(0)[0] == 0 INTERNAL ASSERT FAILED at /pytorch/aten/src/ATen/native/cuda/Reduce.cuh:706, please report a bug to PyTorch.

Is this a known issue? Should I do something differently?

This is my configuration:

Collecting environment information...
PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: Tesla P100-PCIE-16GB
Nvidia driver version: 440.82
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.18.3
[pip3] torch==1.5.0

Thanks very much for the help!

Could you try to use torch.sparse.mm, which is the equivalent of torch.mm for a sparse and dense matrix?
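For reference, a minimal sketch of the sparse-dense product with `torch.sparse.mm` on a tiny matrix (the 3×3 indices and values below are made up purely for illustration):

```python
import torch

# Tiny 3x3 sparse matrix standing in for M_hat; indices/values are illustrative.
i = torch.LongTensor([[0, 1, 2], [1, 2, 0]])
values = torch.FloatTensor([1.0, 1.0, 1.0])
M_hat = torch.sparse_coo_tensor(i, values, (3, 3))

v = torch.rand(3, 1)
v = v / torch.norm(v, 1)  # normalize to unit L1 norm

# sparse @ dense: use torch.sparse.mm instead of torch.mm
for _ in range(10):
    v = torch.sparse.mm(M_hat, v)
```

Moving `M_hat` and `v` to the GPU with `.cuda()` before the loop should work the same way.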

Also, could you post all shapes and values of the tensors so that we can reproduce this error?
The error message isn’t really helpful in this case.

Yes, sorry about that, but it's hard to share the weights since they were parsed from this graph. The shapes can be deduced from the code, though: M and M_hat are (4847571, 4847571) and v is (4847571, 1).

Actually, the error is the same as this one, and it has been fixed in the current master.

Thanks for taking the time though!