Torch.mul with sparse and dense arguments

Consider the following code snippet:

import torch


print(torch.__version__)

u = torch.tensor([0, 1, 0, 2, 0])

v = torch.mul(u.unsqueeze(-1), u.unsqueeze(0))
print(v)

w = torch.mul(u.unsqueeze(-1).to_sparse_coo(), u.unsqueeze(0))
print(w)
print(w.to_dense())
print(w._nnz())

The output for me is as follows:

2.1.1
tensor([[0, 0, 0, 0, 0],
        [0, 1, 0, 2, 0],
        [0, 0, 0, 0, 0],
        [0, 2, 0, 4, 0],
        [0, 0, 0, 0, 0]])
tensor(indices=tensor([[1, 3],
                       [0, 0]]),
       values=tensor([0, 0]),
       size=(5, 5), nnz=2, layout=torch.sparse_coo)
tensor([[0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]])
2

Is this expected behaviour? It seems rather like a bug: the sparse result has nnz=2, but both stored values are zero and the correct products (1, 2, 2, 4) are lost entirely.

Hi Maxim!

This does look like a bug to me.

As near as I can tell, broadcasting of sparse tensors is not supported,
although I can’t find much in the way of documentation or discussion
of this.

Here is a tweaked version of your example code run on two different
versions of pytorch:

Version 2.1.1:

>>> import torch
>>> print (torch.__version__)
2.1.1
>>>
>>> u = torch.tensor ([0, 1, 0, 2, 0])
>>>
>>> torch.mul (u.unsqueeze(-1), u.unsqueeze(0))                                 # works
tensor([[0, 0, 0, 0, 0],
        [0, 1, 0, 2, 0],
        [0, 0, 0, 0, 0],
        [0, 2, 0, 4, 0],
        [0, 0, 0, 0, 0]])
>>> torch.mul (u.unsqueeze(-1).expand (5, 5).to_sparse_coo(), u.unsqueeze(0))   # works on 2.1.1
tensor(indices=tensor([[1, 1, 1, 1, 1, 3, 3, 3, 3, 3],
                       [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]]),
       values=tensor([0, 1, 0, 2, 0, 0, 2, 0, 4, 0]),
       size=(5, 5), nnz=10, layout=torch.sparse_coo)
>>> torch.mul (u.unsqueeze(-1).to_sparse_coo(), u.unsqueeze(0).expand (5, 5))   # error or incorrect result
tensor(indices=tensor([[1, 3],
                       [0, 0]]),
       values=tensor([0, 0]),
       size=(5, 5), nnz=2, layout=torch.sparse_coo)
>>>
>>> u.unsqueeze(-1).to_sparse_coo().expand (5, 5)                               # can't expand sparse tensor
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: expand is unsupported for Sparse tensors

And version 1.11.0:

>>> import torch
>>> print (torch.__version__)
1.11.0
>>>
>>> u = torch.tensor ([0, 1, 0, 2, 0])
>>>
>>> torch.mul (u.unsqueeze(-1), u.unsqueeze(0))                                 # works
tensor([[0, 0, 0, 0, 0],
        [0, 1, 0, 2, 0],
        [0, 0, 0, 0, 0],
        [0, 2, 0, 4, 0],
        [0, 0, 0, 0, 0]])
>>> torch.mul (u.unsqueeze(-1).expand (5, 5).to_sparse_coo(), u.unsqueeze(0))   # works on 2.1.1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: mul operands have incompatible sizes
>>> torch.mul (u.unsqueeze(-1).to_sparse_coo(), u.unsqueeze(0).expand (5, 5))   # error or incorrect result
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: mul operands have incompatible sizes
>>>
>>> u.unsqueeze(-1).to_sparse_coo().expand (5, 5)                               # can't expand sparse tensor
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: sparse tensors do not have strides

I reproduce the error you see on 2.1.1, while pytorch flags the unsupported
operation (albeit somewhat opaquely) on 1.11.0.

Here’s a related github issue:

Note the following statement:

torch.mul now supports broadcasting over dense dimensions.

So it looks like sparse-dense multiplication can broadcast one direction
but not the other on 2.1.1 (as illustrated in the sample code I posted).

Note that expand() is sort of like broadcasting, and you can’t expand()
sparse tensors.
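Until sparse broadcasting is supported in both directions, a workaround (my own sketch, memory-hungry but safe) is to get the operands to the full common shape before the sparse tensor enters the picture: either expand() the tensor to the full shape while it is still dense and only then convert it to sparse, or, if you are handed an already-sparse tensor, densify it, multiply, and re-sparsify:

```python
import torch

u = torch.tensor([0, 1, 0, 2, 0])

# Dense reference result (ordinary dense-dense broadcasting).
expected = torch.mul(u.unsqueeze(-1), u.unsqueeze(0))

# Workaround 1: expand to the full (5, 5) shape while still dense,
# then convert to sparse -- mul() no longer needs to broadcast.
w1 = torch.mul(u.unsqueeze(-1).expand(5, 5).to_sparse_coo(), u.unsqueeze(0))
assert torch.equal(w1.to_dense(), expected)

# Workaround 2: if the tensor is already sparse, densify it first,
# multiply with normal dense broadcasting, then re-sparsify.
s = u.unsqueeze(-1).to_sparse_coo()
w2 = torch.mul(s.to_dense(), u.unsqueeze(0)).to_sparse_coo()
assert torch.equal(w2.to_dense(), expected)
```

Workaround 2 of course materializes the full dense tensor, so it defeats the purpose of sparsity for large tensors, but it gives correct results on 2.1.1.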

I think it would be appropriate to file a github issue for this bug you’ve found.

Best.

K. Frank

Hi, thanks for the response. The statement you’ve cited makes sense and clarifies the incident. It all feels like a gap in the documentation, which is missing details on the supported argument types in this particular case. The PyTorch documentation provides a list of functions that support COO tensors, and torch.mul is among them; however, apparently not in the case where broadcasting is needed: torch.sparse — PyTorch 2.1 documentation.
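For what it’s worth, a quick sanity check (my own sketch, not taken from the docs) suggests the documented sparse-dense torch.mul support does hold when no broadcasting is required, i.e. when both operands already have the same shape:

```python
import torch

# Same-shaped sparse * dense multiplication, no broadcasting involved.
a = torch.tensor([[0, 1], [2, 0]]).to_sparse_coo()
b = torch.tensor([[3, 4], [5, 6]])

r = torch.mul(a, b)

# Element-wise product, computed via the equivalent dense-dense mul.
assert torch.equal(r.to_dense(), torch.mul(a.to_dense(), b))
```

So the pitfall really does seem confined to the broadcasting case discussed above.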