Loss for sparse tensors

Hello, I would like to create a simple loss function for two sparse tensors:

def criterion_sparse(x, y):
    return torch.sparse.sum(torch.pow(torch.subtract(x, y).coalesce(), 2))

I get the error:
NotImplementedError: Could not run 'aten::is_coalesced' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend.

even though all of the functions involved are listed as compatible with sparse tensors. What can I do?

A broader question: there is basically nothing implemented in the torch.sparse module. There are no activation functions, no loss functions, no loss or linear layers, and a broad range of autograd-supporting PyTorch functions lacks sparse support. Does it make sense to work with torch.sparse at all? Is there another sparse framework for PyTorch, or do we have to stick with TensorFlow?

Hi Max!

What version of pytorch are you using? I cannot reproduce your specific
issue on 1.12.0:

>>> import torch
>>> torch.__version__
'1.12.0'
>>> _ = torch.manual_seed(2022)
>>> x = torch.randn(3, 5).to_sparse().requires_grad_(True)
>>> y = torch.randn(3, 5).to_sparse().requires_grad_(True)
>>> torch.sparse.sum(torch.pow(torch.subtract(x, y).coalesce(), 2))
tensor(29.7927, grad_fn=<SumBackward0>)

torch.sparse is a work in progress with a number of things that are
not (yet) implemented (although torch.sparse does more than your
blanket statement would seem to suggest).

In general, I would not expect torch.sparse to work on “real”
neural-network problems. If your use case might benefit from
torch.sparse, it could be worth trying, but, as you note, many
features won’t work.
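One pattern that does tend to work is to keep only your input sparse and go dense at the first matrix multiply, for example with torch.sparse.mm(), after which the ordinary dense activations, layers, and losses all apply. A minimal sketch (the shapes and names here are made up for illustration):

import torch

# hypothetical "linear layer" with a sparse input and dense parameters
weight = torch.randn(5, 4, requires_grad=True)
bias = torch.zeros(4, requires_grad=True)

x = torch.randn(3, 5).to_sparse()         # sparse (3, 5) input batch
out = torch.sparse.mm(x, weight) + bias   # dense (3, 4) result
loss = out.pow(2).mean()                  # any dense loss now applies
loss.backward()                           # gradients flow to weight and bias

Whether this buys you anything depends on how sparse your data really is; for most networks the dense path remains the practical choice.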

Best.

K. Frank