# Logarithm of a sparse tensor

Hello,
I am currently building a layered version of probabilistic circuits, and for that I have to represent weighted edges as a tensor. To reduce memory consumption, I would like to use a sparse tensor, but I find sparse tensors very difficult to work with. For instance, the logarithm of a sparse tensor is not supported. Is there any way I can access/update the elements of a sparse tensor conveniently?

I already looked into the torch_sparse package, but didn’t find what I was looking for there.

Hi Tom!

Indeed, `.log()` is not implemented for sparse tensors.

Note that `.log()` does not map `0.0` to `0.0` (it maps it to `-inf`), so it
doesn’t preserve a tensor’s sparsity. Not implementing it for sparse
tensors would therefore seem to be a legitimate design choice.

It appears (by examination of a handful of such operations) that
element-wise operations that do not map `0.0` to `0.0` are left
unimplemented for sparse tensors. For example, `.exp()` and
`.cos()` are not implemented, while `.sin()` and `.sqrt()` are.
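A quick check of that pattern (a sketch; the exact error type and message may vary across PyTorch versions):

```python
import torch

s = torch.tensor([0.0, 0.5, 0.0, 2.0]).to_sparse()

# sin(0.0) == 0.0, so sparsity is preserved and the op is implemented:
print(s.sin().to_dense())

# log(0.0) == -inf, so the op is left unimplemented for sparse layouts:
try:
    s.log()
except RuntimeError as e:  # NotImplementedError is a RuntimeError subclass
    print("log() raises:", type(e).__name__)
```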

Best.

K. Frank

Hey Frank, thanks for the answer, I think you are right!
Is there an efficient way to do it anyway? In my case, the set of elements present in the sparse tensor will never change. I already figured out that one can modify the values by taking the output of `.values()` and constructing a new sparse tensor from it. However, I believe this is fairly inefficient. Do you know another way?
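For reference, the rebuild approach I mean looks roughly like this (a sketch; `weights` stands in for my edge-weight tensor, whose explicit values are all positive):

```python
import torch

# Rebuild approach: construct a new sparse tensor from transformed values.
weights = torch.tensor([0.0, 1.0, 0.0, 4.0]).to_sparse().coalesce()
log_weights = torch.sparse_coo_tensor(
    weights.indices(),
    weights.values().log(),  # log of the explicit (stored) values only
    weights.size(),
)
print(log_weights.to_dense())  # structural zeros stay 0.0
```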

Best,
Tom

Hi Tom!

Well, if you say so.

But what, logically, do you want the semantics of `my_sparse.log()`
to be? `log (0.0)` is not `0.0`, so how would it make sense for it to
stay zero?

Or, if you want to say that `-inf` somehow doesn’t count, what would
you want the semantics of `my_sparse.cos()` to be?

You can modify `.values()` in place. Consider:

```
>>> torch.__version__
'2.3.1'
>>> my_dense = torch.arange (-1.0, 1.1, 0.5)
>>> my_dense.cos()
tensor([0.5403, 0.8776, 1.0000, 0.8776, 0.5403])
>>> my_sparse = my_dense.to_sparse()
>>> my_sparse
tensor(indices=tensor([[0, 1, 3, 4]]),
       values=tensor([-1.0000, -0.5000,  0.5000,  1.0000]),
       size=(5,), nnz=4, layout=torch.sparse_coo)
>>> my_sparse.values().cos_()
tensor([0.5403, 0.8776, 0.8776, 0.5403])
>>> my_sparse.to_dense()
tensor([0.5403, 0.8776, 0.0000, 0.8776, 0.5403])
```
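Since your actual use case is the logarithm, the same in-place trick works with `log_()` (a sketch; `w` is a made-up weight tensor whose explicit values are all positive, so no `-inf` appears):

```python
import torch

w = torch.tensor([0.0, 0.5, 0.0, 2.0]).to_sparse()
w.values().log_()    # in-place log on the stored (nonzero) values only
print(w.to_dense())  # structural zeros stay 0.0
```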

Best.

K. Frank

Thanks, I wasn’t aware of the in-place version of the API. This solved many of my problems.