Hello,
I am currently building a layered version of probabilistic circuits, and for that I have to represent weighted edges as a tensor. To reduce memory consumption I would like to use a sparse tensor, but I find sparse tensors very difficult to work with. For instance, the logarithm of a sparse tensor is not supported. Is there any way I can access/update the elements of a sparse tensor in a nice way?
I already read into the torch_sparse package, but didn't find what I was looking for there.
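A minimal example of what fails for me (a toy sparse COO tensor of edge weights; the exact error message may differ across PyTorch versions):

```python
import torch

# weighted edges stored as a sparse COO tensor (toy example)
indices = torch.tensor([[0, 1], [1, 0]])
weights = torch.tensor([0.5, 0.25])
w = torch.sparse_coo_tensor(indices, weights, (2, 2))

w.log()  # raises a RuntimeError: log is not implemented for the sparse backend
```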
Indeed, .log() is not implemented for sparse tensors.
Note that .log() does not map 0.0 to 0.0 (it maps it to -inf), so it
doesn’t preserve a tensor’s sparsity. Not implementing it for sparse
tensors would therefore seem to be a legitimate design choice.
It appears (by examination of a handful of such operations) that
element-wise operations that do not map 0.0 to 0.0 are left
unimplemented for sparse tensors. For example, .exp() and .cos() are not implemented, while .sin() and .sqrt() are.
Hey Frank, thanks for the answer! I think you are right.
Is there an efficient way to do it anyway? In my case, the set of elements in the sparse tensor will never change. I already figured out that one can modify the values by taking the output of .values() and constructing a new sparse tensor from it. However, I believe this is fairly inefficient. Do you know another way?
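For reference, this is roughly what I am doing now, plus an in-place variant I have seen suggested (a minimal sketch; the ._values() variant relies on an internal method that I assume returns the underlying values storage, so it may break in future releases):

```python
import torch

# fixed sparsity pattern: the indices never change, only the values do
i = torch.tensor([[0, 1], [2, 0]])
w = torch.sparse_coo_tensor(i, torch.tensor([0.5, 0.25]), (2, 3)).coalesce()

# current approach: rebuild a sparse tensor around the transformed values
log_w = torch.sparse_coo_tensor(w.indices(), w.values().log(), w.shape)

# in-place variant: ._values() is an internal method that (in the
# versions I tried) shares storage with the sparse tensor, so in-place
# ops mutate it directly, without rebuilding indices
w._values().log_()
```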