Different behaviour between sparse.FloatTensor and fancy indexing of regular Tensor

The two tensors printed at the end of this code should be identical, but they are not:

import torch

# some random values and indices
m_size = 2
indices = torch.randint(low=0, high=m_size, size=(2, m_size + 100))  # plenty of indices
values = torch.rand(m_size + 100)  # plenty of random values

# build the sparse tensor and densify it
sparse_tensor = torch.sparse.FloatTensor(indices, values, (m_size, m_size))
sparse_tensor = sparse_tensor.to_dense()

# build the regular tensor via fancy indexing (intended to add values at the given indices)
regular_tensor = torch.zeros((m_size, m_size))
regular_tensor[indices[0].tolist(), indices[1].tolist()] = (
    regular_tensor[indices[0].tolist(), indices[1].tolist()] + values
)

# compare the two tensors
print((sparse_tensor != regular_tensor).sum())  # number of differences
print(sparse_tensor)
print(regular_tensor)

If indices contains duplicates, the sparse.FloatTensor sums the corresponding values. Although surprising, this is actually what I want, at least in my case. However, with the sparse.FloatTensor I cannot get a gradient through the to_dense() call later, so I cannot use it.
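For illustration, here is a minimal sketch of that summing behaviour with hand-picked duplicate indices (it uses torch.sparse_coo_tensor, the modern spelling of the sparse.FloatTensor constructor; the index pair (0, 0) appears twice, so its values are summed when densifying):

import torch

# the index pair (0, 0) appears twice, (1, 1) once
indices = torch.tensor([[0, 0, 1],
                        [0, 0, 1]])
values = torch.tensor([1.0, 2.0, 5.0])

sparse = torch.sparse_coo_tensor(indices, values, (2, 2))
print(sparse.to_dense())
# tensor([[3., 0.],
#         [0., 5.]])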

So my intention is to build a regular Tensor in the more traditional way, using fancy indexing. However, I could not get it to behave the same as the sparse.FloatTensor: the resulting matrices look different.

What are my options?

Hi,

For indexing, you don't need lists, so you can remove the tolist() calls.
The problem here is that with duplicate indices you assign to the same elements multiple times at once, so the writes race and only one of them survives.
You should use index_put_() with accumulate=True to get the summing behaviour you want:

regular_tensor.index_put_((indices[0], indices[1]), values, accumulate=True)
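Applied to the snippet from the question, a minimal sketch (torch.sparse_coo_tensor stands in for the legacy sparse.FloatTensor constructor, and allclose is used because the two paths may sum duplicates in a different order):

import torch

m_size = 2
indices = torch.randint(low=0, high=m_size, size=(2, m_size + 100))
values = torch.rand(m_size + 100)

sparse_tensor = torch.sparse_coo_tensor(indices, values, (m_size, m_size)).to_dense()

regular_tensor = torch.zeros((m_size, m_size))
# duplicate indices are accumulated instead of overwritten
regular_tensor.index_put_((indices[0], indices[1]), values, accumulate=True)

print(torch.allclose(sparse_tensor, regular_tensor))  # True

index_put_() with accumulate=True is also differentiable with respect to values, so it should sidestep the to_dense() gradient issue mentioned above.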

You are doing God's work. Thank you!