On the latest version of PyTorch you can use Tensor.put_ with accumulate=True:
http://pytorch.org/docs/master/tensors.html#torch.Tensor.put_
You will need to translate your indices into linear indices, though. Most other in-place indexing functions (like index_add_) have undefined behavior for duplicate indices.
For example:
l = torch.autograd.Variable(torch.LongTensor(10, 10).zero_())
m1 = torch.LongTensor(1, 255, 255).random_(0, 10)  # row indices
m2 = torch.LongTensor(1, 255, 255).random_(0, 10)  # column indices
# translate (row, col) pairs into linear indices: row * num_cols + col
m3 = torch.autograd.Variable(m1 * 10 + m2)
# make ones the same size as the index tensor
values = torch.autograd.Variable(torch.LongTensor([1])).expand_as(m3)
# with accumulate=True, values at duplicate indices are summed
l.put_(m3, values, accumulate=True)
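As a side note, on newer PyTorch releases (where Variable wrapping is no longer needed) you can skip the linear-index translation entirely: index_put_ with accumulate=True accepts a tuple of per-dimension index tensors directly. A minimal sketch, assuming such a release:

```python
import torch

# Same setup as above, but with plain tensors and per-dimension indices.
l = torch.zeros(10, 10, dtype=torch.long)
m1 = torch.randint(0, 10, (1, 255, 255))  # row indices
m2 = torch.randint(0, 10, (1, 255, 255))  # column indices
values = torch.ones_like(m1)

# Equivalent to l[m1, m2] += values, with duplicates accumulated.
l.index_put_((m1, m2), values, accumulate=True)
```

Since every one of the 255 * 255 index pairs contributes a 1, the entries of l sum to 65025 regardless of how the duplicates fall.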