Modify array with list of indices

Suppose I have a list of indices and wish to modify an existing array with this list. Currently the only way I can do this is with a for loop, as follows. I'm just wondering if there is a faster/more efficient way.

import torch

torch.manual_seed(0)
a = torch.randn(5, 3)
idx = torch.tensor([[1, 2], [3, 2]], dtype=torch.long)
for i, j in idx:
    a[i, j] = 1

I initially assumed that gather or index_select would go some way towards answering this question, but looking at the documentation they don't seem to be the answer.

In my particular case, a is a 5-dimensional tensor and idx is an N×5 tensor of indices. So the output (after subscripting with something like a[idx]) I'd expect is an (N,)-shaped vector.

This (unanswered) question is similar: More efficient way of indexing to avoid loop

(This is a cross-post from Stack Overflow.)

This should work:

a[idx[:, 0], idx[:, 1]] = 1.
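
For the example in the question, this sets a[1, 2] and a[3, 2] to 1 in a single call, equivalent to the loop above.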

@sachinruk Thanks for mentioning this, and I would like to know how to extend it to higher dimensions.
If we have a 4-D tensor of [batch, channel, height, width] and a set of [x, y] coordinates with shape [batch, num_points, 2], how can we select from this 4-D tensor without a loop?

Here is my implementation with loop:

Suppose the 4-D feature map is [10, 256, 64, 64] and the coordinates are [10, 68, 2] (for each sample in the batch, there are 68 points that we want to select from the feature map).

coord_features = torch.zeros(10, 68, 256)
feature_map = feature_map.transpose(1, 2).transpose(2, 3)  # reshape to [10, 64, 64, 256]
for i in range(coords.shape[0]):  # loop through each sample in the batch
    for j in range(coords.shape[1]):  # loop through each point
        # select the coordinate on the feature map
        coord_features[i][j] = feature_map[i][coords[i][j][1].long()][coords[i][j][0].long()]


@ptrblck’s answer does work, but it turns out this was the answer I was looking for, as shown on SO:
a[idx.t().chunk(chunks=2, dim=0)] = 1
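
For the N×5 case from the original question, the same trick extends directly with chunks=5. Here is a minimal sketch (a and idx are just hypothetical placeholders):

import torch

a = torch.randn(4, 4, 4, 4, 4)     # a 5-dimensional tensor
idx = torch.randint(0, 4, (7, 5))  # an N x 5 index tensor (here N = 7)

# idx.t() is 5 x N; chunk gives five 1 x N index tensors, one per dimension of a
vals = a[idx.t().chunk(chunks=5, dim=0)].squeeze(0)  # (N,)-shaped values
# in-place assignment works the same way
a[idx.t().chunk(chunks=5, dim=0)] = 1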

To do this with a batch dimension, you can just create index tensors for the other dimensions as needed.

For a simple example, let’s say we have a tensor data of grayscale images with shape (b, h, w) and a tensor coords with shape (b, 2) of pixel coordinates (one pixel per image in the batch) that we want to zero out. We can do this with:

bselect = torch.arange(data.size(0), dtype=torch.long)  # one batch index per image
data[bselect, coords[:, 0], coords[:, 1]] = 0

Your example is a little more complicated because you have multiple points per image, but you can just do something like bselect = torch.arange(batch, dtype=torch.long)[:, None].expand(batch, num_points).view(-1).

For the channel dimension, I think you can just use a single integer 0, but you may need to create a vector of zeros of the same length as bselect.
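
Putting these pieces together, here is a minimal sketch for the feature-map example above (the tensor names and shapes are the hypothetical ones from the post; it keeps all 256 channels with a full slice, as the loop version does):

import torch

batch, channels, height, width = 10, 256, 64, 64
num_points = 68

# hypothetical inputs matching the shapes in the post
feature_map = torch.randn(batch, channels, height, width)
coords = torch.randint(0, height, (batch, num_points, 2)).float()  # (x, y) per point

# one batch index per point: [0, 0, ..., 1, 1, ..., 9, 9, ...]
bselect = torch.arange(batch, dtype=torch.long)[:, None].expand(batch, num_points).reshape(-1)
x = coords[..., 0].reshape(-1).long()  # width indices
y = coords[..., 1].reshape(-1).long()  # height indices

# advanced indexing over batch/height/width; the full slice keeps all channels
coord_features = feature_map[bselect, :, y, x].view(batch, num_points, channels)
# coord_features has shape [10, 68, 256], matching the loop version above

Because the advanced indices are separated by the slice, the indexed dimensions end up first in the result with shape (batch * num_points, channels), hence the final view back to [batch, num_points, channels].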