Indexing with byte tensor into another tensor

So here I am wondering why I cannot write the following

labels = labels.byte()  # shape [20]
pred = pred.byte()      # shape [20]
print('labels', labels.shape)
print('pred', pred.shape)
true_positives = torch.zeros_like(labels)
true_positives[labels[pred] == 1] = 1

runtime error: the shape of the mask[19] at index 0 does not match the shape of the indexed tensor [20] at index 0

I am confused oO

If this is not the correct way to index into tensors, what is?

thank you


Try something like

...
idx = (labels[pred] == 1).nonzero().squeeze(1)
true_positives[idx] = 1
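A toy run of that suggestion (dummy 4-element tensors, using `.bool()` since indexing with uint8 tensors is deprecated in newer PyTorch versions) shows the catch: the indices it produces refer to the filtered tensor `labels[pred]`, not to the original `labels`, which is likely why the results look odd.

```python
import torch

labels = torch.tensor([0, 1, 1, 0]).bool()
pred = torch.tensor([0, 0, 1, 1]).bool()

filtered = labels[pred]                     # mask indexing: tensor([True, False])
idx = (filtered == 1).nonzero().squeeze(1)  # tensor([0]) -- an index into *filtered*
true_positives = torch.zeros(4, dtype=torch.long)
true_positives[idx] = 1
print(true_positives)  # tensor([1, 0, 0, 0]) -- but the true positive is at index 2
```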

That does not work correctly; it runs, but the results are odd.

I mean what the hell, why is this basic operation so hard to accomplish?

I actually didn’t get what you expect the results to be. I’m just giving you some tips to make it run, so you can debug if needed.

Both tensors are one-dimensional with 20 elements and contain either 1 or 0.

I want to index into labels with pred and check whether the entries of labels, at the indices where pred has ones, are 1 or 0.

I don’t know if I got it right, but maybe what you want to do is:

true_positives = torch.zeros_like(labels)
idx_ones_pred = (pred == 1).nonzero().squeeze(1)
# filter idx_ones_pred itself, so the resulting indices
# still refer to positions in the original tensors
idx_true_positives = idx_ones_pred[labels[idx_ones_pred] == 1]
true_positives[idx_true_positives] = 1
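A runnable sketch of that two-step filtering with dummy data; the key point is that the kept indices must refer to the original tensor, so it is `idx_ones_pred` itself that gets filtered:

```python
import torch

labels = torch.tensor([1, 1, 0, 0])
pred = torch.tensor([1, 0, 0, 1])

true_positives = torch.zeros_like(labels)
idx_ones_pred = (pred == 1).nonzero().squeeze(1)               # positions where pred == 1
idx_true_positives = idx_ones_pred[labels[idx_ones_pred] == 1]  # keep original positions
true_positives[idx_true_positives] = 1
print(true_positives)  # tensor([1, 0, 0, 0])
```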

thank you, I will try that. I cannot believe that this is so difficult to accomplish since it is such a common task.

label = torch.tensor([1, 1, 0, 0])
pred = torch.tensor([1, 0, 1, 0])
result = label[pred]

yields

result = tensor([1, 1, 1, 1])

why?

label[pred] = 1

does not work either

That’s how indexing works.
The values at pred[0] and pred[1] are both 1.
If you index something at position 0 and 1 (given by pred),
you will get a tensor containing (label[1], label[0], label[1], label[0]) = (1, 1, 1, 1)

label = torch.tensor([1, 1, 0, 0])
pred = torch.tensor([1, 0, 1, 0])
print(label[pred])  # tensor([1, 1, 1, 1])
print(label[torch.tensor([0, 1, 2, 3])])  # tensor([1, 1, 0, 0])
print(label[torch.tensor([3, 2, 1, 0])])  # tensor([0, 0, 1, 1])

Have a look at the numpy indexing docs for other examples.
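The behaviour you probably expected is boolean-mask indexing, which selects by position rather than gathering by value. A minimal sketch of the contrast (using `.bool()`, since uint8 masks are deprecated in newer PyTorch versions):

```python
import torch

label = torch.tensor([1, 1, 0, 0])
pred = torch.tensor([1, 0, 1, 0])

# integer tensor: gathers label at the index *values* stored in pred
print(label[pred])         # tensor([1, 1, 1, 1])

# boolean tensor: keeps the label entries where the mask is True
print(label[pred.bool()])  # tensor([1, 0])
```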

Ok, thank you. But then, what's the easiest way to select from label only the entries at which pred is 1 and leave the remaining entries untouched? index_select?

This should work:

label[(pred == 1).nonzero()] = 10
print(label)
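With the example tensors from above, this sets the entries of label wherever pred is 1 and leaves the rest untouched:

```python
import torch

label = torch.tensor([1, 1, 0, 0])
pred = torch.tensor([1, 0, 1, 0])

# nonzero() yields the positions where pred == 1; assignment broadcasts
label[(pred == 1).nonzero()] = 10
print(label)  # label is now [10, 1, 10, 0]
```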

Thank you. My intention was to find out the true positives, false positives, etc. and set entries in the associated tensors to one. I now wrote the following

pred = pred.byte()
labels = labels.byte()
true = pred.nonzero().squeeze()
false = (~pred).nonzero().squeeze()

tp = torch.zeros_like(pred)
tn = torch.zeros_like(pred)
fn = torch.zeros_like(pred)
fp = torch.zeros_like(pred)

tp[true[pred[true] == labels[true]]] = 1
fp[true[pred[true] != labels[true]]] = 1
tn[false[pred[false] == labels[false]]] = 1
fn[false[pred[false] != labels[false]]] = 1

is this the most concise way of doing that? oO
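A minimal sketch of a shorter equivalent, assuming pred and labels are 0/1 tensors of equal length: the four indicator tensors can be built directly with elementwise logical ops, no index juggling needed.

```python
import torch

pred = torch.tensor([1, 0, 0, 1])
labels = torch.tensor([1, 1, 0, 0])

pred_b, labels_b = pred.bool(), labels.bool()
tp = (pred_b & labels_b).long()    # predicted 1, actually 1
fp = (pred_b & ~labels_b).long()   # predicted 1, actually 0
tn = (~pred_b & ~labels_b).long()  # predicted 0, actually 0
fn = (~pred_b & labels_b).long()   # predicted 0, actually 1
print(tp, fp, tn, fn)
```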

I think if you would like to create a confusion matrix, this code would be easier:

nb_classes = 2
conf_mat = torch.zeros(nb_classes, nb_classes)
pred = torch.randint(0, 2, (10,))
label = torch.randint(0, 2, (10,))

for l, p in zip(label, pred):
    conf_mat[l, p] += 1
print(conf_mat)
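For the binary case the loop can also be vectorized; one possible alternative (a sketch, equivalent in result) maps each (label, pred) pair to a unique bin and counts with torch.bincount:

```python
import torch

nb_classes = 2
pred = torch.randint(0, 2, (10,))
label = torch.randint(0, 2, (10,))

# each (label, pred) pair maps to a unique bin index: label * nb_classes + pred
flat = label * nb_classes + pred
conf_mat = torch.bincount(flat, minlength=nb_classes ** 2).reshape(nb_classes, nb_classes)
print(conf_mat)
```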

Currently your script would just set the values for the metrics to 1, instead of counting them.
I’m not sure, if that’s what you want.


Well, I want to use these tensors to index into a rank-4 tensor of batches of images and color them according to whether they are an example of a true positive etc., i.e. I write this for an image tensor of shape 4xCxHxW

images[tp,:] = ...   # manipulating the color channels
images[tn,:] = ...
...

and I was convinced that it works by visual inspection. However, now I am not sure anymore. Looking at your example above about the nature of indexing, I suspect that indexing into images does not yield what I want, namely the specific image at the position at which the indexing tensor features ones. Or does it?

For index tensors containing only zeros and ones, that would mean that I would only ever access the first and second image in the batch. However, my test runs showed colors for the entire batch. But then, wouldn't this indexing behaviour contradict the way indexing works that you described before?

should be
"The values at label[0] and label[1] are both 1." :smile:

I think it’s possible.
However, as I’m not completely sure how you would like to index your image tensors, could you please post a sample image batch, the corresponding indices, and what the result should look like?
I guess you could use the .nonzero() method to get all necessary indices, but if you post a small example, I could write some dummy code. :wink:

idx = torch.tensor([0, 1, 0, 1, 0, 1, 0 ,1])
(idx == 1).nonzero()  # gives all indices of nonzero values
(idx == 0).nonzero()  # gives all indices of zero values

@DoubtWang Thanks for the catch! :wink:

I want to subdivide my batch of images into groups of true positives, true negatives, etc. according to some criterion (binary classification) which is not important for the purpose of this post. Let's say I have a tensor of predictions (the result of my forward pass) and a tensor of labels of equal size

pred = torch.tensor([1, 0, 0, 1])
labels = torch.tensor([1, 1, 0, 0])

So the first entry is an example of a true positive, etc. Then I get the indices of every such instance like so

# transform into byte tensors for logical operations
pred = pred.byte()
labels = labels.byte()
# find indices where pred == 1 and pred == 0
true = pred.nonzero().squeeze()
false = (~pred).nonzero().squeeze()

# initialize all four tensors with matching size; each holds 1 or 0
# depending on whether the sample is a true positive, etc.
# (mutually exclusive across all four tensors)
tp = torch.zeros_like(pred)
tn = torch.zeros_like(pred)
fn = torch.zeros_like(pred)
fp = torch.zeros_like(pred)
                
# set the entries to 1 at the respective index
tp[true[pred[true] == labels[true]]] = 1
fp[true[pred[true] != labels[true]]] = 1
tn[false[pred[false] == labels[false]]] = 1
fn[false[pred[false] != labels[false]]] = 1

So the result should read in agreement with the pred and label tensor

tp = torch.tensor([1, 0, 0, 0])
fp = torch.tensor([0, 0, 0, 1])
tn = torch.tensor([0, 0, 1, 0])
fn = torch.tensor([0, 1, 0, 0])

Now I use those four tensors to filter the image batch and color code the images either green, blue, yellow or red by setting the respective color channel to zero

# not touching the H or W dimension, only batch size dimension and channel dimension
images[tp,0] = 0
images[tp,2] = 0
images[tn,0] = 0
images[tn,1] = 0
images[fp,2] = 0
images[fn,1] = 0
images[fn,2] = 0

and this seems to work. But I am wondering why. Are those four tensors treated as logical tensors for the access?

According to the indexing rules, I suspected that only the first and second images would be accessed like that.

Then what is the difference to indexing the images with the actual index?

images[0,0] = 0
images[0,2] = 0
images[2,0] = 0
images[2,1] = 0
images[3,2] = 0
images[3,1] = 0
images[1,2] = 0

Thanks for the detailed code and explanations!
Now I understand the problem and use case a bit better.

Yes, you are right in that the tp, fp, tn, fn tensors are treated as logical indices.
This advanced indexing behavior is different from vanilla integer indexing and is explained in more detail in the numpy advanced indexing docs.
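To make the two behaviours concrete, here is a small sketch on a dummy image batch (shapes chosen arbitrarily). In older PyTorch versions a uint8 tensor was implicitly treated as such a logical mask, which is why the color coding above worked; with a default long tensor the same expression would gather by value instead.

```python
import torch

# dummy batch: 4 images, 3 channels, 2x2 pixels
images = torch.ones(4, 3, 2, 2)
tp = torch.tensor([1, 0, 0, 0])

# boolean mask: zeroes channel 0 of image 0 only (the True position)
images[tp.bool(), 0] = 0

# with the long tensor tp, images[tp, 0] would instead *gather* by value,
# selecting images 1, 0, 0, 0 -- i.e. it would only ever touch images 0 and 1
```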

In that case your other example would also work as you intended:

labels[pred.byte()]

Sorry for not getting it :wink:


No, thank you for answering me so patiently and informatively. Your regular help is much appreciated!
