How to compute the IoU of a Tensor and a List

I have the following custom loss function for an SSD model. It's not fully implemented yet:

import torch


class SSDLoss(torch.nn.Module):
    def __init__(self, device=torch.device("cpu"), background_ratio=6):
        super(SSDLoss, self).__init__()
        self.device = device
        self.background_ratio = background_ratio

    def forward(self, predictions_list, targets, anchor_boxes, num_classes=2):
        # convert bboxes to ssd outputs
        # compare

        true_targets = targets_to_bboxes(anchor_boxes, targets, device=self.device)
        
        print('true_targets: ', true_targets)
        print('predictions_list: ', predictions_list)
        print('true_targets size: ', len(true_targets))
        print('predictions_list size: ', len(predictions_list))
        
        loss = 0

        return loss, true_targets

true_targets (a torch tensor) holds my ground-truth bounding boxes, and predictions_list (a Python list) holds my predicted bounding boxes. Below are the outputs of the four print statements.

This is true_targets and its shape:

true_targets:  tensor([[[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [2., 2., 2.,  ..., 2., 2., 2.]],

        [[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [2., 2., 2.,  ..., 2., 2., 2.]],

        [[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [2., 2., 2.,  ..., 2., 2., 2.]],

        [[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [2., 2., 2.,  ..., 2., 2., 2.]]])

torch.Size([4, 5, 9040])

This is predictions_list and its length:

predictions_list:  [tensor([[[[ 9.9442e-02,  3.3226e-01,  1.2670e-01,  ...,  9.9745e-02,
            2.7174e-01,  1.0590e-01],
          [ 2.3663e-01,  3.1417e-01,  1.3010e-01,  ...,  1.9857e-02,
           -6.4091e-02,  3.8148e-01],
          [ 3.0299e-01,  3.2582e-01,  2.4724e-01,  ...,  2.2392e-01,
            6.4675e-02,  7.2070e-03],
          ...,
          [ 3.5731e-03,  2.8286e-01,  7.7916e-01,  ..., -9.8466e-02,
            2.4779e-01, -2.9651e-01],
          [ 2.7502e-01,  1.5084e-01,  3.5121e-01,  ...,  2.8136e-01,
           -4.1414e-03,  2.6915e-01],
          [ 1.5553e-01,  1.6558e-02,  2.2879e-01,  ...,  4.3227e-01,
            6.3580e-02,  4.4533e-02]],

         [[ 1.7918e-01, -1.2231e-01,  1.1551e-02,  ..., -4.5121e-02,
           -1.4862e-01, -2.5158e-01],
          [-2.8866e-02,  1.9356e-01,  2.1619e-01,  ..., -2.0484e-01,
           -4.7227e-01, -2.5157e-01],
          [ 2.7363e-02, -3.8109e-02,  1.8184e-02,  ..., -2.9303e-01,
           -5.7727e-01, -3.6232e-01],
          
          ...,
          [ 6.8932e-01,  5.6035e-01,  4.3400e-02,  ..., -1.2163e-01,
           -8.7152e-02,  2.3896e-01],
          [ 3.3151e-01,  3.1092e-01,  3.2296e-01,  ...,  5.2497e-02,
           -2.1015e-01, -2.9203e-01],
          [ 8.8831e-02,  1.5703e-01,  2.1559e-01,  ..., -5.8313e-02,
           -8.0042e-02, -1.4649e-01]]]], grad_fn=<ConvolutionBackward0>)]

predictions_list size:  2

I tried to convert predictions_list to a torch tensor with torch.as_tensor(predictions_list), but it gave me this error: only one element tensors can be converted to Python scalars.

I want to calculate the IoU between the ground truths and the predictions. How can I go about it?

Hi,
Can you please see if the following thread helps?


Hi Srishti,

Your answers talk about converting a list of tensors to a NumPy array. Since I am trying to compute an IoU, doesn't it make more sense for me to convert my list to a torch tensor?

I was thinking along somewhat different lines, but yes, the best option is to convert the list predictions_list into a tensor. For this, I find torch.cat a nice way to do it.

Like this -

# creating a list of tensors
import torch
list_of_tensors = []
x = torch.tensor([1.0, 2, 3])
list_of_tensors.append(x)

x = torch.tensor([4.0, 5, 6])
list_of_tensors.append(x)

list_of_tensors

out -

[tensor([1., 2., 3.]), tensor([4., 5., 6.])]

Converting to tensor-

list_of_tensors = [t.unsqueeze(0) for t in list_of_tensors] # unsqueeze to add an extra dim 
t = torch.cat(list_of_tensors, dim=0)
print(t)

out -

tensor([[1., 2., 3.],
        [4., 5., 6.]])

Please let me know if this doesn’t help you.
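As a side note, when all the tensors in the list already share the same shape, torch.stack gives the same result without the manual unsqueeze, since it creates the new dimension itself:

# equivalent for same-shape tensors, no unsqueeze needed
t = torch.stack([torch.tensor([1.0, 2, 3]), torch.tensor([4.0, 5, 6])], dim=0)
print(t)   # tensor([[1., 2., 3.], [4., 5., 6.]])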

I don’t quite understand the necessity of this:

# creating a list of tensors
import torch
list_of_tensors = []
x = torch.tensor([1.0, 2, 3])
list_of_tensors.append(x)

x = torch.tensor([4.0, 5, 6])
list_of_tensors.append(x)

list_of_tensors

That’s just a list of tensors I created to demonstrate the usage.

For your use case, it would hence be -

predictions_list = [t.unsqueeze(0) for t in predictions_list]
t = torch.cat(predictions_list, dim=0)

Does this produce any error?

This is what I got:
Sizes of tensors must match except in dimension 0. Expected size 45 but got size 23 for tensor number 1 in the list.

That probably means the two tensors in predictions_list aren't of the same size. Is that it?

If so, it requires a slightly more complicated workaround than if they were.
See-

true_targets.shape: torch.Size([4, 5, 9040])
len(predictions_list): 2

What about the sizes of the two tensors in predictions_list?

print(len(predictions_list[0])): 4

Both tensors have a size of 4.

Not the length exactly; the sizes (shapes) need to match.
Please share the output of -

print(predictions_list[0].shape)
print(predictions_list[1].shape)
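If the shapes turn out to differ only in their spatial dimensions (which is common when the prediction maps come from different SSD feature levels), one possible workaround is to flatten each map per sample before concatenating along the anchor dimension. This is just a sketch; the channel layout (anchors_per_cell * values_per_anchor) is an assumption about your model, not taken from your code:

import torch

# Hypothetical prediction maps from two feature levels with different spatial
# sizes; layout assumed to be (batch, anchors_per_cell * values_per_anchor, H, W).
pred_a = torch.randn(4, 24, 45, 45)
pred_b = torch.randn(4, 24, 23, 23)

def flatten_level(p, values_per_anchor=6):
    # (B, A*V, H, W) -> (B, H*W*A, V): move channels last, then merge the
    # spatial and anchor dimensions so differently sized maps can be joined.
    batch = p.shape[0]
    return p.permute(0, 2, 3, 1).reshape(batch, -1, values_per_anchor)

merged = torch.cat([flatten_level(p) for p in (pred_a, pred_b)], dim=1)
print(merged.shape)   # torch.Size([4, 10216, 6])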

Hi Ms. Srishti,

Now I can solve it!
Thanks for your supportive suggestion.
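For anyone reading this later: once both the targets and the predictions have been decoded into plain box tensors, the pairwise IoU can be computed with torchvision.ops.box_iou. A minimal sketch, assuming the boxes are in (x1, y1, x2, y2) format (the shapes and values below are made up for illustration):

import torch
from torchvision.ops import box_iou

# Hypothetical decoded boxes in (x1, y1, x2, y2) coordinates.
gt_boxes = torch.tensor([[ 10.0,  10.0,  50.0,  60.0],
                         [100.0, 120.0, 180.0, 200.0]])
pred_boxes = torch.tensor([[ 12.0,  15.0,  48.0,  55.0],
                           [ 90.0, 110.0, 170.0, 210.0],
                           [300.0, 300.0, 340.0, 350.0]])

iou = box_iou(gt_boxes, pred_boxes)   # (num_gt, num_pred) matrix of IoU values
print(iou.shape)   # torch.Size([2, 3])

From there, anchors can be matched to ground-truth boxes (for example by taking the highest IoU per anchor) before computing the localisation and classification losses.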