Index out of range

Hello, I'm confused about why I am getting an index error, even though when I check the output of my dataset class before putting it into a DataLoader, the labels tensor is of size [N]. I know that I must account for the background, which should be index 0, so perhaps that is the issue. But that leaves me with more questions, because I have another model with the exact same code, just without a for loop to gather multiple labels, and it works. As soon as I add a for loop to the dataset class I get the error, even though both use the exact same dataset with the same outputs.

MODEL Documentation
During training, the model expects both the input tensors, as well as a targets (list of dictionary), containing:

  • boxes (FloatTensor[N, 4]): the ground-truth boxes in [x1, y1, x2, y2] format, with 0 <= x1 < x2 <= W and 0 <= y1 < y2 <= H.
  • labels (Int64Tensor[N]): the class label for each ground-truth box
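To make sure I am reading the documentation correctly, here is my understanding of the target format for a single image as a minimal sketch (one box, class 1 = "raccoon", with 0 reserved for background; this is my own example, not code from my project):

    import torch

    # One target dict per image, following the documentation above
    target = {
        "boxes": torch.tensor([[52., 7., 948., 999.]], dtype=torch.float32),  # [N, 4]
        "labels": torch.tensor([1], dtype=torch.int64),                       # [N]
    }
    # During training the model is then called with a list of images
    # and a list of such dicts:
    # loss_dict = model(images, [target])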

ERROR CODE
IndexError                                Traceback (most recent call last)
<ipython-input-31-fffd8dd57861> in <module>()
     12         targets = [{k: v.to(device) for k, v in t.items()} for t in targets] # sending targets to the GPU
     13         bs = BATCH_SIZE
---> 14         loss_dict = model(images, targets) # passing our model a single batch of images with repective targets
     15         totalLoss = sum(loss for loss in loss_dict.values()) # adds up all the losses from the models output
     16         lossValue = totalLoss.item() # Converts tensor loss to interger Loss

5 frames
/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/roi_heads.py in assign_targets_to_proposals(self, proposals, gt_boxes, gt_labels)
    586                 clamped_matched_idxs_in_image = matched_idxs_in_image.clamp(min=0)
    587 
--> 588                 labels_in_image = gt_labels_in_image[clamped_matched_idxs_in_image]
    589                 labels_in_image = labels_in_image.to(dtype=torch.int64)
    590 
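
My reading of the failing line, reproduced in isolation (this is just my own minimal example of the indexing, not torchvision code): gt_labels_in_image seems to hold one label per ground-truth box and is indexed with box indices, so if there are fewer labels than boxes the lookup goes out of range.

    import torch

    # Minimal reproduction of the indexing error as I understand it (my assumption)
    gt_labels_in_image = torch.tensor([1], dtype=torch.int64)    # shape [1] -> only index 0 is valid
    clamped_matched_idxs_in_image = torch.tensor([0, 1])         # index 1 does not exist
    labels_in_image = gt_labels_in_image[clamped_matched_idxs_in_image]  # raises IndexError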

GET LABEL in DATASET CLASS

    label_list = ["raccoon"]
    annotation_path = self.annotation_names[index]
    annotation_tree = ET.parse(annotation_path)          # ET is xml.etree.ElementTree
    label_name = annotation_tree.find("object").find("name").text  # only looks at the first <object>

    if label_name in label_list:
        label = label_list.index(label_name) + 1          # +1 so that 0 stays the background class
        label = torch.tensor([label], dtype=torch.int64)
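
The for-loop version I am referring to looks roughly like this (a simplified sketch of the idea, not my exact code):

    import torch
    import xml.etree.ElementTree as ET

    label_list = ["raccoon"]
    annotation_tree = ET.parse(annotation_path)

    labels = []
    for obj in annotation_tree.findall("object"):         # every <object> in the annotation
        name = obj.find("name").text
        if name in label_list:
            labels.append(label_list.index(name) + 1)     # +1 because 0 is background
    labels = torch.tensor(labels, dtype=torch.int64)      # shape [N]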

Dataset Class output at index 9

    {'area': tensor(1006000),
     'boxes': tensor([[ 52,   7, 948, 999]]),
     'image_id': tensor(9),
     'iscrowd': tensor([0]),
     'labels': tensor([1])}
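
For completeness, this is roughly the consistency check I can run over the whole dataset to confirm that boxes and labels line up (a sketch; `dataset` stands for my Dataset instance, and I am assuming __getitem__ returns an (image, target) pair):

    # Sketch: every sample should have one label per box, and labels should be >= 1
    for i in range(len(dataset)):
        target = dataset[i][1]
        assert target["boxes"].shape[0] == target["labels"].shape[0], f"box/label mismatch at index {i}"
        assert (target["labels"] >= 1).all(), f"unexpected background label at index {i}"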