'list' object has no attribute 'tensors'

Hi, I'm trying to get region proposals from the Faster R-CNN RPN, but I'm getting the error in the title. I don't understand what the images passed to it should look like. Right now it is a batch of images of shape [B, C, H, W]: the images come from a DataLoader and are then processed as shown below, with 12 images per label in a batch. Does anyone know what I am doing wrong here?

```
def get_proposals(model, features, images):
    batch_of_images = torch.stack(images, dim=0)

    proposals, proposal_losses = model.rpn(images, features)
    return proposals
```
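(For context, features comes from the detector's backbone. My get_the_features helper is essentially the sketch below, assuming a torchvision Faster R-CNN whose backbone returns an OrderedDict of feature maps.)

```
def get_the_features(model, images):
    # Sketch of my helper: run only the backbone. For torchvision detection
    # models this returns an OrderedDict[str, Tensor] of feature maps,
    # which is the format the RPN expects.
    batch_of_images = torch.stack(images, dim=0)
    return model.backbone(batch_of_images)
```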
The full traceback:

```
Cell In[15], line 24
     20 images = batch['visual_embeddings']
     22 features = get_the_features(detector, images)
---> 24 proposals = get_proposals(detector, features, images)
     26 break
     27 plt.imshow(images[0][0].permute(1, 2, 0))

Cell In[15], line 13, in get_proposals(model, features, images)
      9 batch_of_images = torch.stack(images, dim=0)
     11 #features = {key: value.to_sparse() for key, value in features.items()}
---> 13 proposals, proposal_losses = model.rpn(images, features)
     15 return proposals

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.local/lib/python3.9/site-packages/torchvision/models/detection/rpn.py:361, in RegionProposalNetwork.forward(self, images, features, targets)
    359 features = list(features.values())
    360 objectness, pred_bbox_deltas = self.head(features)
--> 361 anchors = self.anchor_generator(images, features)
    363 num_images = len(anchors)
    364 num_anchors_per_level_shape_tensors = [o[0].shape for o in objectness]

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
   1516     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1517 else:
-> 1518     return self._call_impl(*args, **kwargs)

File ~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
   1522 # If we don't have any hooks, we want to skip the rest of the logic in
   1523 # this function, and just call forward.
   1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1525         or _global_backward_pre_hooks or _global_backward_hooks
   1526         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527     return forward_call(*args, **kwargs)
   1529 try:
   1530     result = None

File ~/.local/lib/python3.9/site-packages/torchvision/models/detection/anchor_utils.py:117, in AnchorGenerator.forward(self, image_list, feature_maps)
    115 def forward(self, image_list: ImageList, feature_maps: List[Tensor]) -> List[Tensor]:
    116     grid_sizes = [feature_map.shape[-2:] for feature_map in feature_maps]
--> 117     image_size = image_list.tensors.shape[-2:]
    118     dtype, device = feature_maps[0].dtype, feature_maps[0].device
    119     strides = [
    120         [
    121             torch.empty((), dtype=torch.int64, device=device).fill_(image_size[0] // g[0]),
   (...)
    124         for g in grid_sizes
    125     ]

AttributeError: 'list' object has no attribute 'tensors'
```

Any help would be greatly appreciated!

I actually managed to solve this issue by creating an ImageList object:

```
image_list = ImageList(batch_of_images, [480, 640])
```

However, I now run into a new problem:

```
RuntimeError: The expanded size of the tensor (460440) must match the existing size (76740) at non-singleton dimension 1. Target sizes: [2, 460440]. Tensor sizes: [1, 76740]
```
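Update, in case it helps anyone: I believe this second error comes from the image_sizes argument. ImageList expects one (height, width) tuple per image in the batch, so [480, 640] is read as two separate image sizes rather than a single 480x640 size, and the anchor generator then builds anchors as if there were only 2 images instead of 12. If I did the math right, the numbers fit: a 480x640 input with the default FPN anchor generator gives 76740 anchors per image, and reshaping the objectness for a batch of 12 against num_images = 2 gives 12 x 76740 / 2 = 460440. Here is a corrected sketch of the whole function, assuming every image in the batch is 480x640:

```
import torch
from torchvision.models.detection.image_list import ImageList

def get_proposals(model, features, images):
    # Stack the list of [C, H, W] tensors into one [B, C, H, W] batch.
    batch_of_images = torch.stack(images, dim=0)

    # ImageList wants one (height, width) tuple per image,
    # not a single flat [H, W] list.
    image_sizes = [(480, 640)] * batch_of_images.shape[0]  # assumes fixed 480x640 inputs
    image_list = ImageList(batch_of_images, image_sizes)

    # Pass the ImageList (not the raw list of tensors) to the RPN.
    proposals, proposal_losses = model.rpn(image_list, features)
    return proposals
```

With one (height, width) entry per image, the anchor generator should produce one set of anchors per image in the batch, so the reshape and expansion inside the RPN should line up.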