Need help finding good resources for loading video data in PyTorch

Hi there,
Recently, I wanted to extend Faster R-CNN to a large video dataset I found called ILSVRC2015.

When I started, I got it working on individual images. I wrote the dataset-loading functionality almost exactly as in the PyTorch object detection tutorial, but I am not sure how to translate or alter that class for a video dataset. I am not looking for anything too specific: if someone can tell me what a video dataset class usually needs to include that an image dataset class does not, I would appreciate it. I had trouble finding good resources that answer this for PyTorch, so any documentation or other resources would also help.
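
To make my question more concrete, my rough guess is that a video dataset would index clips of frames rather than single images, with the clip list built up front in `__init__`. Here is a sketch of just that indexing step (the `"<video_id>/<frame_number>"` key format and `clip_len` are purely my own assumptions, and this may well be the wrong approach):

```python
from collections import defaultdict

def build_clip_index(frame_keys, clip_len=8):
    """Group frame keys of the (assumed) form "<video_id>/<frame_number>"
    into fixed-length, non-overlapping clips, one list of keys per clip."""
    by_video = defaultdict(list)
    for key in sorted(frame_keys):
        video_id, _ = key.rsplit("/", 1)
        by_video[video_id].append(key)

    clips = []
    for video_id, frames in by_video.items():
        # Step through each video in strides of clip_len, dropping any
        # leftover frames that don't fill a whole clip.
        for start in range(0, len(frames) - clip_len + 1, clip_len):
            clips.append(frames[start:start + clip_len])
    return clips

# 16 zero-padded frames of one video -> two clips of 8 frames each
clips = build_clip_index([f"vid_a/{i:06d}" for i in range(16)], clip_len=8)
print(len(clips), len(clips[0]))  # 2 8
```

Then `__getitem__(idx)` would presumably load all frames of `clips[idx]` and stack them, instead of loading one image. Is that the right general shape?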

I have included my code for loading individual image data below, in case you are willing to give specific advice on what changes when switching to video data. If you want the entire GitHub file for context, please respond and I will share it.

```python
import glob

import cv2
import numpy as np
import torch

class FruitDetectDataset(object):
  def __init__(self, id_labels, id_bounding_boxes, transforms, mode):

    assert len(id_labels) == len(id_bounding_boxes)
    assert sorted(id_labels.keys()) == sorted(id_bounding_boxes.keys())
    self.imgs_key = sorted(id_labels.keys())

    # Split 80/20: "train" takes the first 80% of keys, anything else the last 20%
    if mode == "train":
      self.imgs_key = self.imgs_key[:int(len(self.imgs_key) * 0.8)]
    else:
      self.imgs_key = self.imgs_key[int(len(self.imgs_key) * 0.8):]

    self.id_labels = id_labels
    self.id_bounding_boxes = id_bounding_boxes
    self.full_image_file_paths = glob.glob("/content/Fruit Defects Dataset /Train/*/*/*.jpeg")

    self.transforms = transforms

  def __getitem__(self, idx):

    # ffile_path, convert_min_max, and find_area_bb are helpers defined elsewhere in my code
    img_path = ffile_path(self.imgs_key[idx], self.full_image_file_paths)
    img = cv2.cvtColor(cv2.imread(img_path, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0

    boxes = convert_min_max(torch.as_tensor(self.id_bounding_boxes[self.imgs_key[idx]], dtype=torch.float32))
    labels = torch.as_tensor(self.id_labels[self.imgs_key[idx]], dtype=torch.int64)
    image_id = torch.tensor([idx])
    area = find_area_bb(boxes)

    target = {}
    target["boxes"] = boxes
    target["labels"] = labels
    target["image_id"] = image_id
    target["area"] = area
    #Query about transforms for labels of images
    if self.transforms:
      sample = {
                'image': img,
                'bboxes': target['boxes'],
                'labels': labels
      }
      sample = self.transforms(**sample)
      img = sample['image']
      target['boxes'] = torch.stack(tuple(map(torch.tensor, zip(*sample['bboxes'])))).permute(1, 0)
    return img, target

  def __len__(self):
    return len(self.imgs_key)
```
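
For context, this is roughly how I feed the dataset to a `DataLoader`. Since torchvision detection models take a list of images and a list of per-image target dicts, I use a `collate_fn` that just regroups the batch instead of stacking it (the dataset/loader names here are illustrative):

```python
# Detection models can't stack variable-size targets into one tensor, so the
# collate_fn keeps samples separate: it turns a list of (img, target) pairs
# into a tuple of images and a tuple of targets.
def detection_collate(batch):
    return tuple(zip(*batch))

# Hypothetical usage with the dataset class above:
# loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=True,
#                                      collate_fn=detection_collate)

# Quick check with dummy samples:
batch = [("img0", {"labels": [1]}), ("img1", {"labels": [2, 3]})]
imgs, targets = detection_collate(batch)
print(imgs)     # ('img0', 'img1')
print(targets)  # ({'labels': [1]}, {'labels': [2, 3]})
```

I assume something similar is still needed for video batches, just with clips in place of single images.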

Sarthak Jain