DataLoader [] operator starts indexing at 0

Hi,
I built a dataset, MyDataset, which inherits from torch.utils.data.Dataset. I would like to use transfer learning for object detection in videos. But MyDataset[0] is not defined, because I want to start in the middle of my video, for example at the 3300th frame. When I launch a DataLoader, the next(iterator) call invokes MyDataset[0] and I get a KeyError. How can I set up the DataLoader so that its first index is my frame 3300?
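A minimal sketch of the situation, in case it helps (the dict-backed frame storage here is just an illustration of my setup):

    import torch
    from torch.utils.data import Dataset, DataLoader

    class MyDataset(Dataset):
        def __init__(self):
            # frames are keyed by their number in the video, starting at 3300,
            # so index 0 does not exist
            self.frames = {i: torch.zeros(3, 224, 224) for i in range(3300, 3400)}

        def __len__(self):
            return len(self.frames)

        def __getitem__(self, idx):
            return self.frames[idx]

    loader = DataLoader(MyDataset(), batch_size=2)
    next(iter(loader))  # the default sampler yields 0, 1, 2, ... -> KeyError: 0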
Thanks for your help!

I think you’d need to split your dataset before you give it to the DataLoader… so in pseudocode:

  1. Have a dataset of the whole video

  2. Split or slice the dataset at frame 3300 (see the sketch after this list)

  3. Load into the DataLoader as normal and train
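
For step 2, a minimal sketch using torch.utils.data.Subset; the WholeVideoDataset class and frame count are placeholders for your own dataset, assuming it indexes every frame from 0 to len - 1:

    import torch
    from torch.utils.data import Dataset, DataLoader, Subset

    class WholeVideoDataset(Dataset):
        """Stand-in for a dataset that indexes every frame of the video."""
        def __init__(self, num_frames):
            self.num_frames = num_frames

        def __len__(self):
            return self.num_frames

        def __getitem__(self, idx):
            # decode/load frame `idx` here; a dummy tensor stands in for it
            return torch.zeros(3, 224, 224)

    full_dataset = WholeVideoDataset(num_frames=5000)

    # keep only frames 3300 onward; Subset re-indexes, so the DataLoader's
    # index 0 maps to frame 3300 of the underlying video
    dataset = Subset(full_dataset, range(3300, len(full_dataset)))

    loader = DataLoader(dataset, batch_size=2, shuffle=True)
    batch = next(iter(loader))  # no index below 3300 is ever requested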

You can actually see this in their example, where they split the data into a train and a test dataset of the images… the test dataset is the last 50 images:

    # PennFudanDataset, get_transform, and utils are helper code from the tutorial linked below
    import torch

    # use our dataset and defined transformations
    dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
    dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))

    # split the dataset in train and test set
    indices = torch.randperm(len(dataset)).tolist()
    dataset = torch.utils.data.Subset(dataset, indices[:-50])
    dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])

    # define training and validation data loaders
    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=2, shuffle=True, num_workers=4,
        collate_fn=utils.collate_fn)

    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=1, shuffle=False, num_workers=4,
        collate_fn=utils.collate_fn)

https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

Thanks for your answer!
I would rather not use the whole video, because not all the images are annotated, and also because the preprocessing would be very slow.
Do you have another solution?
Thank you very much

What format is the video to be fed in? Frame by frame, i.e. as images? You can write your Dataset class to handle annotated and unannotated images and “ignore” the unannotated frames… I would keep annotated and unannotated images in totally separate Dataset instances myself. You can see in the example I pasted that they create two different loaders and datasets from the same core data (plus some transforms on the training set). You could just as easily slice off all non-annotated frames using a collate function or a transform… or put some logic in the Dataset class itself to “drop” data you cannot use for training; a sketch of that last approach is below.
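A minimal sketch of the Dataset-level filtering, assuming the annotations come as a dict mapping frame number to target; AnnotatedFramesDataset and load_frame are hypothetical names:

    import torch
    from torch.utils.data import Dataset

    class AnnotatedFramesDataset(Dataset):
        def __init__(self, annotations, load_frame):
            # keep only frame numbers that actually carry annotations,
            # so unannotated frames never reach the DataLoader
            self.frame_numbers = sorted(annotations)
            self.annotations = annotations
            self.load_frame = load_frame

        def __len__(self):
            return len(self.frame_numbers)

        def __getitem__(self, idx):
            frame_number = self.frame_numbers[idx]
            return self.load_frame(frame_number), self.annotations[frame_number]

    # e.g. only three frames are annotated; load_frame is a dummy loader here
    annotations = {3300: {'boxes': torch.tensor([[0., 0., 10., 10.]])},
                   3301: {'boxes': torch.tensor([[5., 5., 20., 20.]])},
                   3302: {'boxes': torch.tensor([[1., 2., 8., 9.]])}}
    dataset = AnnotatedFramesDataset(annotations,
                                     load_frame=lambda n: torch.zeros(3, 224, 224))
    print(len(dataset))  # 3: unannotated frames are simply never indexed

This way the loader only ever sees usable samples, and no collate-time filtering is needed.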

Thank you very much, I will try!