I’m trying to use torchvision.datasets.Kinetics, but I’m getting an AssertionError.
This is the initialization of the dataset:

```python
test_dataset = torchvision.datasets.Kinetics(
    root=root_path,
    frames_per_clip=33,
    step_between_clips=5,
    num_workers=40,
    split='test',
    transform=transforms.RandomResizedCrop(size=224),
)
```
This is the error I get:
```
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/bairouk/anaconda3/envs/deep/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/bairouk/anaconda3/envs/deep/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/bairouk/anaconda3/envs/deep/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/bairouk/anaconda3/envs/deep/lib/python3.10/site-packages/torchvision/datasets/kinetics.py", line 234, in __getitem__
    video, audio, info, video_idx = self.video_clips.get_clip(idx)
  File "/home/bairouk/anaconda3/envs/deep/lib/python3.10/site-packages/torchvision/datasets/video_utils.py", line 373, in get_clip
    assert len(video) == self.num_frames, f"{video.shape} x {self.num_frames}"
AssertionError: torch.Size([34, 720, 1280, 3]) x 33
```
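If I read the assertion right, the clip decoded for this index has 34 frames (`torch.Size([34, 720, 1280, 3])`) while `frames_per_clip=33`, so the length check in `VideoClips.get_clip` fails by a single extra frame. The workaround I was considering is to trim each decoded clip down to the requested length myself. Here is a plain-Python sketch of that idea (`trim_clip` is my own hypothetical helper, not a torchvision API, and a list stands in for the `[T, H, W, C]` video tensor; slicing a `torch.Tensor` along the time dimension works the same way):

```python
def trim_clip(frames, frames_per_clip):
    """Drop any extra decoded frames so the clip has exactly frames_per_clip frames.

    `frames` stands in for the [T, H, W, C] video tensor; frames[:n] slices
    along the time dimension for both lists and torch tensors.
    """
    if len(frames) < frames_per_clip:
        raise ValueError(
            f"clip has only {len(frames)} frames, need {frames_per_clip}"
        )
    return frames[:frames_per_clip]

# The failing case from the traceback: 34 decoded frames, 33 requested.
clip = list(range(34))
trimmed = trim_clip(clip, 33)
print(len(trimmed))  # 33
```

Is there a supported way to handle this inside the dataset itself, or is this off-by-one a known issue?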
Thank you.