Hello,
I ran into a RuntimeError that I could not solve. My code structure looks like this:
1 for data in data_loader:
2 # data has shape (batch, num, channel, height, width)
3 b, n, c, h, w = data.size()
4 data = data.view(b*n, c, h, w)
5 out = model(data) # out has shape (b*n, feat_dim)
I set batch_size to 3 and drop_last=False for data_loader. num is the number of frames, e.g. 10, so (num, channel, height, width) is a sequence of frames from the same video. Normally the initial data has shape (3, 10, 3, 224, 224), and line 4 reshapes it into (30, 3, 224, 224) so that it can be processed by a 2D CNN. However, the last batch contains only 2 samples, so data becomes (2, 10, 3, 224, 224). When line 4 reshapes this last batch, the system returns an error:
RuntimeError: invalid argument 2: size '[20 x 3 x 224 x 224]' is invalid for input with 2949120 elements at /pytorch/torch/lib/TH/THStorage.c:41
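In case it helps, here is a minimal self-contained sketch of the loop above, with random tensors and a plain TensorDataset/DataLoader standing in for my real dataset (the model call is omitted and only the reshape is shown):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in data: 8 videos, each with 10 RGB frames of 224x224
videos = torch.randn(8, 10, 3, 224, 224)
loader = DataLoader(TensorDataset(videos), batch_size=3, drop_last=False)

flat_shapes = []
for (data,) in loader:
    # data has shape (batch, num, channel, height, width)
    b, n, c, h, w = data.size()
    data = data.view(b * n, c, h, w)  # merge batch and frame dims for the 2D CNN
    flat_shapes.append(tuple(data.shape))

print(flat_shapes)
```

With 8 dummy videos and batch_size=3, the batches have 3, 3, and 2 samples, so the flattened shapes are (30, 3, 224, 224), (30, 3, 224, 224), and (20, 3, 224, 224).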
Any idea what is wrong and how to fix this?
Thanks!