I am doing action recognition with MediaPipe keypoints.
These are the shapes of some of my tensors:
torch.Size([3, 3, 75])
torch.Size([3, 6, 75])
torch.Size([3, 10, 75])
torch.Size([3, 11, 75])
torch.Size([3, 9, 75])
torch.Size([3, 4, 75])
torch.Size([3, 21, 75])
The second dimension of each tensor varies because it is the number of frames in each sample. If I understand correctly, the in_features of the layer below should be 3*75:

self.fc_pre = nn.Sequential(nn.Linear(3*75, fc_size), nn.Dropout(p=0.2))

However, I get a size mismatch error:
RuntimeError: size mismatch, m1: [1 x 1350], m2: [225 x 512] at C:/w/1/s/tmp_conda_3.6_095855/conda/conda-bld/pytorch_1579082406639/work/aten/src\THC/generic/THCTensorMathBlas.cu:290
I am also using a batch size of 1 for the time being.
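For reference, here is a minimal sketch of what I think is happening (fc_size = 512 taken from the m2 shape in the error; the per-frame reshape at the end is just my guess at a fix, not something from my actual model):

```python
import torch
import torch.nn as nn

# One sample: 3 channels, 6 frames, 75 keypoint values
x = torch.randn(3, 6, 75)

fc_pre = nn.Linear(3 * 75, 512)  # in_features = 225, matching m2: [225 x 512]

# Flattening the whole sample folds the frame dimension into the features:
flat = x.reshape(1, -1)  # shape [1, 1350], matching m1 in the error
# fc_pre(flat)           # raises the same size-mismatch error

# Treating each frame as one row of 3*75 features does go through:
per_frame = x.permute(1, 0, 2).reshape(6, 3 * 75)  # shape [6, 225]
out = fc_pre(per_frame)                            # shape [6, 512]
print(out.shape)
```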
Any advice for me? Thank you in advance.