RuntimeError: size mismatch, m1: [1 x 1350], m2: [225 x 512] at C:/w/1/s/tmp_conda_3.6_095855/conda/conda-bld/pytorch_1579082406639/work/aten/src\THC/generic/THCTensorMathBlas.cu:290

I am doing action recognition with MediaPipe keypoints.
These are the shapes of some of my tensors:

torch.Size([3, 3, 75])
torch.Size([3, 6, 75])
torch.Size([3, 10, 75])
torch.Size([3, 11, 75])
torch.Size([3, 9, 75])
torch.Size([3, 4, 75])
torch.Size([3, 21, 75])

The height of each tensor varies, as it corresponds to the number of frames in each sample. If I understand correctly, the in_features for the layer self.fc_pre = nn.Sequential(nn.Linear(3*75, fc_size), nn.Dropout(p=0.2)) should be 3*75 = 225; however, I get a size mismatch error.
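For reference, a minimal sketch that reproduces the mismatch (fc_size = 512 is assumed from the m2: [225 x 512] shape in the error; the sample tensor is one of the shapes listed above, flattened before the layer):

```python
import torch
import torch.nn as nn

fc_size = 512                        # assumed from m2: [225 x 512]
fc_pre = nn.Linear(3 * 75, fc_size)  # in_features = 225

x = torch.randn(3, 6, 75)   # one sample: 3 channels, 6 frames, 75 keypoint values
flat = x.view(1, -1)        # flattening everything gives [1, 1350] = [1, 3*6*75]
# fc_pre(flat) would raise the size mismatch: 1350 features vs. in_features=225
```

So the 1350 in the error is 3 × 6 × 75: the frame dimension is being flattened into the feature dimension, which is why it no longer matches 3*75.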


I am also using a batch size of 1 for now.

Any advice for me? Thank you in advance.

It seems self.fc_pre receives a tensor of shape [1 x 1350] as input.
Since 1350 is a multiple of 225, try reshaping the input tensor by

input = input.view(1, -1, 225)
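A minimal sketch of that reshape (fc_size = 512 is an assumed value): nn.Linear operates on the last dimension, so after the view each 225-feature frame vector passes through the layer independently.

```python
import torch
import torch.nn as nn

fc_size = 512                 # assumed value
fc_pre = nn.Linear(225, fc_size)

x = torch.randn(1, 1350)      # the [1 x 1350] tensor from the error message
x = x.view(1, -1, 225)        # -> [1, 6, 225]: six frame vectors of 225 features
out = fc_pre(x)               # Linear acts on the last dim -> [1, 6, 512]
```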

Hello, I tried making my in_features 3*8*75 and it worked. I first fixed the height of all my tensors at 8 frames using this code:


import torch.nn.functional as F

# F.pad pads the last dims first, so pad=(0, 0, 0, 8 - height) adjusts dim -2
# (the frame dimension). A negative pad value crops, so this single call pads
# short clips and truncates long ones to exactly 8 frames.
source_pad = F.pad(tensor1, pad=(0, 0, 0, 8 - height))
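Putting the pieces together, a sketch of the fixed-height pipeline (fc_size = 512 and the helper name to_fixed_height are assumptions for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

fc_size = 512                            # assumed value
fc_pre = nn.Linear(3 * 8 * 75, fc_size)  # in_features = 1800

def to_fixed_height(tensor1, target=8):
    # hypothetical helper: pad (or crop, via negative padding) dim 1 to `target`
    height = tensor1.shape[1]
    return F.pad(tensor1, pad=(0, 0, 0, target - height))

x = to_fixed_height(torch.randn(3, 6, 75))  # -> [3, 8, 75]
out = fc_pre(x.reshape(1, -1))              # flatten to [1, 1800] -> [1, 512]
```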