Cannot load the pretrained models

I did not change the layers of the model, but when I load the state_dict it shows: `size mismatch for block1.rep.0.pointwise.weight: copying a param with shape torch.Size([128, 64]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1])`…
But are [128, 64] and [128, 64, 1, 1] really different sizes?
Thank you very much

It is a bit different. Both shapes hold the same number of elements, but the last two dimensions of the second one are singleton ("empty") dimensions. You can reshape the first tensor to match the second. Perhaps the view function could help you here.
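A minimal sketch of that idea: reshape the 2-D weight from the checkpoint to the 4-D shape a 1x1 `Conv2d` expects before calling `load_state_dict`. The layer name `pointwise` is taken from the error message; the tiny model and the fake checkpoint dict here are stand-ins, not your actual code.

```python
import torch
import torch.nn as nn

# Stand-in model: a single 1x1 pointwise conv whose weight has
# shape [128, 64, 1, 1], matching the error message.
model = nn.Sequential()
model.add_module("pointwise", nn.Conv2d(64, 128, kernel_size=1, bias=False))

# Pretend this came from torch.load(...) on the old checkpoint,
# where the weight was saved as a 2-D tensor of shape [128, 64].
state_dict = {"pointwise.weight": torch.randn(128, 64)}

# Add the two singleton dimensions so the shapes line up.
for key, value in state_dict.items():
    if value.dim() == 2 and key.endswith("pointwise.weight"):
        state_dict[key] = value.view(*value.shape, 1, 1)

model.load_state_dict(state_dict)
print(model.pointwise.weight.shape)  # torch.Size([128, 64, 1, 1])
```

`view` works here because the total element count (128 × 64) is unchanged; `reshape` or `unsqueeze(-1).unsqueeze(-1)` would do the same job.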

If it is still tricky, post some code or a GitHub link and we can take it from there.