Sizes of tensors must match except in dimension 1

Dear everyone: when I train the model, it reports `Expected size 7 but got size 5 for tensor number 1 in the list`. I don't know which object is expected to have size 7.
Could you please help me to fix this issue?
Thanks, best wishes

Traceback (most recent call last):
File "train_meld.py", line 238, in
test_loss, test_acc, test_label, test_pred, test_mask, test_fscore, attentions = train_or_eval_model(model, loss_function, test_loader, e)
File "train_meld.py", line 83, in train_or_eval_model
log_prob, _, alpha, alpha_f, alpha_b, _ = model(r1, r2, r3, r4, audio_feature, x5, x6, x1, o2,
File "F:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "F:\Projects\declare\Cosmic_mfcc_rnn\CommonsenseGRUModel.py", line 94, in forward
mixed_feature = torch.cat((r, audio_feature), dim=-1) # [seq_len, batch_size, dim_mixed_feature]
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 7 but got size 5 for tensor number 1 in the list.

The shape mismatch is raised in:

mixed_feature = torch.cat((r, audio_feature), dim=-1) 

which tries to concatenate two 3-dimensional tensors along dim 2.
For this to work, all dimensions except dim 2 must have the same size, which is not the case here, as seen in this small example:

r = torch.randn(2, 7, 7)
audio_feature = torch.randn(2, 5, 1)
mixed_feature = torch.cat((r, audio_feature), dim=-1) 
> RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 7 but got size 5 for tensor number 1 in the list.

As you can see, the size in dim 1 differs between these tensors: it is 7 in `r` and 5 in `audio_feature`.
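To illustrate the point, here is a minimal sketch (the shapes are hypothetical, taken from the error above) showing that the same `torch.cat` call succeeds once the non-concatenation dimensions agree:

```python
import torch

# torch.cat along dim=-1 requires every other dimension to match.
r = torch.randn(2, 7, 7)
audio_feature = torch.randn(2, 7, 1)  # dim 1 changed from 5 to 7 to match r
mixed_feature = torch.cat((r, audio_feature), dim=-1)
print(mixed_feature.shape)  # torch.Size([2, 7, 8])
```

Only the last dimension is summed (7 + 1 = 8); the others are carried through unchanged.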

Dear @ptrblck: thank you for your suggestions. To be honest, I followed your other answer about a similar issue: RuntimeError: Sizes of tensors must match except in dimension 1.

I understand the issue. However, none of the samples in `r` has a size of 7 in that dimension. Since no sample has a size of 7 there, why does the error message report an expected size of 7?

I'm sorry, I'm still trying to debug the problem. Can you give me some advice on where to place breakpoints to identify the problem effectively? Thank you for answering this stupid-sounding question.

Thanks. best wishes

I would suggest adding a debug print statement right before the failing torch.cat operation:

print(r.shape)
print(audio_feature.shape)
mixed_feature = torch.cat((r, audio_feature), dim=-1)

and check whether the shapes change in a specific iteration.
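Building on that advice, a minimal sketch of a guard you could drop in before the failing call (the helper name `check_before_cat` is hypothetical, not part of the original code). Note that the "Expected size 7" in the message comes from tensor number 0 in the list (`r`); tensor number 1 (`audio_feature`) is the one with size 5. The assertion fails on the exact iteration where the shapes diverge, which is often easier than stepping through with a debugger:

```python
import torch

def check_before_cat(r, audio_feature):
    # Log both shapes every iteration, then verify that all
    # non-concatenation dimensions match before calling torch.cat.
    print("r:", tuple(r.shape), "audio_feature:", tuple(audio_feature.shape))
    assert r.shape[:-1] == audio_feature.shape[:-1], (
        f"non-concat dims differ: {tuple(r.shape)} vs {tuple(audio_feature.shape)}"
    )
    return torch.cat((r, audio_feature), dim=-1)

# Matching leading dims: succeeds.
ok = check_before_cat(torch.randn(2, 7, 7), torch.randn(2, 7, 1))
print(ok.shape)  # torch.Size([2, 7, 8])
```

With the mismatched shapes from the traceback (e.g. `(2, 7, 7)` and `(2, 5, 1)`), the assertion would fire and print both shapes, pointing directly at the offending batch.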

I appreciate your detailed advice. Thanks. Best wishes