Transformer for audio classification unpack error

Hi. I’m using a transformer for audio classification, but there is a problem in the multi_head_attention_forward function:

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in multi_head_attention_forward(query, key, value, embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias, bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight, out_proj_bias, training, key_padding_mask, need_weights, attn_mask, use_separate_proj_weight, q_proj_weight, k_proj_weight, v_proj_weight, static_k, static_v)
4132 q_proj_weight=q_proj_weight, k_proj_weight=k_proj_weight,
4133 v_proj_weight=v_proj_weight, static_k=static_k, static_v=static_v)
-> 4134 tgt_len, bsz, embed_dim = query.size()
4135 assert embed_dim == embed_dim_to_check
4136 # allow MHA to have different sizes for the feature dimension

ValueError: too many values to unpack (expected 3)

Please help.

The nn.MultiheadAttention module returns attn_output and attn_output_weights as described in the docs. However, based on the traceback, the failure happens inside multi_head_attention_forward at `tgt_len, bsz, embed_dim = query.size()`, which means the query tensor you are passing in has more than three dimensions. By default the module expects 3-D inputs of shape (seq_len, batch_size, embed_dim), so you would need to reshape or flatten your input (e.g. a 4-D spectrogram batch) before feeding it to the attention layer.
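As a minimal sketch (all dimension values here are made up for illustration), this is the input shape the default, non-batch_first nn.MultiheadAttention expects, and the kind of input that triggers the unpack error:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration
seq_len, batch_size, embed_dim, num_heads = 10, 4, 64, 8

mha = nn.MultiheadAttention(embed_dim, num_heads)

# The default nn.MultiheadAttention expects 3-D inputs shaped
# (seq_len, batch_size, embed_dim)
x = torch.randn(seq_len, batch_size, embed_dim)

attn_output, attn_output_weights = mha(x, x, x)
print(attn_output.shape)          # torch.Size([10, 4, 64])
print(attn_output_weights.shape)  # torch.Size([4, 10, 10])

# A 4-D input (e.g. an unflattened spectrogram batch) fails inside
# multi_head_attention_forward when query.size() is unpacked into
# three values; the exact exception type varies by PyTorch version.
x_bad = torch.randn(batch_size, 1, seq_len, embed_dim)
try:
    mha(x_bad, x_bad, x_bad)
except Exception as e:
    print(type(e).__name__, e)
```

So the fix is usually to reshape the input, e.g. `x.flatten(1, 2)` or a permute, so that exactly three dimensions reach the attention layer.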
