Only support when num_heads is even in transformer

This is a very strange error I’m getting:

Exception has occurred: RuntimeError
Only support when num_heads is even in transformer

It’s being raised from my forward method even though I’m not doing inference (just training):

```python
def forward(self, indices, mask):
    x = self.embed(indices)
    x = self.encoder(x, src_key_padding_mask=mask)
```

I have the number of heads set to 1 and it’s never caused an issue in the past. Same code works when nhead=2.
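In case it helps, here is a stripped-down sketch of the setup (the sizes and the batch_first layout are placeholders, not my real hyper-parameters):

```python
import torch
import torch.nn as nn

# Placeholder sizes, not my real hyper-parameters.
vocab_size, d_model = 100, 64

embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=1, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

indices = torch.randint(0, vocab_size, (4, 10))      # (batch, seq)
mask = torch.zeros(4, 10, dtype=torch.bool)          # True = padded position
mask[:, 7:] = True

# This is the call that raises the error for me with nhead=1 (fine with nhead=2).
x = encoder(embed(indices), src_key_padding_mask=mask)
```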

Edit: I have verified that this behavior changed going from v1.11 to v1.12. Is this change documented somewhere? I can’t find it.

I ran into the same error and solved it by installing a newer PyTorch. Specifically, 1.12 doesn’t work but 1.13 works in my case. Hope that helps.


I hit the same error when using v2.0. Unfortunately, my setup strictly requires num_heads = 1.

  • It is strange: when nn.Module.training is set to True (i.e. while I’m training my model), no error occurs.
  • When I call .eval() to evaluate the model with dropout disabled, the error comes back: RuntimeError: Only support when num_heads is even in transformer.
  • Even more strangely, when I leave out the src_key_padding_mask input, the error disappears and the model runs. But for an attention model I really need the mask!
```python
# This avoids the error, but it is not a wise choice.
if self.training:
    encode_X = self.transformer_encoder(X, src_key_padding_mask=mask)
else:
    encode_X = self.transformer_encoder(X)
return encode_X
```
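Since the error only shows up once the encoder leaves training mode, another sketch I’ve been trying (not verified across versions; the toy model below is just a placeholder, adapt the call to your own module) is to flip only the transformer modules’ training flag back on after .eval(), so the fused fast path is skipped while the Dropout layers stay in eval mode:

```python
import torch
import torch.nn as nn

def force_slow_transformer_path(model: nn.Module) -> None:
    """Untested sketch: after .eval(), flip only the transformer modules back to
    'training' so the fused fast path (where the even-num_heads check appears to
    live) is skipped. Their nn.Dropout children keep training=False, so the
    outputs stay deterministic."""
    for mod in model.modules():
        if isinstance(mod, (nn.TransformerEncoder, nn.TransformerEncoderLayer)):
            mod.training = True  # plain attribute write; does not recurse into children

# Toy setup just to show the call pattern (sizes are placeholders).
layer = nn.TransformerEncoderLayer(d_model=64, nhead=1, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

X = torch.randn(4, 10, 64)                    # (batch, seq, d_model)
mask = torch.zeros(4, 10, dtype=torch.bool)   # True = padded position
mask[:, 7:] = True

encoder.eval()
force_slow_transformer_path(encoder)
with torch.no_grad():
    encode_X = encoder(X, src_key_padding_mask=mask)
```

This keeps the padding mask at eval time, but I haven’t checked it on every version, so treat it as a stopgap rather than a proper fix.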

Did you solve this problem? Can you share your workaround?

I encountered a similar problem, and I suspect it is fixed in v2.1.1, since my script only raises a warning there.

Edit: I can test it if you have a snippet.