Transformer model IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Hi.

I’m trying to use the Transformer class as described in Transformer — PyTorch 1.10.1 documentation.

I have pairs of embeddings of 512 and 256 dims. My problem is that when doing this for a given pair:

transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
a = torch.rand(512)  # embedding of 512 dims, shape [512]
b = torch.rand(256)  # embedding of 256 dims, shape [256]
out = transformer_model(a, b)

I get the following:

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

I have seen some related issues, but I don’t know how to fix it in this case. How could I solve this dimension problem?

When using just the encoder, I face another issue:

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
a = torch.rand(512)  # embedding of 512 dims, shape [512]
transformer_encoder(a)

I get:

ValueError: not enough values to unpack (expected 3, got 1)

How should I reshape the tensor in this second case? Is something like v = torch.reshape(a, (1, 1, 512)) the right approach? Also, could I directly project the output to 256 dims?

Thanks.

As given in the linked docs, the shapes for src and tgt are expected as:

  • src: [S, N, E], or [N, S, E] if batch_first=True.
  • tgt: [T, N, E], or [N, T, E] if batch_first=True.

while a and b each have only a single dimension (512 and 256 elements, respectively), which raises the indexing error.
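As a minimal sketch of the expected shapes (the sequence lengths and batch size below are made-up values for illustration): both src and tgt must have the feature dim equal to d_model (512 by default), so a 256-dim embedding would first need to be projected to 512 before it can be used as tgt. For the encoder-only case, reshaping to [1, 1, 512] as you suggested works, and a plain nn.Linear can project the output down to 256 dims afterwards:

```python
import torch
import torch.nn as nn

# Full transformer: src is [S, N, E], tgt is [T, N, E] with batch_first=False (default).
transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)  # d_model defaults to 512
src = torch.rand(10, 32, 512)  # S=10 source steps, N=32 batch, E=512 features
tgt = torch.rand(20, 32, 512)  # T=20 target steps; E must also equal d_model
out = transformer_model(src, tgt)
print(out.shape)  # torch.Size([20, 32, 512]) -- output follows the tgt shape

# Encoder only: reshape the single 512-dim embedding to [S=1, N=1, E=512],
# then project the result to 256 dims with a linear layer.
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
proj = nn.Linear(512, 256)  # hypothetical projection head, not part of nn.Transformer
a = torch.rand(512).reshape(1, 1, 512)
enc_out = proj(transformer_encoder(a))
print(enc_out.shape)  # torch.Size([1, 1, 256])
```

Note that nn.Transformer itself cannot take a 256-dim tgt directly; either project b up to d_model first, or build encoder/decoder stacks with matching d_model values.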
