I am using the transformer example from https://pytorch.org/tutorials/beginner/transformer_tutorial.html
I noticed that if 'd_model' in the 'PositionalEncoding' class is an odd number, it throws an error:
'*** RuntimeError: The expanded size of the tensor (31) must match the existing size (32) at non-singleton dimension 1. Target sizes: [5000, 31]. Tensor sizes: [5000, 32]'
So, are we supposed to use only an even 'd_model', or is this an issue with the function?
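For reference, my guess at the cause: 'div_term' is built from 'torch.arange(0, d_model, 2)', which has ceil(d_model/2) entries, while the cosine slice 'pe[:, 1::2]' only has floor(d_model/2) columns, so the two sizes disagree whenever 'd_model' is odd (e.g. 32 vs. 31 for d_model=63, matching the error above). A minimal sketch of a workaround, assuming a standalone 'positional_encoding' helper rather than the tutorial's exact class, is to trim 'div_term' for the cosine assignment:

```python
import math
import torch

def positional_encoding(max_len: int, d_model: int) -> torch.Tensor:
    """Sinusoidal positional encoding that also works for odd d_model.
    (Hypothetical helper, not part of the tutorial code.)"""
    position = torch.arange(max_len).unsqueeze(1)  # [max_len, 1]
    # ceil(d_model / 2) frequencies, one per even index 0, 2, 4, ...
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(max_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)  # ceil(d_model/2) columns
    # The odd-index slice has floor(d_model/2) columns, so trim div_term to match;
    # for even d_model the slice is a no-op (d_model // 2 == len(div_term)).
    pe[:, 1::2] = torch.cos(position * div_term[: d_model // 2])
    return pe
```

With this change both even and odd values work, e.g. 'positional_encoding(5000, 63)' returns a [5000, 63] tensor instead of raising the size-mismatch error.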