Positional Encoding

Hi everyone. I implemented the positional encoding class just like in the PyTorch tutorial:

import math

import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):

    def __init__(self, d_model, max_len, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)

        # precompute the sinusoidal table, shape [max_len, 1, d_model]
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, 1, d_model)
        pe[:, 0, 0::2] = torch.sin(position * div_term)
        pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # x: [seq_len, batch_size, embedding_dim]
        x = x + self.pe[:x.size(0)]
        return self.dropout(x)

Then I run this:

rand = torch.randint(10, (BATCH_SIZE, 20))
emb = PositionalEncoding(EMBED_LEN, MAX_SEQ_LEN)
out = emb(rand)
print(out.shape)

I expect the positional encoding output here, but what I end up with is:

The size of tensor a (20) must match the size of tensor b (768) at non-singleton dimension 2

As in the paper, don't I need to feed the input directly into the positional encoder? Is this error expected? What's the best way to test whether this module works as expected?

Here is the tutorial I believe you are referring to:

https://pytorch.org/tutorials/beginner/transformer_tutorial.html

Some important context from that tutorial's forward() docstring:

Args:
    x: Tensor, shape [seq_len, batch_size, embedding_dim]

Note that batch_size is on dim=1 and there is an embedding dimension. Yet the example you provided has batch_size on dim=0 and no embedding dimension at all: rand holds raw token indices of shape [BATCH_SIZE, 20], so when forward tries to broadcast it against pe (whose last dimension is d_model = 768) it fails at dimension 2, which is exactly the error you see. You need to pass the indices through an nn.Embedding layer first and arrange the result as [seq_len, batch_size, embedding_dim], as in the sketch below.
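A quick shape test could look something like this (a sketch only: it reuses your PositionalEncoding class, EMBED_LEN = 768 is taken from your error message, and BATCH_SIZE / SEQ_LEN / MAX_SEQ_LEN / VOCAB_SIZE are placeholder values I picked for the example):

import torch
import torch.nn as nn

BATCH_SIZE, SEQ_LEN, EMBED_LEN, MAX_SEQ_LEN, VOCAB_SIZE = 4, 20, 768, 5000, 10

token_ids = torch.randint(VOCAB_SIZE, (BATCH_SIZE, SEQ_LEN))   # raw indices, [batch, seq]
embedding = nn.Embedding(VOCAB_SIZE, EMBED_LEN)
pos_enc = PositionalEncoding(EMBED_LEN, MAX_SEQ_LEN)
pos_enc.eval()                                                 # disable dropout so the check below is exact

emb = embedding(token_ids).transpose(0, 1)                     # [batch, seq, d_model] -> [seq, batch, d_model]
out = pos_enc(emb)

print(out.shape)                                               # torch.Size([20, 4, 768])
assert out.shape == (SEQ_LEN, BATCH_SIZE, EMBED_LEN)
# every sequence in the batch should receive the same additive offset, namely pe[:SEQ_LEN]
assert torch.allclose(out - emb, pos_enc.pe[:SEQ_LEN])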

Hi, I implemented this exact thing with batch_size on dim 0 recently. Here is the code; it may be helpful.

For the forward method:

return self.dropout(token_embedding + self.pos_encoding[:, :token_embedding.shape[1]])
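The surrounding class would look something like this (a sketch: the buffer is registered as [1, max_len, d_model] so it broadcasts over batch-first inputs of shape [batch_size, seq_len, d_model]; the names besides the forward line above are mine):

import math

import torch
import torch.nn as nn


class PositionalEncodingBatchFirst(nn.Module):
    """Sinusoidal positional encoding for inputs shaped [batch_size, seq_len, d_model]."""

    def __init__(self, d_model, max_len=5000, dropout=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)

        position = torch.arange(max_len).unsqueeze(1)                    # [max_len, 1]
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pos_encoding = torch.zeros(1, max_len, d_model)                  # dim 0 is the batch dim
        pos_encoding[0, :, 0::2] = torch.sin(position * div_term)
        pos_encoding[0, :, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pos_encoding', pos_encoding)

    def forward(self, token_embedding):
        # token_embedding: [batch_size, seq_len, d_model]
        return self.dropout(token_embedding + self.pos_encoding[:, :token_embedding.shape[1]])

With that layout you can feed the embedding output in directly, e.g. out = pos_enc(embedding(token_ids)) with token_ids of shape [batch_size, seq_len], and no transpose is needed.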