Embedding in the NLP tutorial

I am trying to follow the third NLP tutorial here (the seq2seq translation tutorial). I’m having trouble understanding the relationship between the input and hidden feature sizes in the encoder module.

The relevant code is:


class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.hidden_size = hidden_size

        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        output = embedded
        output, hidden = self.gru(output, hidden)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, 1, self.hidden_size, device=device)

Won’t the shape of embedded be (1, 1, input_size * hidden_size)? If so, how can this be fed into a GRU expecting an input of shape (1, 1, hidden_size)?
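
To check my reasoning, I ran this small standalone experiment (my own sketch, not code from the tutorial), using the same sizes as in my test call below:


import torch
import torch.nn as nn

# toy sizes matching my example: vocabulary of 5 tokens, hidden_size of 2
embedding = nn.Embedding(5, 2)

# a "sentence" of five token indices
tokens = torch.tensor([[1, 2, 3, 4, 0]])

embedded = embedding(tokens)         # shape (1, 5, 2): one 2-dim vector per token
flattened = embedded.view(1, 1, -1)  # shape (1, 1, 10): all five vectors concatenated
print(embedded.shape, flattened.shape)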

I’m trying something like:


model = EncoderRNN(input_size=5, hidden_size=2)
model(input=torch.tensor([[[1,2,3,4,0]]]), 
      hidden=torch.tensor([[[0,0,0,0,0,0]]]))

but I get RuntimeError: input.size(-1) must be equal to input_size. Expected 2, got 10, which is exactly the mismatch I expected.

So I am probably not calling the model correctly. What am I missing?
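
For what it’s worth, my current guess (based only on the error, not on anything stated in the part of the tutorial I quoted) is that the encoder is meant to be called once per token, with a float hidden state of shape (1, 1, hidden_size), roughly like this:


import torch

# my guess at a per-token loop, reusing the EncoderRNN class quoted above
model = EncoderRNN(input_size=5, hidden_size=2)
hidden = torch.zeros(1, 1, 2)  # (1, 1, hidden_size), like initHidden() but without the tutorial's global device

sentence = torch.tensor([1, 2, 3, 4, 0])
for token in sentence:
    # one token index at a time, so embedded comes out as (1, 1, hidden_size)
    output, hidden = model(token, hidden)

Is that the intended usage, or am I misreading how the embedding layer interacts with the GRU?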