Looking at the AttnDecoderRNN class from the Seq2Seq tutorial (http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html), we have the embedding layer, which is initialized as
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
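For reference, nn.Embedding(num_embeddings, embedding_dim) is a lookup table whose first argument is the number of distinct token indices it can embed, and whose second is the size of each embedding vector. A minimal sketch, with toy sizes I made up (the tutorial derives the real ones from the data):

import torch
import torch.nn as nn

# Toy sizes for illustration only.
output_size, hidden_size = 10, 4

# A lookup table with output_size rows (one per token index),
# each row a vector of length hidden_size.
embedding = nn.Embedding(output_size, hidden_size)

# Any index passed in must lie in [0, output_size - 1].
token = torch.tensor([3])
print(embedding(token).shape)  # torch.Size([1, 4])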
The embedding is then used in the decoder's forward pass, where it is fed the input (i.e. the input Variable passed to the decoder):
def forward(self, input, hidden, encoder_outputs):
    embedded = self.embedding(input).view(1, 1, -1)
    embedded = self.dropout(embedded)
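For concreteness, here is a minimal sketch of the shape flow in that snippet, again with made-up toy sizes (the tutorial wraps the index in a Variable, but on recent PyTorch a plain tensor works the same way):

import torch
import torch.nn as nn

hidden_size = 4                              # toy value
embedding = nn.Embedding(10, hidden_size)    # 10 = toy vocabulary size
dropout = nn.Dropout(0.1)

input = torch.tensor([[3]])                  # a single token index, shape (1, 1)
embedded = embedding(input).view(1, 1, -1)   # reshaped to (1, 1, hidden_size)
embedded = dropout(embedded)
print(embedded.shape)                        # torch.Size([1, 1, 4])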
My question is: the tensor we feed to self.embedding is the decoder's input, so why is the first argument (the vocabulary size) that we pass when initializing self.embedding set to self.output_size?