Question regarding the Seq2Seq example

Hello everybody:

First, I want to give you a thumbs up, because PyTorch rocks!

Now to my question (please be forgiving, I am a total PyTorch newbie):
I want to adapt the seq2seq example to a time series prediction model.

I traced through the code, and in the AttnDecoderRNN class the forward function (attached to this message) receives an encoder_output argument as input but never uses it in any way.
Is there a reason for this?

Thanks in advance,
Florian

    def forward(self, input, hidden, encoder_output, encoder_outputs):
        # NOTE: encoder_output is accepted here but never used below
        embedded = self.embedding(input).view(1, 1, -1)
        embedded = self.dropout(embedded)

        # attention weights from the current input embedding and hidden state
        attn_weights = F.softmax(
            self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
        attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                                 encoder_outputs.unsqueeze(0))

        # combine the embedded input with the attention context
        output = torch.cat((embedded[0], attn_applied[0]), 1)
        output = self.attn_combine(output).unsqueeze(0)

        for i in range(self.n_layers):
            output = F.relu(output)
            output, hidden = self.gru(output, hidden)

        output = F.log_softmax(self.out(output[0]), dim=1)
        return output, hidden, attn_weights

Seems weird. @spro would know.

That’s left over from an old implementation; there’s a more up-to-date version here: https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb
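For reference, here is a minimal self-contained sketch of the same decoder with the unused encoder_output argument dropped. Layer names and the overall structure follow the tutorial class quoted above; the sizes in the example call are illustrative, and the linked notebook may differ in details:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttnDecoderRNN(nn.Module):
        def __init__(self, hidden_size, output_size, n_layers=1,
                     dropout_p=0.1, max_length=10):
            super(AttnDecoderRNN, self).__init__()
            self.n_layers = n_layers
            # layer names follow the tutorial class
            self.embedding = nn.Embedding(output_size, hidden_size)
            self.dropout = nn.Dropout(dropout_p)
            self.attn = nn.Linear(hidden_size * 2, max_length)
            self.attn_combine = nn.Linear(hidden_size * 2, hidden_size)
            self.gru = nn.GRU(hidden_size, hidden_size)
            self.out = nn.Linear(hidden_size, output_size)

        def forward(self, input, hidden, encoder_outputs):
            # same logic as the quoted version, minus the dead argument
            embedded = self.dropout(self.embedding(input).view(1, 1, -1))
            attn_weights = F.softmax(
                self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)
            attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                                     encoder_outputs.unsqueeze(0))
            output = self.attn_combine(
                torch.cat((embedded[0], attn_applied[0]), 1)).unsqueeze(0)
            for _ in range(self.n_layers):
                output = F.relu(output)
                output, hidden = self.gru(output, hidden)
            output = F.log_softmax(self.out(output[0]), dim=1)
            return output, hidden, attn_weights

    # example call: one decoder step over a 10-step encoded sequence
    decoder = AttnDecoderRNN(hidden_size=256, output_size=1000, max_length=10)
    token = torch.tensor([[0]])         # previous output token index
    hidden = torch.zeros(1, 1, 256)     # initial decoder hidden state
    enc_outputs = torch.zeros(10, 256)  # stacked encoder outputs
    out, hidden, weights = decoder(token, hidden, enc_outputs)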

Thank you, that helps!