Possible bug in seq2seq tutorial?

Hi, I was following the Seq2Seq tutorial from here:
https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html

While going through the forward() method of AttnDecoderRNN, I noticed this:
attn_weights = F.softmax(
    self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1)

Shouldn’t the attn_weights be computed using the encoder_outputs? The weights should depend on how well the current decoder hidden state matches each of the encoder states, but here they are computed only from the embedded input and the decoder hidden state. Did I miss something?
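
To make clear what I expected instead, here is a minimal sketch (not the tutorial's code) of dot-product / Luong-style attention, where each encoder output is scored against the current decoder hidden state and the scores are softmaxed over the source positions. The function name and tensor shapes are my own assumptions:

import torch
import torch.nn.functional as F

# Assumed shapes (hypothetical, not from the tutorial):
#   hidden:          (1, batch, hidden_size)       current decoder hidden state
#   encoder_outputs: (seq_len, batch, hidden_size) all encoder states
def dot_attention(hidden, encoder_outputs):
    # Similarity of each encoder state to the decoder state -> (seq_len, batch)
    scores = torch.sum(encoder_outputs * hidden, dim=2)
    # Normalize over the source sequence dimension
    attn_weights = F.softmax(scores, dim=0)                      # (seq_len, batch)
    # Weighted sum of encoder outputs -> context vector (1, batch, hidden_size)
    context = torch.sum(attn_weights.unsqueeze(2) * encoder_outputs,
                        dim=0, keepdim=True)
    return attn_weights, context

In the tutorial's version, the weights are a learned function of the input embedding and hidden state alone, so I'm wondering whether that is intentional or a bug.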