In the seq2seq tutorial, I can't figure out the attention code. Can someone explain what the attention in AttnDecoderRNN is doing?
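For context, here is my understanding of that step: the tutorial's `AttnDecoderRNN` concatenates the embedded input token with the decoder hidden state, passes it through a linear layer (`self.attn`) to get one score per source position, softmaxes the scores into attention weights, and then takes a weighted sum over the encoder outputs with `torch.bmm`. A minimal NumPy sketch of just that computation, with toy sizes and random weights standing in for the learned `nn.Linear` (names follow the tutorial, but this is a simplification, not the tutorial's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, max_length = 4, 5  # toy sizes; the tutorial uses 256 and MAX_LENGTH

# Stand-ins for the tutorial's tensors (batch of 1, squeezed to vectors):
embedded = rng.standard_normal(hidden_size)           # embedding of the current input token
hidden = rng.standard_normal(hidden_size)             # previous decoder hidden state
encoder_outputs = rng.standard_normal((max_length, hidden_size))

# self.attn = nn.Linear(hidden_size * 2, max_length):
# one raw score per source position, from [embedded; hidden]
W_attn = rng.standard_normal((max_length, 2 * hidden_size))
scores = W_attn @ np.concatenate([embedded, hidden])

# F.softmax(...): normalize the scores into attention weights
attn_weights = np.exp(scores - scores.max())
attn_weights /= attn_weights.sum()

# torch.bmm(attn_weights, encoder_outputs): weighted sum of encoder outputs
attn_applied = attn_weights @ encoder_outputs         # shape (hidden_size,)
```

So `attn_weights` is a probability distribution over the `max_length` source positions, and `attn_applied` is the context vector that gets concatenated with the embedding and fed through `attn_combine` into the GRU.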