Question about the Seq2Seq Tutorial

In http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#sphx-glr-intermediate-seq2seq-translation-tutorial-py

def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):
    ...
    for ei in range(input_length):
        encoder_output, encoder_hidden = encoder(input_variable[ei],
                                                 encoder_hidden)
        encoder_outputs[ei] = encoder_outputs[ei] + encoder_output[0][0]

def train(input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
    ...
    for ei in range(input_length):
        encoder_output, encoder_hidden = encoder(
            input_variable[ei], encoder_hidden)
        encoder_outputs[ei] = encoder_output[0][0]

In evaluate() the tutorial writes encoder_outputs[ei] = encoder_outputs[ei] + encoder_output[0][0], which differs from the plain assignment used in train(). I don't understand why. Can anyone help me?
Thank you very much. :grinning:
Zhou Xiao

These are exactly the same, since encoder_outputs is initialized with zeros: adding a value to a zero entry is the same as assigning it.
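A minimal sketch of this equivalence, using NumPy arrays as stand-ins for the tutorial's encoder outputs (the shapes and the random values are illustrative, not the tutorial's actual model):

```python
import numpy as np

# Hypothetical sizes standing in for the tutorial's MAX_LENGTH and hidden_size.
max_length = 3
hidden_size = 4

# Both buffers start as all zeros, as encoder_outputs does in the tutorial.
outputs_accumulate = np.zeros((max_length, hidden_size))
outputs_assign = np.zeros((max_length, hidden_size))

# Random stand-ins for the per-step value encoder_output[0][0].
rng = np.random.default_rng(0)
steps = rng.standard_normal((max_length, hidden_size))

for ei in range(max_length):
    # evaluate() style: accumulate into the zero-initialized row.
    outputs_accumulate[ei] = outputs_accumulate[ei] + steps[ei]
    # train() style: plain assignment.
    outputs_assign[ei] = steps[ei]

# Since each row is written exactly once and started at zero,
# x + 0 == x and the two buffers end up identical.
assert np.array_equal(outputs_accumulate, outputs_assign)
```

The two forms would only diverge if the same index ei were written more than once, which never happens in either loop.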
