In the seq2seq tutorial here, the source sentence is fed into the encoder one token at a time using a for loop:
for ei in range(input_length):
    encoder_output, encoder_hidden = encoder(input_variable[ei], encoder_hidden)
    encoder_outputs[ei] = encoder_output
This is different from other tutorials, like here, where the whole sentence is fed into the encoder in a single call to get the encoding.
Are these two approaches equivalent, i.e. do they produce identical encodings? Thanks!
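For context, here is a minimal self-contained sketch of the two approaches I mean, using a plain single-layer `nn.GRU` as a stand-in for the tutorial's encoder (the names and shapes are placeholders, not the tutorial's exact code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

hidden_size = 8
seq_len = 5
# Stand-in encoder: a plain single-layer unidirectional GRU (no dropout)
gru = nn.GRU(input_size=hidden_size, hidden_size=hidden_size)

# Fake embedded input sentence: (seq_len, batch=1, hidden_size)
inputs = torch.randn(seq_len, 1, hidden_size)

# Approach 1: feed one token at a time in a loop, threading the hidden state
hidden = torch.zeros(1, 1, hidden_size)
step_outputs = []
for ei in range(seq_len):
    out, hidden = gru(inputs[ei].unsqueeze(0), hidden)  # one time step
    step_outputs.append(out.squeeze(0))
loop_outputs = torch.stack(step_outputs)  # (seq_len, 1, hidden_size)

# Approach 2: feed the whole sequence in a single call
full_outputs, full_hidden = gru(inputs, torch.zeros(1, 1, hidden_size))

# Same recurrence either way, so the encodings should match up to float tolerance
print(torch.allclose(loop_outputs, full_outputs, atol=1e-5))  # should print True
```

At least in this sketch the two give the same per-step outputs and final hidden state, since feeding the whole sequence just runs the same recurrence internally.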