Why EOS token in Encoder input?

Hi,

In the encoder-decoder sequence-to-sequence model, why does there have to be an EOS token in the encoder input?

For the decoder, the SOS token is important in the autoregressive formulation, since you take a token plus the context to predict the next token, and EOS is required to know when to stop. But the encoder doesn’t require an EOS, right?
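To make that concrete, here is a minimal sketch of greedy decoding; the `decoder` interface and the `SOS_token` / `EOS_token` ids are assumptions, loosely following the tutorial's setup. SOS seeds the first step, and hitting EOS is what tells the loop to stop:

```python
import torch

def greedy_decode(decoder, encoder_hidden, SOS_token=0, EOS_token=1, max_len=20):
    # Assumed decoder signature: decoder(input_ids, hidden) -> (logits, hidden)
    decoder_input = torch.tensor([[SOS_token]])    # SOS starts the autoregressive loop
    decoder_hidden = encoder_hidden                # context handed over from the encoder
    output_ids = []
    for _ in range(max_len):
        logits, decoder_hidden = decoder(decoder_input, decoder_hidden)
        next_id = logits.argmax(dim=-1).item()     # pick the most likely next token
        if next_id == EOS_token:                   # EOS is the stop signal for generation
            break
        output_ids.append(next_id)
        decoder_input = torch.tensor([[next_id]])  # feed the prediction back in
    return output_ids
```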

The tutorial here says you need an EOS token in both the encoder and decoder inputs: https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html


Just replying to give the question more visibility, since I had the same question. I found this post online, which says:

[…] Terminating the input in an end-of-sequence (EOS) token signals to the encoder that when it receives that input, the output needs to be the finalized embedding. […]

But this doesn’t make sense to me. Once the last word has been processed, the resulting hidden state is already the final one. I cannot see any need to “signal the encoder to finalize the embedding” (i.e., the hidden state).
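To illustrate the point, here is a minimal sketch; the GRU encoder, the token ids, and the EOS id of 1 are assumptions that roughly mirror the tutorial’s EncoderRNN. The “finalized embedding” is simply whatever hidden state comes out after the last token, and appending EOS just runs one extra step:

```python
import torch
import torch.nn as nn

# Assumed toy encoder: an embedding layer followed by a single-layer GRU.
embedding = nn.Embedding(num_embeddings=10, embedding_dim=8)
gru = nn.GRU(input_size=8, hidden_size=8, batch_first=True)

tokens = torch.tensor([[3, 5, 7]])         # some input ids, no EOS appended
tokens_eos = torch.tensor([[3, 5, 7, 1]])  # same ids with an assumed EOS id (1) appended

_, h_no_eos = gru(embedding(tokens))       # hidden state after the last word
_, h_eos = gru(embedding(tokens_eos))      # hidden state after the extra EOS step

# Both are final hidden states of shape (1, 1, 8); the EOS variant is just
# the result of running the GRU for one more step, nothing is "finalized".
print(h_no_eos.shape, h_eos.shape)
```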

Have you compared the results with and without adding EOS at the input sequences?


I have the same question: why do we need EOS and SOS (BOS)? Have you figured it out?

I still don’t see the need for EOS in the case of the encoder, so I don’t use it 🙂

I think it depends on the implementation; some need EOS and some don’t, but to be safe you can just add it to the end of the sentences.
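If you do want to follow the tutorial’s convention, the “safe” version is just appending the EOS id when you turn a sentence’s indexes into a tensor. A minimal sketch, where the index list and `EOS_token = 1` are assumptions:

```python
import torch

EOS_token = 1  # assumed EOS id

def tensor_from_indexes(indexes, add_eos=True):
    # Optionally append EOS before building the encoder input tensor.
    if add_eos:
        indexes = indexes + [EOS_token]
    return torch.tensor(indexes, dtype=torch.long).view(-1, 1)

print(tensor_from_indexes([12, 4, 9]))         # with EOS appended
print(tensor_from_indexes([12, 4, 9], False))  # without EOS
```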

But why, though? I can see that it probably won’t do any harm, but why would it be truly needed for the encoder?