Text generation - input best practice

I am currently working on a project: I want to generate text with an LSTM using PyTorch. My model is working, but I have a question about the methodology:

I’m using the BPTTIterator and something seems odd to me: you give it a single example containing your entire text, and it then feeds your network the current word and the target word at each step. Done this way, I’m not using SOS and EOS tokens at all.
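
To make sure I understand it correctly, here is roughly what that slicing looks like if I redo it with plain PyTorch tensors (the token ids and window length below are just made up for illustration):

```python
import torch

ids = torch.arange(20)   # pretend these are the word indices of the whole corpus
bptt_len = 5             # length of each backprop-through-time window

for i in range(0, ids.size(0) - 1, bptt_len):
    seq_len = min(bptt_len, ids.size(0) - 1 - i)
    inputs  = ids[i : i + seq_len]            # "current word" at each step
    targets = ids[i + 1 : i + 1 + seq_len]    # "target word" = the next word
    print(inputs.tolist(), "->", targets.tolist())
```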

I’m wondering what impact this has on my model. Will anything change if I don’t use those tokens? I know the model won’t be able to end a sentence on its own, but will it hurt performance? What is the best practice? For instance, I could append an EOS token to every sentence before concatenating the corpus, as in the sketch below, but I’m not sure whether that is needed.
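
A minimal sketch of what I mean (the sentences and the `<eos>` string are placeholders I made up):

```python
sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]
EOS = "<eos>"

tokens = []
for sent in sentences:
    tokens.extend(sent)
    tokens.append(EOS)   # the model could then learn to predict the end of a sentence

print(tokens)
# ['the', 'cat', 'sat', '<eos>', 'the', 'dog', 'ran', '<eos>']
```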

Thanks