Passing a whole sentence or word by word to an LSTM?

I am sorry if this is a stupid question, but I have a seq-to-seq model that predicts a sentence word by word.
So far I am feeding each word token to the LSTM one at a time (in a loop, yes, please don’t hate me yet).
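For context, here's a simplified version of what I'm doing now (the names, sizes, and data are made up for illustration):

```python
import torch
import torch.nn as nn

# Toy setup -- vocabulary size, embedding dim, and hidden size are arbitrary.
embedding = nn.Embedding(num_embeddings=1000, embedding_dim=32)
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

sentence = torch.tensor([[4, 17, 9, 2]])  # (batch=1, seq_len=4) token ids
hidden = None
outputs = []
for t in range(sentence.size(1)):
    # Feed one token at a time, carrying the hidden state forward.
    emb = embedding(sentence[:, t].unsqueeze(1))  # (1, 1, 32)
    out, hidden = lstm(emb, hidden)               # out: (1, 1, 64)
    outputs.append(out)
outputs = torch.cat(outputs, dim=1)               # (1, 4, 64)
```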
I am trying to figure out how to use pack_padded_sequence (https://gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e), but I think I got confused: do I then feed the whole sentence into the LSTM in a single call in this case?
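From that gist, my (possibly wrong) understanding is that you embed the whole padded batch and call the LSTM once, something like this (toy data, made-up sizes):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

embedding = nn.Embedding(1000, 32, padding_idx=0)
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

# Padded batch (batch=2, max_len=4) with true lengths 4 and 2.
batch = torch.tensor([[4, 17, 9, 2],
                      [5, 3, 0, 0]])
lengths = torch.tensor([4, 2])

emb = embedding(batch)  # (2, 4, 32)
packed = pack_padded_sequence(emb, lengths, batch_first=True,
                              enforce_sorted=False)
packed_out, (h_n, c_n) = lstm(packed)  # one call over whole sequences
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
# out: (2, 4, 64), with padded positions zeroed out
```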
And how do you then do teacher forcing vs. actually generating predictions (or just no teacher forcing)?
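My rough mental model (please correct me if I'm off): with teacher forcing you can feed the whole ground-truth target sequence in one packed call, since every input token is known in advance; without it, each step consumes the previous prediction, so you're back to a loop. Something like the sketch below, where `decoder_step` is a made-up stand-in for one decoder step:

```python
import torch

def generate(decoder_step, sos_token, hidden, max_len):
    # decoder_step is hypothetical: (prev_token, hidden) -> (logits, hidden).
    token = sos_token
    predictions = []
    for _ in range(max_len):
        logits, hidden = decoder_step(token, hidden)
        token = logits.argmax(dim=-1)  # feed own prediction back in
        predictions.append(token)
    return predictions
```

Is that the right way to think about it?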
Thanks so much!