Question about backpropagation

Non-teacher forcing:
for _ in range(len(transSent)):
    (transVocabScores, hidden) = decoder(word, hidden)
    output, index = torch.max(transVocabScores, 1)
    word = index

When I backpropagate here, will the framework take care of backpropagating through all of the previous predictions (as well as through the hidden states)?

Teacher forcing:
for i in range(len(transSent)):
    (transVocabScores, hidden) = decoder(word, hidden)
    output, index = torch.max(transVocabScores, 1)
    word = transSent_prepared[i]

The alternative is teacher forcing, i.e. feeding in the known outputs: here the source of 'word' is not a previous prediction but the true target token itself.
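To make the two loops concrete, here is a self-contained sketch of both modes. The toy GRU decoder, the vocabulary/hidden sizes, and the target tensor are all my own stand-ins for the poster's `decoder`, `transSent`, and `transSent_prepared`, chosen only so the snippet runs. Note the comment on the non-teacher-forcing branch: `torch.max` is a discrete argmax, so no gradient flows through the fed-back word itself; gradients still flow through the recurrent hidden state.

```python
import torch
import torch.nn as nn

vocab_size, hidden_size = 10, 8

# Toy decoder (an assumption, not the poster's model): embed -> GRUCell -> linear.
embed = nn.Embedding(vocab_size, hidden_size)
gru = nn.GRUCell(hidden_size, hidden_size)
out_proj = nn.Linear(hidden_size, vocab_size)

def decoder(word, hidden):
    # word: LongTensor of shape (batch,); hidden: (batch, hidden_size)
    hidden = gru(embed(word), hidden)
    return out_proj(hidden), hidden

target = torch.tensor([3, 1, 4, 1])   # stand-in for transSent_prepared
word = torch.tensor([0])              # assumed <sos> token id
hidden = torch.zeros(1, hidden_size)
loss_fn = nn.CrossEntropyLoss()

teacher_forcing = False
loss = torch.zeros(())
for i in range(len(target)):
    scores, hidden = decoder(word, hidden)
    loss = loss + loss_fn(scores, target[i:i + 1])
    if teacher_forcing:
        word = target[i:i + 1]            # feed the ground-truth token
    else:
        _, word = torch.max(scores, 1)    # feed own prediction; argmax is
        # non-differentiable, so no gradient flows through this word choice,
        # only through the hidden state carried across steps
loss.backward()
```

In both modes a single `loss.backward()` at the end unrolls through every time step via the hidden state; the difference is only whether the discrete fed-back word comes from the model or from the ground truth.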