Adding time-step feature to input of sequence generator LSTM

I’m training an LSTM that uses word embeddings as both input and output, and I wanted to try adding a feature to the input that represents each word’s position in the sequence, with the intention of teaching the network to predict words that tend to appear at the end of a sentence. Does anyone know if this is likely to improve the model’s performance? If so, how would one go about it? I was thinking of just appending a value to the input tensor representing the position, scaled between 0 and 1. Would this be a good approach?
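To be concrete, here is a minimal numpy sketch of what I had in mind (the function name and shapes are just for illustration; in practice this would happen on the batch tensor before it is fed to the LSTM):

```python
import numpy as np

def add_position_feature(embeddings):
    """Append a normalized time-step feature to each word embedding.

    embeddings: array of shape (seq_len, embed_dim)
    returns:    array of shape (seq_len, embed_dim + 1), where the last
                column is the word's position scaled into [0, 1].
    """
    seq_len = embeddings.shape[0]
    # Positions 0 .. seq_len-1 scaled to [0, 1]; guard the one-word case
    # to avoid dividing by zero.
    if seq_len > 1:
        positions = np.arange(seq_len) / (seq_len - 1)
    else:
        positions = np.zeros(1)
    # Concatenate the position as one extra feature dimension.
    return np.concatenate([embeddings, positions[:, None]], axis=1)

# Example: a 5-word sentence with 8-dimensional embeddings.
x = np.random.randn(5, 8)
x_pos = add_position_feature(x)
print(x_pos.shape)        # (5, 9)
print(x_pos[:, -1])       # [0.   0.25 0.5  0.75 1.  ]
```

So each time step keeps its original embedding and gains one scalar feature saying how far through the sentence it is.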