Updating the loss function drastically slows down training

Hi,

I am implementing a sequence-to-one LSTM model. At the training step, I build the loss from several chained predictions: the first prediction is fed back in to produce the next one, that new prediction produces the one after, and so on. Each application of the model adds one term to the loss.
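In outline, the idea is something like the following self-contained sketch. The names (`Seq2One`, `multi_step_loss`), shapes, and the MSE criterion are illustrative assumptions, not my actual code; the batch is assumed to carry `loss_step + 1` extra items at the end to serve as per-step targets.

```python
import torch
import torch.nn as nn

class Seq2One(nn.Module):
    """Toy sequence-to-one model: LSTM over the window, predict one item."""
    def __init__(self, n_features, n_hidden):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_features)

    def forward(self, x):                     # x: (B, T, F)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # (B, F): prediction of the next item

def multi_step_loss(model, batch, loss_step, criterion):
    """Accumulate the loss over loss_step + 1 chained (autoregressive) predictions.

    batch: (B, T + loss_step + 1, F). The first T items seed the input window;
    the remaining loss_step + 1 items are the ground-truth targets, one per step.
    """
    T = batch.size(1) - (loss_step + 1)
    window = batch[:, :T, :]                  # current input window (B, T, F)
    loss = batch.new_zeros(())                # scalar accumulator
    for i in range(loss_step + 1):            # loss_step = 0 is the ordinary one-step loss
        pred = model(window)                  # (B, F)
        loss = loss + criterion(pred, batch[:, T + i, :])
        # slide the window: drop the oldest item, append the model's own prediction
        window = torch.cat([window[:, 1:, :], pred.unsqueeze(1)], dim=1)
    return loss
```

Because each step feeds the previous prediction back in, the autograd graph grows with `loss_step`, so backpropagation covers every chained application of the model.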

```python
loss = 0
for i in range(loss_step + 1):   # loss_step = 0 reduces to the common one-step loss
    backup = Train_batch[:, 1:, :]   # back up the second through final items of the sequence batch