RNN learns within an epoch but not between epochs

Hi,
I’ve been working on an RNN model to predict the next word, but something weird is happening with the loss. It decreases within an epoch, but when a new epoch starts, it goes back to where it began. Here is my code:

for epoch in range(n_epochs):
    current_loss = 0
    current_loss_test = 0
    random.shuffle(batches)
    for i, batch in enumerate(batches):
        loss = torch.zeros(1, dtype=torch.float, requires_grad=True)
        hidden_1 = rnn_1.initHidden()
        optimizer_1.zero_grad()
        for tensor in batch[0]:  # for each word in the input sequence
            output_1, hidden_1 = rnn_1(tensor, hidden_1)
        loss = two_dim_loss(output_1, output_2, batch[1], hidden_1, hidden_2)
        loss.backward()
        optimizer_1.step()
        hidden_1s.append(hidden_1)
        current_loss += loss.item()

The graph shows the effect that I’m talking about:
[loss plot]

I guess it might come from feeding the hidden output back to the model during the epoch, while reinitializing it for each new epoch.
How were you planning on handling the hidden state during testing?
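Just to illustrate what I mean by the two ways of handling the hidden state between iterations (a rough, generic sketch, not your model or your loss):

import torch
import torch.nn as nn

# Generic placeholder RNN, not your rnn_1
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

# Option A: start from zeros for every sequence
for _ in range(3):
    x = torch.randn(1, 4, 8)          # one sequence of 4 time steps
    hidden = torch.zeros(1, 1, 16)    # fresh hidden state each time
    out, hidden = rnn(x, hidden)

# Option B: carry the hidden state over, but detach it so the graph
# from previous iterations is not kept alive
hidden = torch.zeros(1, 1, 16)
for _ in range(3):
    x = torch.randn(1, 4, 8)
    out, hidden = rnn(x, hidden.detach())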

Hi @ptrblck, thank you for your answer.
I’m feeding the hidden output back into the model because the idea is to use 4 words to predict the next one. To do that, I pass the first word together with a hidden state of zeros, then pass the next word together with the hidden output, and repeat that process for the 4 words. Is there a better way to do it?
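To make that concrete, this is roughly the per-sample unrolling I mean (a simplified, self-contained sketch; the embedding, GRUCell, sizes, and loss are just placeholders for my real rnn_1 and two_dim_loss):

import torch
import torch.nn as nn

# One sample: 4 context words -> predict the 5th word.
# All layers and sizes here are placeholders, not my actual model.
vocab_size, hidden_size = 100, 32
embed = nn.Embedding(vocab_size, hidden_size)
cell = nn.GRUCell(hidden_size, hidden_size)
decoder = nn.Linear(hidden_size, vocab_size)
criterion = nn.CrossEntropyLoss()

context = torch.tensor([3, 17, 42, 8])   # indices of the 4 context words
target = torch.tensor([55])              # index of the word to predict

hidden = torch.zeros(1, hidden_size)     # hidden state starts as zeros
for idx in context:                      # feed each word plus the previous hidden back in
    hidden = cell(embed(idx).unsqueeze(0), hidden)
output = decoder(hidden)                 # prediction for the next (5th) word
loss = criterion(output, target)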
The problem is, I do that process (feeding the hidden output back in) for each batch within an epoch, and the issue only appears when I change epochs. The only thing I do when changing epochs is set current_loss back to 0 (just for printing, nothing important). I really don’t know what it could be. Do you have any idea?