RNN question (trying to backward through graph)

I am training an RNN.
The network relies on a “hidden” RNN state variable that is saved from cycle to cycle.
I guess that when I call loss.backward(), it will backpropagate through all N cycles, where N is the number of cycles since the hidden RNN state was initialized.

Is it possible to have loss.backward() work only on the last cycle, even if I don’t reinitialize the hidden state variable?

Hi,

If you only need to backprop through the last cycle, you can .detach() the hidden state between cycles. That way, no gradient will flow back to the previous cycles.
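
A minimal sketch of what that looks like in a training loop (the model, dimensions, and loss here are made up for illustration; the only relevant part is the hidden.detach() at the end of each cycle):

```python
import torch
import torch.nn as nn

# Toy setup: a single-layer RNN plus a linear readout (placeholder sizes).
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
readout = nn.Linear(16, 1)
optimizer = torch.optim.SGD(
    list(rnn.parameters()) + list(readout.parameters()), lr=0.01
)
criterion = nn.MSELoss()

hidden = None  # first call initializes the hidden state to zeros

for step in range(100):
    x = torch.randn(4, 10, 8)    # dummy batch: (batch, seq, features)
    target = torch.randn(4, 1)   # dummy target

    out, hidden = rnn(x, hidden)             # hidden carries state across cycles
    loss = criterion(readout(out[:, -1]), target)

    optimizer.zero_grad()
    loss.backward()              # only backprops through this cycle's graph
    optimizer.step()

    # Cut the graph so the next backward() does not reach previous cycles,
    # while still passing the hidden state's values forward.
    hidden = hidden.detach()
```

If you are using an LSTM instead, the hidden state is a tuple (h, c), so detach both elements: hidden = (hidden[0].detach(), hidden[1].detach()).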