Hello,
Thank you for the quick reply, but I'm not sure I follow, sorry! To clarify: I want to use the updated hidden states in subsequent RNN steps while still being able to backpropagate through the network at regular intervals (i.e., in an online learning fashion). So I do initialize the states before training starts (as the example above illustrates), but I'd like to reuse the updated states in subsequent forward passes without reinitializing them or detaching them from the graph on every forward pass, if that makes sense. If you have a solution for this, a quick code snippet would be really helpful! Thank you for your time!
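For concreteness, here is a rough sketch of the kind of loop I have in mind (the model, sizes, and the interval `K` are just placeholders, not my actual code): the hidden state is created once, carried across forward passes, and only detached at the points where I actually call `backward()`.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder model: single-layer RNN plus a linear readout.
rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
head = nn.Linear(8, 1)
opt = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=0.01)

h = torch.zeros(1, 1, 8)  # initialized once, before training commences
losses = []
K = 5  # backprop every K steps (truncated BPTT)

for step in range(20):
    x = torch.randn(1, 1, 4)  # one incoming sample (online setting)
    y = torch.randn(1, 1, 1)
    out, h = rnn(x, h)        # reuse the updated hidden state
    losses.append((head(out) - y).pow(2).mean())

    if (step + 1) % K == 0:
        opt.zero_grad()
        torch.stack(losses).sum().backward()
        opt.step()
        losses = []
        h = h.detach()  # cut the graph only at the interval boundary
```

In other words, the state persists between forward passes, and the graph is only severed at the backprop interval rather than on every step.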
Andrei