Neural language processing with one-time hidden state initialization

Hi everyone,

Could anyone help me train a model in PyTorch where the hidden state is initialized to zero only once, at the first forward pass, and is not reset to zero for each sequence? A simple code example would be highly appreciated!

You can have the forward function return the hidden state along with the output, and then pass it back in on every call:

def forward(self, input, hidden):
    # ... compute output and the new hidden state ...
    return output, hidden

Then each time you can just pass the returned hidden state back into the forward function.
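As a sketch of that pattern (the module name and the sizes are illustrative assumptions, not from the thread), a minimal RNN wrapper whose forward both accepts and returns the hidden state might look like:

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    # Hypothetical module; all sizes are illustrative.
    def __init__(self, input_size=8, hidden_size=16, output_size=4):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, input, hidden):
        # Take the previous hidden state in; hand the new one back out.
        out, hidden = self.rnn(input, hidden)
        return self.fc(out), hidden
```

The caller keeps `hidden` alive between calls, so the state persists across sequences instead of being rebuilt inside the model.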

Thanks, so this way the hidden state is treated as an input.

You should initialize it outside of your epoch loop, like this:

hidden = ...  # your hidden initialization function or variable
for e in range(epochs):
    ...
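A fuller, self-contained version of that loop might look like the sketch below (the toy model, sizes, and random data are all assumptions for illustration). One detail the thread doesn't mention: if you carry the hidden state across batches, you usually want to `detach()` it each iteration so gradients don't flow back through the entire history, which would otherwise raise a "backward through the graph a second time" error.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup; all sizes and data are illustrative assumptions.
batch_size, seq_len, input_size, hidden_size = 2, 5, 8, 16
rnn = nn.RNN(input_size, hidden_size, batch_first=True)
fc = nn.Linear(hidden_size, input_size)
optimizer = torch.optim.SGD(
    list(rnn.parameters()) + list(fc.parameters()), lr=0.01
)
criterion = nn.MSELoss()

epochs = 3
# Initialized to zero exactly once, outside the epoch loop.
hidden = torch.zeros(1, batch_size, hidden_size)

for e in range(epochs):
    for _ in range(4):  # stand-in for iterating over real sequences
        x = torch.randn(batch_size, seq_len, input_size)
        target = torch.randn(batch_size, seq_len, input_size)
        hidden = hidden.detach()  # keep the values, cut the old graph
        out, hidden = rnn(x, hidden)
        loss = criterion(fc(out), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The `detach()` call is what makes the carried-over state safe to use for truncated backpropagation through time: the values persist, but each batch's backward pass stops at the batch boundary.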

Super clear, thanks. I hope this resolves it.