LSTM hidden state

Can I use def __init__hidden(self, batch) instead:

def __init__hidden(self, batch):
    hidden = torch.randn(1, batch, 512).to(device)
    return hidden

and pass in the hidden state that was obtained from the previous sequence? That is, not create a new hidden state, but reuse the previous one each time? The point is that I am working with a continuous sequence, and it seems to me that the hidden state should be continuous as well.

And another question: in my version, is it acceptable to initialize with randn, or does it have to be zeros?
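
For reference, this is the kind of initializer I mean, as a sketch with a switch between the two options (the init_hidden name and the random_init flag are just illustrative; the shape assumes a single-layer GRU with hidden size 512, as in the code further down):

    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    def init_hidden(batch, hidden_size=512, random_init=False):
        # Shape is (num_layers, batch, hidden_size) for a single-layer GRU
        if random_init:
            return torch.randn(1, batch, hidden_size).to(device)
        return torch.zeros(1, batch, hidden_size).to(device)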

I’m not sure I understand the question clearly, but it seems you would like to initialize the hidden state once and then just use it for the whole training?

Yes, initialize once per epoch, and do not reset it once per batch.

Would it be similar to this example?

What do you mean by “do not reset it once per batch”?
What hidden state should be passed instead for the next data batch?

Take a look:

def forward(self, out):
    batch = out.size(0)                  # batch size of the incoming data
    out = self.fc1(out)
    out = torch.transpose(out, 0, 1)     # (batch, seq, feat) -> (seq, batch, feat)
    hidden = self.__init__hidden(batch)  # a fresh hidden state on every call
    out, hidden = self.gru(out, hidden)
    out = self.fc3(hidden)
    out = out.reshape(batch, 3)
    return out

def __init__hidden(self, batch):
    hidden = torch.randn(1, batch, 512).to(device)
    return hidden

When I call outputs = net(wn), forward is triggered.
This happens for every batch of sequences.

And each time the hidden variable is initialized by calling __init__hidden. I want __init__hidden to run only the first time; on the second and subsequent passes, hidden should be carried over from the previous sequence (out, hidden = self.gru(out, hidden)).

You can use a constant to remember the hidden state for the next sequence, e.g. an attribute on the module rather than a learnable parameter. Since it is not a parameter, it will not be updated by the optimizer.
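
For example, a minimal sketch of that idea, storing the hidden state on the module and detaching it so gradients do not flow back into earlier batches. The Net class name, the in_features argument, and the reset_hidden helper are assumptions for illustration; the sizes match the snippet above:

    import torch
    import torch.nn as nn

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    class Net(nn.Module):
        def __init__(self, in_features, hidden_size=512, out_features=3):
            super().__init__()
            self.fc1 = nn.Linear(in_features, hidden_size)
            self.gru = nn.GRU(hidden_size, hidden_size, num_layers=1)
            self.fc3 = nn.Linear(hidden_size, out_features)
            self.hidden = None  # carried over between forward calls

        def reset_hidden(self):
            # Call once per epoch to start from a fresh state
            self.hidden = None

        def forward(self, out):
            batch = out.size(0)
            out = self.fc1(out)
            out = torch.transpose(out, 0, 1)  # (batch, seq, feat) -> (seq, batch, feat)
            # Initialize only on the first pass (or if the batch size changes)
            if self.hidden is None or self.hidden.size(1) != batch:
                self.hidden = torch.zeros(1, batch, self.gru.hidden_size).to(device)
            out, hidden = self.gru(out, self.hidden)
            # Keep the values but cut the graph, so backprop stays within this batch
            self.hidden = hidden.detach()
            out = self.fc3(hidden)
            return out.reshape(batch, -1)

With something like this, calling net.reset_hidden() at the start of each epoch gives the once-per-epoch initialization described above, while every batch in between continues from the previous hidden state.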