How to deal with hidden states of an LSTM?

Hello guys, I am new to PyTorch and ML in general, and I have a question that I haven't been able to find the answer to. If any of you would be kind enough to help me with this, I would really appreciate it.

I have a time series in which each date is correlated with the previous one, and based on that I am trying to predict action 1 and action 2. The problem is that I am not sure how to deal with the hidden state. As far as I understand, the hidden state is basically all the information the LSTM has learned from previous experience, and it should use that previous knowledge for future predictions. So, assuming I understand it correctly, my questions are:

  1. When do I reset the hidden state? If I reset it at the end of each episode, does that mean my LSTM forgets all its previous knowledge, so all the previous work was for nothing and it has to do it all over again?
  2. How do I deal with multiple batches? Suppose I initialize my hidden state like this:
    torch.zeros(2, 300, 3). That 300 is the batch size, so if I try to evaluate my model with just one sequence instead of 300, it crashes because the shapes don't match (see the sketch below). How should I approach this? Every example and tutorial I've seen just reinitializes the hidden state with zeros and the proper batch_size, but if I do that, doesn't it mean I am going to forget all the previous experience?
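
To make the shapes concrete, here is a minimal sketch of what I mean (the input size, sequence length, and all the concrete numbers are just placeholders, not my real code):

```python
import torch
import torch.nn as nn

# Placeholder model: 2 layers, hidden size 3, 10 input features per time step.
lstm = nn.LSTM(input_size=10, hidden_size=3, num_layers=2, batch_first=True)

# Hidden state initialized for a batch of 300 sequences:
# shape is (num_layers, batch_size, hidden_size).
h0 = torch.zeros(2, 300, 3)
c0 = torch.zeros(2, 300, 3)

x_train = torch.randn(300, 50, 10)          # 300 sequences, 50 time steps, 10 features
out, (h_n, c_n) = lstm(x_train, (h0, c0))   # works, h_n/c_n keep the batch dim of 300

# At evaluation time I only have a single sequence. Reusing the carried-over
# state crashes because its batch dimension is still 300:
x_eval = torch.randn(1, 50, 10)
# out, _ = lstm(x_eval, (h_n, c_n))         # RuntimeError: hidden state shape mismatch

# And if I instead reinitialize with zeros for the new batch size, like the
# tutorials do, I'm not sure whether I'm throwing away the learned "experience":
h0_eval = torch.zeros(2, 1, 3)
c0_eval = torch.zeros(2, 1, 3)
out, _ = lstm(x_eval, (h0_eval, c0_eval))   # runs, but is the previous knowledge lost?
```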