Batching for RNN

Hi everyone,
I have time-series measurement data and want to model the behavior of the device using RNNs. I want to train using LSTM and GRU. I am not sure how to feed my data to the training loop, so I'm asking for some guidance on that.

As mentioned, my data is time-series measurement data. It is complex-valued, and I want to feed my RNN the following versions of it: the real part, the imaginary part, and the absolute value. For each of those, I want both the current sample and delayed versions up to a memory depth m. I stack everything on top of each other, so the generated data tensor has shape

(300000, 3 * (m+1))
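To make the construction concrete, here is a minimal sketch of how I build that feature matrix (the signal here is random stand-in data, and the names are illustrative, not my actual code):

```python
import numpy as np

# Stand-in for the complex-valued measurement signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(300_000) + 1j * rng.standard_normal(300_000)

m = 4  # memory depth

# For each delay d = 0..m, shift the signal by d samples.
# (np.roll wraps around at the start; in the real data the first m
# samples would be handled explicitly.)
delayed = np.stack([np.roll(x, d) for d in range(m + 1)], axis=1)  # (N, m+1)

# Stack real part, imaginary part and magnitude side by side.
features = np.concatenate(
    [delayed.real, delayed.imag, np.abs(delayed)], axis=1
)  # (N, 3*(m+1))

print(features.shape)  # (300000, 15) for m = 4
```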

Now I create a dataloader from that via DataLoader(dataset, batch_size=batch_size, shuffle=False), with batch_size = 1000. During training I therefore pass a tensor of shape (1000, 3*(m+1)) to the LSTM, which is configured with input_size=3*(m+1) and hidden_size=h. My understanding is that this is equivalent to splitting my measurement data into sequences of length 1000 and training the weights on one such sequence before passing in the next 1000 samples.
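For reference, this is roughly my current setup (again with random stand-in data and an assumed regression target):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

m = 4
data = torch.randn(300_000, 3 * (m + 1))  # stand-in for the feature tensor
target = torch.randn(300_000, 1)          # hypothetical per-sample target

dataset = TensorDataset(data, target)
loader = DataLoader(dataset, batch_size=1000, shuffle=False)

# Each batch is a flat 2D tensor with no explicit sequence dimension.
xb, yb = next(iter(loader))
print(xb.shape)  # torch.Size([1000, 15])
```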
From the LSTM — PyTorch 2.0 documentation, however, I see that the expected input to the LSTM has shape (N, L, H), which in my case would be (1000, ??, 3*(m+1)), but I am unsure what to do with L, the sequence length. As you can see, I am confused about how to handle my data, so any help is appreciated!
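To show where I'm stuck, this is my attempt at matching the documented (N, L, H) layout by reshaping the flat data into sequences of some chosen length L (the value seq_len = 100 here is an arbitrary guess, and the data is random stand-in data):

```python
import torch
import torch.nn as nn

m, h = 4, 32
feat = 3 * (m + 1)                  # 15 input features per time step
seq_len = 100                       # hypothetical choice for L
data = torch.randn(300_000, feat)   # stand-in for the measurement tensor

# Cut the flat (N_total, feat) tensor into non-overlapping sequences:
# (num_sequences, L, feat), matching (N, L, H) with batch_first=True.
n_seq = data.shape[0] // seq_len
sequences = data[: n_seq * seq_len].reshape(n_seq, seq_len, feat)

lstm = nn.LSTM(input_size=feat, hidden_size=h, batch_first=True)
out, (h_n, c_n) = lstm(sequences[:8])  # mini-batch of 8 sequences
print(out.shape)  # torch.Size([8, 100, 32])
```

But I don't know whether chopping the series up like this is the right interpretation of L for my kind of data.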

Feel free to ask for more details, and please also point me to other resources where my question might already be answered. Thanks in advance!