LSTMCell batch size confusion

Hi all,

I have a dataset that is fed into my model with batch_size = 32. Since this is an NLP problem, the input samples are sentences.
I have padded each sentence to 60 words so that the dimensions are consistent while batching, and each word is represented by a 300-dimensional embedding.
The input shape is therefore (32, 60, 300).

I am trying to feed this whole thing into an LSTMCell, which according to the PyTorch documentation takes a tensor of shape (batch, input_size). I am confused about how the input tensor should be reshaped or sliced ((32, 300) or (60, 300)) in order to feed it to the LSTMCell.

If I iterate over the tensor, feeding sentences one by one, the input is still of shape (60, 300); but 60 is not the batch size here (which it should be, according to the PyTorch docs for LSTMCell). This is the exact confusion.
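My current guess is that I should slice along the time dimension instead, so that each step passed to the cell has shape (batch, input_size) = (32, 300). Here is a minimal sketch of what I mean (hidden size 128 is an arbitrary choice of mine, not from my actual model):

```python
import torch
import torch.nn as nn

batch_size, seq_len, emb_dim, hidden_size = 32, 60, 300, 128

# dummy batch standing in for my embedded, padded sentences
x = torch.randn(batch_size, seq_len, emb_dim)  # (32, 60, 300)

cell = nn.LSTMCell(input_size=emb_dim, hidden_size=hidden_size)

# initial hidden and cell states, one row per sample in the batch
h = torch.zeros(batch_size, hidden_size)
c = torch.zeros(batch_size, hidden_size)

# iterate over the 60 time steps; each slice x[:, t, :] is (32, 300)
for t in range(seq_len):
    h, c = cell(x[:, t, :], (h, c))

print(h.shape)  # final hidden state for every sentence in the batch
```

Is this the right way to use LSTMCell here, i.e. looping over time steps rather than over sentences?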