Hello, an LSTM in PyTorch expects a batch dimension in the input, hidden state, and cell state. According to the documentation, they all have to have the following shapes:
input of shape (seq_len, batch_size, input_size)
hidden of shape (num_layers * num_directions, batch_size, hidden_size)
cell of shape (num_layers * num_directions, batch_size, hidden_size)
As you can see above, if the input comes in batches, the hidden and cell states must be batched as well, because each sequence in the batch carries its own hidden and cell state.
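A minimal sketch of these shapes (the sizes here are arbitrary, chosen just for illustration):

```python
import torch
import torch.nn as nn

seq_len, batch_size, input_size = 5, 3, 10   # arbitrary example sizes
hidden_size, num_layers = 20, 2
num_directions = 1                            # unidirectional LSTM

lstm = nn.LSTM(input_size, hidden_size, num_layers)

# input, hidden, and cell all share the same batch_size dimension
x  = torch.randn(seq_len, batch_size, input_size)
h0 = torch.zeros(num_layers * num_directions, batch_size, hidden_size)
c0 = torch.zeros(num_layers * num_directions, batch_size, hidden_size)

output, (hn, cn) = lstm(x, (h0, c0))
print(output.shape)  # torch.Size([5, 3, 20]) -> (seq_len, batch_size, hidden_size)
print(hn.shape)      # torch.Size([2, 3, 20]) -> (num_layers, batch_size, hidden_size)
```

If the batch sizes of `x`, `h0`, and `c0` disagree, PyTorch raises a runtime error, which is exactly why all three must be batched together.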
Now, coming back to your first question: yes, setting batch_size gives you mini-batches. For example, if batch_size is 3, then each input is a group of 3 sentences, such as I love PyTorch, I love Keras, I love NLP.
Setting batch_size = 1 means each input contains just one sentence.
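To make the sentence example concrete, here is one way (a toy sketch; the vocabulary and embedding size are made up) to pack those three sentences into a single batch:

```python
import torch
import torch.nn as nn

# Hypothetical toy vocabulary covering the three example sentences
vocab = {"I": 0, "love": 1, "PyTorch": 2, "Keras": 3, "NLP": 4}
sentences = [["I", "love", "PyTorch"],
             ["I", "love", "Keras"],
             ["I", "love", "NLP"]]

# Token ids arranged as (seq_len=3, batch_size=3): row t holds token t of every sentence
ids = torch.tensor([[vocab[s[t]] for s in sentences] for t in range(3)])

embed = nn.Embedding(len(vocab), 8)   # embedding size 8 is arbitrary
x = embed(ids)                        # (3, 3, 8) = (seq_len, batch_size, input_size)

lstm = nn.LSTM(8, 16)                 # hidden_size 16, also arbitrary
out, (h, c) = lstm(x)                 # hidden/cell default to zeros of shape (1, 3, 16)
print(out.shape)                      # torch.Size([3, 3, 16])
```

With batch_size = 1 you would instead feed one sentence at a time, so `x` would have shape (3, 1, 8).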