Converting a model to accept batched input

Hi! I am wondering if it makes sense to feed the complete mini-batch to the model at once during training.
This might increase GPU utilization for RNNs?
Or does PyTorch do this automatically if it detects a loop around your model?

If so, is there a way to convert a model that takes a single sequence into a model that takes a batch of sequences?

Do you have to split and concatenate the tensors flowing through your model?
How do you achieve that with nn.Linear and nn.LSTM, for instance?

Thank you very much!

I'm not sure I clearly understand: did you set a batch size in your DataLoader?

Yes, that’s why I need my model to eat batches.

Ok, I think I got it. Most of the built-in torch modules support both single samples and batches as input out of the box. :wink:
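To illustrate (a minimal sketch with made-up dimensions): nn.Linear applies to the last dimension and accepts any number of leading dimensions, and nn.LSTM accepts a batch of sequences directly, e.g. with batch_first=True so the input is shaped (batch, seq_len, input_size).

```python
import torch
import torch.nn as nn

# nn.Linear operates on the last dimension, so it handles
# a single sample and a batch with the same module.
linear = nn.Linear(10, 5)
single = torch.randn(10)       # one sample: (in_features,)
batch = torch.randn(32, 10)    # batch of 32: (batch, in_features)
print(linear(single).shape)    # torch.Size([5])
print(linear(batch).shape)     # torch.Size([32, 5])

# nn.LSTM with batch_first=True expects (batch, seq_len, input_size),
# so a whole mini-batch of sequences goes through in one call.
lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
seq_batch = torch.randn(32, 15, 10)   # 32 sequences of 15 steps
output, (h_n, c_n) = lstm(seq_batch)
print(output.shape)  # torch.Size([32, 15, 20]) - all hidden states
print(h_n.shape)     # torch.Size([1, 32, 20])  - final hidden state
```

So there is usually no need to split and re-concatenate tensors yourself; the batch dimension just passes through.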