Batching character-level tensors with token-level tensors

Hi,

I’m trying to train an NLP model that has both a character-level LSTM and a token-level LSTM. The embeddings produced by these two LSTMs are concatenated and then passed to another LSTM.
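
To make the setup concrete, here’s a minimal sketch of the kind of architecture I mean (assuming PyTorch; the class name, dimensions, and layer sizes are just placeholders):

```python
import torch
import torch.nn as nn

class CharTokenModel(nn.Module):
    """Char-level LSTM + token-level LSTM, concatenated and fed to a third LSTM."""

    def __init__(self, char_vocab, token_vocab, char_dim=25, token_dim=100, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        self.token_emb = nn.Embedding(token_vocab, token_dim, padding_idx=0)
        self.char_lstm = nn.LSTM(char_dim, char_dim, batch_first=True)
        self.token_lstm = nn.LSTM(token_dim, hidden, batch_first=True)
        self.top_lstm = nn.LSTM(hidden + char_dim, hidden, batch_first=True)

    def forward(self, tokens, chars):
        # tokens: (batch, seq_len); chars: (batch, seq_len, word_len)
        b, s, w = chars.shape
        # run the char LSTM over every word and keep its final hidden state
        _, (h, _) = self.char_lstm(self.char_emb(chars.view(b * s, w)))
        char_repr = h[-1].view(b, s, -1)           # (batch, seq_len, char_dim)
        tok_repr, _ = self.token_lstm(self.token_emb(tokens))
        # concatenate the two representations and pass them to the top LSTM
        combined = torch.cat([tok_repr, char_repr], dim=-1)
        out, _ = self.top_lstm(combined)
        return out
```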

Would anyone know if there’s an efficient way to batch the training examples so that the token-level and character-level tensors in each batch keep the same sentences and words in the same places?
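
To make the question concrete, what I’m imagining is something like a custom collate_fn that pads both tensors together so the indices stay aligned (a sketch only, assuming PyTorch and that each dataset item is a pair of token ids plus a list of per-word char-id tensors; those names are my own):

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate(batch):
    """Pad token ids to the longest sentence and char ids to the longest
    sentence *and* longest word, so the i-th sentence / j-th word line up
    in both tensors."""
    token_seqs, char_seqs = zip(*batch)  # item: (LongTensor[seq_len], list[LongTensor[word_len]])
    tokens = pad_sequence(token_seqs, batch_first=True, padding_value=0)

    max_sent = tokens.size(1)
    max_word = max(len(word) for sent in char_seqs for word in sent)
    chars = torch.zeros(len(batch), max_sent, max_word, dtype=torch.long)
    for i, sent in enumerate(char_seqs):
        for j, word in enumerate(sent):
            chars[i, j, : len(word)] = word
    return tokens, chars

# loader = DataLoader(dataset, batch_size=32, collate_fn=collate)
```

Is something along these lines the usual approach, or is there a more efficient way (e.g. bucketing by length) that people use for this kind of two-level batching?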

Thanks in advance