User-level LSTM/Conv1d

Hi,

I'm trying to infer a person's gender from all of her/his tweets. I want to map each tweet into an embedding space and then feed the tweets into an LSTM one by one. But how do I do this with an LSTM? The input to an LSTM is shaped (seq_len, batch, input_size), i.e. (number of words in a sentence, batch size, embedding dimension). I need to get the LSTM output for each tweet of each user and process the results from there. Is there a way to do this for all users at the same time, or do I need a for loop?
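For concreteness, here is a rough sketch of the for-loop version I have in mind (PyTorch; the names and dimensions like tweet_lstm, EMB_DIM are made up, and each user is processed as a batch of one):

```python
import torch
import torch.nn as nn

EMB_DIM, HID_DIM = 100, 128                        # placeholder dimensions
tweet_lstm = nn.LSTM(EMB_DIM, HID_DIM)             # word-level encoder
user_lstm = nn.LSTM(HID_DIM, HID_DIM)              # tweet-level encoder

def encode_user(tweets):
    # tweets: list of tensors, each (seq_len, 1, EMB_DIM) of word embeddings
    tweet_vecs = []
    for tweet in tweets:
        _, (h_n, _) = tweet_lstm(tweet)            # h_n: (1, 1, HID_DIM)
        tweet_vecs.append(h_n.squeeze(0))          # keep final hidden state
    seq = torch.stack(tweet_vecs)                  # (num_tweets, 1, HID_DIM)
    _, (h_user, _) = user_lstm(seq)                # run over the tweet sequence
    return h_user.squeeze(0)                       # user representation (1, HID_DIM)
```

This works, but looping over users one at a time seems wasteful, which is why I'm asking about batching.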

My original thought was to use an LSTM to encode all of these tweets at once, with input of dimension seq_len × (batch_size · T) × embedding_dimension, where T is the number of tweets per user; that is, the batch size is temporarily set to batch_size · T. After I get the hidden representations, a 2D matrix of dimension (batch_size · T) × hidden_dimension, I can reshape it into a 3D tensor of dimension batch_size × T × hidden_dimension. But the thing is that different users have different numbers of tweets, so the solution above doesn't sound right.
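To make the reshaping idea concrete, here is a rough sketch (again with made-up names), assuming every user is padded with all-zero tweets up to a common count T_max and the padding is then masked out with pack_padded_sequence:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

EMB_DIM, HID_DIM = 100, 128                        # placeholder dimensions
tweet_lstm = nn.LSTM(EMB_DIM, HID_DIM)             # word-level encoder
user_lstm = nn.LSTM(HID_DIM, HID_DIM)              # tweet-level encoder

def encode_users(flat_tweets, num_tweets):
    # flat_tweets: (seq_len, batch_size * T_max, EMB_DIM), user-major order,
    #   with zero-padding "tweets" for users that have fewer than T_max tweets
    # num_tweets: list of true tweet counts per user, sorted in descending order
    batch_size = len(num_tweets)
    _, (h_n, _) = tweet_lstm(flat_tweets)          # h_n: (1, batch*T_max, HID_DIM)
    tweet_vecs = h_n.squeeze(0).view(batch_size, -1, HID_DIM)  # (batch, T_max, HID_DIM)
    # pack so the user-level LSTM stops at each user's real tweet count
    packed = pack_padded_sequence(tweet_vecs, num_tweets, batch_first=True)
    _, (h_user, _) = user_lstm(packed)
    return h_user.squeeze(0)                       # (batch_size, HID_DIM)
```

Is padding plus packing like this the standard way to handle the variable number of tweets per user, or is there a better approach?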