From what I understand, if we put sentences into an RNN or LSTM, the sentences should be in order, but from the output, the data is 1,4,6,2,5,3, which packs the sequences in a vertical way. If we put this packed sequence into an RNN, how does the RNN read the data?
That is right.
An LSTM is essentially a recurrent neural network: it takes the input at the first step, then the second step, ...
And it gets the input values based on the lengths.
From what I understand, my input is 1 2 3, 4 5 6, but the tensor in the data is 1,4,6,2,5,3, which is in a different order from what I expected (1 2 3 4 5 6). The batch_sizes also confuses me. Am I understanding this properly?
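For what it's worth, 1,4,6,2,5,3 is the layout `pack_padded_sequence` produces for three sequences of lengths 3, 2, and 1 (e.g. [1,2,3], [4,5], [6]); two equal-length sequences [1,2,3] and [4,5,6] would pack to [1,4,2,5,3,6]. A minimal sketch, assuming those example lengths:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Three right-padded sequences: [1,2,3], [4,5], [6] (0 is padding).
padded = torch.tensor([[1, 2, 3],
                       [4, 5, 0],
                       [6, 0, 0]])
packed = pack_padded_sequence(padded, lengths=[3, 2, 1], batch_first=True)

# The data is read column by column, i.e. time step by time step:
print(packed.data.tolist())         # [1, 4, 6, 2, 5, 3]
print(packed.batch_sizes.tolist())  # [3, 2, 1] -> 3 sequences active at step 0, 2 at step 1, 1 at step 2
```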
So it deals with data in a vertical way?
If we store the data in a vertical way and input it to the RNN, say [I like your cooking] and [I like apple], and we pack these 2 sequences, then the input to the RNN will be [I, I] at the first step, and not [I like your cooking] as the first batch. Is this the right understanding?
But if it is like this, the input order is not correct.
I think:
at the first step, the LSTM cell cannot process a whole sequence at once, so the input data is [I, I].
why do you say that the input order is not correct?
So what is the correct input order?
But if we pack these 2 sequences, the result is [I, I, love, eat, your, apple, cooking]. If we put this into the RNN, how does batch_sizes work to reconstruct the right input order? Why doesn't this function directly pack the sentences like [I, love, your, cooking, I, eat, apple]?
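A sketch of that exact example (assuming PyTorch's `pack_padded_sequence` and a made-up word-to-index mapping) shows the packed data come out as [I, I, love, eat, your, apple, cooking] with batch_sizes [2, 2, 2, 1]:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Hypothetical vocabulary, just to make the output readable.
vocab = {"<pad>": 0, "I": 1, "love": 2, "your": 3, "cooking": 4, "eat": 5, "apple": 6}
inv = {i: w for w, i in vocab.items()}

# "I love your cooking" (length 4) and "I eat apple" (length 3), right-padded.
padded = torch.tensor([[1, 2, 3, 4],
                       [1, 5, 6, 0]])
packed = pack_padded_sequence(padded, lengths=[4, 3], batch_first=True)

print([inv[i] for i in packed.data.tolist()])
# ['I', 'I', 'love', 'eat', 'your', 'apple', 'cooking']
print(packed.batch_sizes.tolist())  # [2, 2, 2, 1]
```

batch_sizes is not there to reconstruct [I, love, your, cooking, I, eat, apple]; it tells the RNN how many entries of data belong to each time step, so at step 0 it takes the first 2 entries ([I, I]), at step 1 the next 2 ([love, eat]), and so on.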
I guess I understand your point now: it deals with columns just to make sure it reads the data step by step, like [I, I]; however, the LSTM still gets the input in the order of the words, so for each sequence the final input is still [I, love, your, cooking]. batch_sizes tells how many sequences are active at each step, so it knows how to input the data. Is this the right way to understand it?
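One way to convince yourself of this (a sketch, assuming PyTorch; the vocabulary, embedding, and hidden sizes are arbitrary) is to run the packed batch through an LSTM and check that the hidden state it returns for each sequence is the output at that sequence's last real step, not at the padded length:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

torch.manual_seed(0)
embed = torch.nn.Embedding(7, 5, padding_idx=0)  # arbitrary vocab/embedding sizes
lstm = torch.nn.LSTM(input_size=5, hidden_size=8, batch_first=True)

# The same two sentences as token ids, right-padded; lengths 4 and 3.
padded = torch.tensor([[1, 2, 3, 4],
                       [1, 5, 6, 0]])
packed = pack_padded_sequence(embed(padded), lengths=[4, 3], batch_first=True)

out, (h, c) = lstm(packed)                        # the LSTM consumes the packed steps directly
unpacked, lens = pad_packed_sequence(out, batch_first=True)

# h[-1][b] is the hidden state at sequence b's last *real* step.
assert torch.allclose(h[-1][0], unpacked[0, 3])   # step 4 of "I love your cooking"
assert torch.allclose(h[-1][1], unpacked[1, 2])   # step 3 of "I eat apple"
```

So even though the data tensor is stored column by column, each sequence is still processed word by word in order, and batch_sizes is what lets the LSTM know which entries belong to which step.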