I followed the DQN tutorial and trained a CNN to play a game; now I want to switch to an LSTM, but I ran into a problem.
I found a tutorial for LSTMs, and it recommends the input shape (time sequence, batch size, features). In my code, the input to the LSTM is (4, 32, 100), where 4 is the number of consecutive frames, 32 is the batch size, and 100 is the length of a vector representing the current state.
Then I add an nn.Linear layer after the LSTM, and the input size of the linear layer is 4 * 32 * lstm_output_size, and here comes the problem. In training, the batch size is 32 -> (4, 32, 100), but in testing the batch size is 1 -> (4, 1, 100), which causes a size-mismatch error.
I tried training the LSTM with batch size 1, but it takes significantly longer. Is there any way to train the LSTM with batch size 32 and do inference with batch size 1?
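For reference, here is a minimal sketch of the setup I think should work instead (names like `DRQN`, `hidden`, and `n_actions` are placeholders, not from my actual code): the idea is to size the linear layer from the LSTM hidden size only, so the batch dimension never appears in any layer's weights.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, feat: int = 100, hidden: int = 64, n_actions: int = 4):
        super().__init__()
        # nn.LSTM with batch_first=False (the default) expects (seq, batch, feat)
        self.lstm = nn.LSTM(input_size=feat, hidden_size=hidden)
        # Linear head sized per sample: hidden -> n_actions, no batch dim baked in
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, feat)
        out, _ = self.lstm(x)        # out: (seq_len, batch, hidden)
        return self.head(out[-1])    # use last timestep -> (batch, n_actions)

model = DRQN()
q_train = model(torch.randn(4, 32, 100))  # training batch of 32
q_test = model(torch.randn(4, 1, 100))    # inference batch of 1, same weights
```

With this shape handling, the same module accepts any batch size, since both nn.LSTM and nn.Linear treat the batch dimension as free.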