That is what I thought as well, but it does not. I am printing the true label list and the predicted label list at the end, and my model does not always predict the same class.
I may have found one problem in my model that causes this: maybe it does not know where one sequence inside a batch ends and the next one starts, and it treats the whole batch as a single long sequence? I am having trouble understanding how to make my model use batch training effectively. Different sources say different things about how to implement the hidden-state reset, and none of them work for me.
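For what it's worth, here is a minimal sketch of the usual pattern, assuming PyTorch (the question doesn't name a framework, so this is an assumption). With `nn.LSTM` and `batch_first=True`, the sequences in a batch are already independent along the batch dimension, and passing no hidden state (`hx=None`) gives you a fresh zero state on every call, so nothing carries over between batches:

```python
import torch
import torch.nn as nn

# Assumed sizes, purely for illustration.
torch.manual_seed(0)
lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1, batch_first=True)

# (batch, seq_len, features): 4 independent sequences of length 10.
batch = torch.randn(4, 10, 8)

# hx defaults to None -> h0 and c0 are zeros for THIS call only,
# so the hidden state is effectively "reset" every batch, and the
# 4 sequences never see each other's state.
out, (h_n, c_n) = lstm(batch)

print(out.shape)  # per-timestep outputs: torch.Size([4, 10, 16])
print(h_n.shape)  # final hidden state:   torch.Size([1, 4, 16])
```

State only leaks between batches if you explicitly pass the previous `(h_n, c_n)` back into the next call (and without detaching it); if you are doing that, dropping the argument is the simplest fix.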