RNN On CIFAR10. The size of tensor a (10) must match the size of tensor b (100) at non-singleton dimension 1

Hello. I’m studying neural networks, and my task is to create a NN with this architecture. The problem is that this error comes up:

ValueError: Expected input batch_size (300) to match target batch_size (100).

According to the assignment, I need to use MSELoss. I tried to fix it by transposing the output values, but nothing worked. I also tried switching the loss function to CrossEntropyLoss, but no luck either. Transposing got past the error, but the accuracy did not increase and the loss did not decrease.
What could be the problem?
How do I get the correct architecture for such a trivial problem without transpositions, and how do I get it to actually improve?
It’s worth noting that the NN works on MNIST without any problems. This only happens on CIFAR10.

Link to Google Colab: https://colab.research.google.com/drive/1IIVlUoUgcpHYv3psChjLlmqyZu_XI98q?usp=sharing

You are currently flattening the input tensors such that the channel dimension would move into the batch dimension:

images = images.view(-1, seq_len, input_size)

and I guess this is what yields the shape mismatch: the 3 color channels of each CIFAR10 image are split into 3 separate samples, so a batch of 100 images becomes 300 inputs, which no longer matches the 100 targets.
Also, make sure this is indeed the use case you would like to apply, as it’s usually unwanted to treat the different color channels of an input image as separate samples in the batch.
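To illustrate, here is a minimal sketch of the shape arithmetic, assuming the MNIST-style settings `seq_len = 32` and `input_size = 32` (I don't have your exact notebook values, so treat these as placeholders). It also shows one common alternative for feeding CIFAR10 images to an RNN: treat each image row as a timestep and fold the channels into the feature dimension instead of the batch dimension.

```python
import torch

# A batch of CIFAR10 images: (batch, channels, height, width) = (100, 3, 32, 32)
images = torch.randn(100, 3, 32, 32)
seq_len, input_size = 32, 32  # assumed MNIST-style settings

# The problematic flatten: the 3 color channels spill into the batch
# dimension, turning 100 samples into 300 "samples".
bad = images.view(-1, seq_len, input_size)
print(bad.shape)  # torch.Size([300, 32, 32]) -> 300 inputs vs. 100 targets

# One channel-preserving alternative: move channels last, then merge them
# into the per-timestep feature vector (32 pixels * 3 channels = 96).
good = images.permute(0, 2, 3, 1).reshape(images.size(0), 32, 32 * 3)
print(good.shape)  # torch.Size([100, 32, 96]) -> batch size preserved
```

With this layout the RNN would need `input_size=96` rather than 32, and the batch dimension stays aligned with the labels.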