Simple Classifier size mismatch

Yes, you are correct. Well, there is still the batch norm, ReLU, and max pool in between, but more or less, yes.
It helps to calculate and trace the shape of your image tensor throughout your network.
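
If it helps, here is a minimal sketch of how you could trace the shapes. I am assuming the common two-block layout and an input of 3 x 224 x 224 here, since two rounds of stride-2 pooling then land at 56 x 56:

```python
import torch
import torch.nn as nn

# Hypothetical layer blocks, assuming the usual
# conv -> batch norm -> ReLU -> max pool pattern
layer1 = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=1, padding=2),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)
layer2 = nn.Sequential(
    nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

x = torch.randn(8, 3, 224, 224)  # batch_size = 8
out = layer1(x)
print(out.shape)                 # torch.Size([8, 16, 112, 112])
out = layer2(out)
print(out.shape)                 # torch.Size([8, 32, 56, 56])
```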
At the point your image tensor leaves self.layer2, it has the shape batch_size x 32 x 56 x 56. In this shape you could pass it straight into another conv layer, but if you want to pass it through a fully connected layer, you need to flatten the tensor first.
Your

out = out.reshape(out.size(0), -1)

does this for you.
So if you flatten your tensor, it becomes batch_size x (32 * 56 * 56), i.e. batch_size x 100352.
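
Concretely, the first fully connected layer then has to expect exactly that many input features. Continuing the sketch above (the 10 output classes are just a placeholder):

```python
# in_features must match the flattened size: 32 * 56 * 56 = 100352
fc = nn.Linear(32 * 56 * 56, 10)

out = out.reshape(out.size(0), -1)  # flatten: batch_size x 100352
print(out.shape)                    # torch.Size([8, 100352])
out = fc(out)
print(out.shape)                    # torch.Size([8, 10])
```

If the in_features of that linear layer does not match 100352, you get exactly the size mismatch error from the title.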

I explained this recently using some example code. If you want to check it out, here is a link.

If you are still not quite getting it or have any more questions, please feel free to ask!