Testing on a different dataset - Batch Size Issues

The problem is that you trained on 32x32x3 images: after the 2nd max-pool the feature map is 20x5x5 = 500 values per image, which is why you reshape with `x.view(-1, 500)`. MNIST images, however, are 28x28 (and single-channel), so after the 2nd max-pool the feature map is 20x4x4 = 320 values per image. `view(-1, 500)` then silently reinterprets the 100 x 320 = 32,000 values as `torch.Size([64, 500])`, which is why your batch size changes from 100 to 64. To use this net, which was trained on 32x32x3 SVHN images, you should resize your inputs to 32x32 (and replicate the grayscale MNIST channel to 3 channels).
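A minimal sketch of the shape arithmetic. The 5x5 kernels, 2x2 pools, and 20 output channels follow from the sizes in the question; the intermediate channel count (6) and the bilinear upsampling fix are assumptions, since the original model code isn't shown:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical reconstruction of the conv stack (intermediate width of 6
# is assumed; kernel/pool sizes are implied by the shapes in the question).
features = nn.Sequential(
    nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 20, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
)

# 32x32 input -> 20x5x5 feature map, so view(-1, 500) keeps the batch size.
svhn = torch.randn(100, 3, 32, 32)
print(features(svhn).shape)                 # torch.Size([100, 20, 5, 5])

# 28x28 input -> 20x4x4 = 320 values per image; view(-1, 500) silently
# reshapes the 100 * 320 = 32000 values into [64, 500].
small = torch.randn(100, 3, 28, 28)
print(features(small).view(-1, 500).shape)  # torch.Size([64, 500])

# Fix: upsample MNIST to 32x32 and replicate the gray channel to 3 channels.
mnist = torch.randn(100, 1, 28, 28)
fixed = F.interpolate(mnist, size=(32, 32), mode="bilinear", align_corners=False)
fixed = fixed.repeat(1, 3, 1, 1)
print(features(fixed).view(-1, 500).shape)  # torch.Size([100, 500])
```

If you are loading MNIST through torchvision, the same fix can be done in the transform pipeline (e.g. a resize to 32 plus a grayscale-to-3-channel conversion) so the model code stays untouched.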