Testing on a different dataset - Batch Size Issues

I am learning about domain adaptation.
I picked a problem where I need to train my NN on SVHN and test on MNIST.

The first issue is that SVHN is RGB while MNIST is grayscale. I resolved it by repeating the grayscale channel three times with this transform:
transforms.Lambda(lambda x: x.repeat(3, 1, 1))
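
For reference, my MNIST test pipeline looks roughly like this (the data path is a placeholder; the batch size of 100 matches the output below):

import torch
from torchvision import datasets, transforms

# Rough sketch of my MNIST test loader. ToTensor() yields a 1x28x28 tensor,
# and the Lambda repeats the single channel to get 3x28x28, matching the
# 3-channel SVHN images used for training. ('./data' is a placeholder path.)
mnist_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
])

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=False, download=True,
                   transform=mnist_transform),
    batch_size=100, shuffle=False)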

I train the model successfully, but when I test it I get the following error:
ValueError: Expected input batch_size (64) to match target batch_size (100).

Here is my network configuration:

self.conv1 = nn.Conv2d(3, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(500, 50)  # expects a flattened 500-feature vector per sample
self.fc2 = nn.Linear(50, 10)

Here is my forward method:

    def forward(self, x):
        print("1", x.shape)
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        print("2", x.shape)
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        print("3", x.shape)
        x = x.view(-1, 500)
        print("4", x.shape)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

Running it on the training set, I get:

1 torch.Size([50, 3, 32, 32])
2 torch.Size([50, 10, 14, 14])
3 torch.Size([50, 20, 5, 5])
4 torch.Size([50, 500])

But on the test set I get the following output, after which the error above is raised:

1 torch.Size([7, 3, 32, 32])
2 torch.Size([7, 10, 14, 14])
3 torch.Size([7, 20, 5, 5])
4 torch.Size([7, 500])
1 torch.Size([100, 3, 28, 28])
2 torch.Size([100, 10, 12, 12])
3 torch.Size([100, 20, 4, 4])
4 torch.Size([64, 500])

I am also attaching the train and test methods used:

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % log_interval == 0:  # log_interval is defined elsewhere in my script
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                       100. * batch_idx / len(train_loader), loss.item()))
            
def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

Any help appreciated.

The problem is that you trained on 32x32x3 images. With 32x32 inputs, the output after the second max-pool is 20x5x5 = 500 features per image, which is why x.view(-1, 500) works during training. But your MNIST images are 28x28 (28x28x3 after the channel repeat), so after the second max-pool the shape is 20x4x4 = 320 features per image. A batch of 100 such images contains 100 * 320 = 32000 values, and x.view(-1, 500) silently repartitions them into 32000 / 500 = 64 rows, which is why the batch dimension changes from 100 to 64 and no longer matches the 100 targets. You should resize your MNIST inputs to 32x32 in order to use this net, which was trained on 32x32x3 SVHN images.
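
A minimal sketch of that fix, assuming you build the MNIST transform with torchvision (keep whatever normalization you already use):

from torchvision import transforms

# Resize MNIST to 32x32 before converting and repeating channels, so the
# network sees the same 3x32x32 input shape it was trained on with SVHN.
mnist_test_transform = transforms.Compose([
    transforms.Resize((32, 32)),                     # 28x28 -> 32x32
    transforms.ToTensor(),                           # PIL image -> 1x32x32 tensor
    transforms.Lambda(lambda x: x.repeat(3, 1, 1)),  # 1x32x32 -> 3x32x32
])

With 32x32 inputs the second max-pool again produces 20x5x5 = 500 features per image, so x.view(-1, 500) keeps the batch dimension at 100 and the loss no longer sees mismatched batch sizes.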

Yes, true. Thanks Raghul.