How do my transforms affect my model?

Hey,

I don’t understand why my transformations cause my model to fail with RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x44944 and 400x120). I am generally following the classifier training tutorial (Training a Classifier — PyTorch Tutorials 1.13.0+cu117 documentation), except that I transform my data differently.

from torchvision import transforms

transform = transforms.Compose([transforms.RandomRotation(30),
                                transforms.RandomResizedCrop(224),  # crop and resize to 224x224
                                transforms.RandomHorizontalFlip(),
                                transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

Looking at the shape of my batch tensor, I don’t see why the model from the tutorial no longer runs. Do these transformations change something that is relevant to the model?
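
For reference, this is roughly how I am checking the batch shape (a minimal sketch, assuming the standard CIFAR10 loading and batch_size=4 from the tutorial):

import torch
import torchvision
from torch.utils.data import DataLoader

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=4, shuffle=True)

images, labels = next(iter(trainloader))
print(images.shape)  # torch.Size([4, 3, 224, 224]) after RandomResizedCrop(224)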

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # expects a 5x5 feature map, i.e. 32x32 inputs
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1) # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

In general, I know how to fix this error, but I don’t understand what makes my tensor so big all of a sudden. Since nothing else has changed compared to the documentation, it must have something to do with my transforms. Can someone explain what happened?

I would be happy about any help!
Cheers

The tutorial uses CIFAR10, where samples have a spatial size of 32x32, while you are resizing your samples to 224x224 and are thus increasing the number of features in the flattened activation. With a 32x32 input, conv1 (kernel size 5) followed by pooling yields a 14x14 feature map, and conv2 followed by pooling yields 5x5, so flattening gives 16 * 5 * 5 = 400 features, which is exactly fc1's in_features. With a 224x224 input the same layers produce a 53x53 map, so flattening gives 16 * 53 * 53 = 44944 features, which no longer matches the expected 400 (hence the 4x44944 vs. 400x120 shape mismatch).
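
You can verify this with a quick sketch (assuming the Net definition from your post); it traces the feature-map shape for both input sizes and shows one possible fix:

import torch

net = Net()

# 32x32 input as in the tutorial: flattens to 16 * 5 * 5 = 400
x = torch.randn(4, 3, 32, 32)
feat = net.pool(torch.relu(net.conv2(net.pool(torch.relu(net.conv1(x))))))
print(feat.shape)  # torch.Size([4, 16, 5, 5])

# 224x224 input after RandomResizedCrop(224): flattens to 16 * 53 * 53 = 44944
x = torch.randn(4, 3, 224, 224)
feat = net.pool(torch.relu(net.conv2(net.pool(torch.relu(net.conv1(x))))))
print(feat.shape)  # torch.Size([4, 16, 53, 53])

# One possible fix: match fc1 to the new flattened size
# net.fc1 = torch.nn.Linear(16 * 53 * 53, 120)

Alternatively, crop to the original size (transforms.RandomResizedCrop(32)) so the flattened activation stays at 400 features.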