Dimensional error and question about features

I’m trying to build a network for my thesis that splits VGG into three modules (smaller networks) and trains it on the CIFAR10 dataset. I’m kind of new to PyTorch, so please be kind. I’ve probably made some huge mistakes, so it would be great to know the proper way to fix them and get a better understanding of the field.

My data loaders:

import torch
import torchvision
import torchvision.transforms as transforms

b_size = 16  # batch size (not shown in the original post; 16 matches the [1024, 10] shape in the error below)

# INPUT DATA
transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize(
        (0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)

trainset = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(
    trainset, batch_size=b_size, shuffle=True)
testset = torchvision.datasets.CIFAR10(
    root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(
    testset, batch_size=b_size, shuffle=True)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')  # noqa: E501

My network so far (only the first module/subnet):

import torch.nn as nn
import torch.nn.functional as F


class Module1(nn.Module):

  def __init__(self):
    super(Module1, self).__init__()
    self.conv1 = nn.Conv2d(3, 32, 5, 1, 2)
    self.conv2 = nn.Conv2d(32, 64, 5, 1, 2)

    self.conv3 = nn.Conv2d(64, 128, 5, 1, 2)
    self.conv4 = nn.Conv2d(128, 128, 5, 1, 2)

    self.pool = nn.MaxPool2d(2, 2)
    self.fc = nn.Linear(128, 10)

    self.features = nn.Sequential(*list(self.children())[:-1])

  def forward(self, x):
    x = self.pool(F.relu(self.conv2(F.relu(self.conv1(x)))))
    x = self.pool(F.relu(self.conv4(F.relu(self.conv3(x)))))
    x = x.view(-1, 128) # reshape
    x = self.fc(x)
    self.features = self.features(x)
    return x

Optimizer and loss function, after instantiating the first module with mod1 = Module1().to(device) (I will need to implement my own loss function later on, but for now I’m using CrossEntropyLoss):

import torch.optim as optim

optimizer = optim.Adam(mod1.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()

Then, when I try to iterate:

for epoch in range(epochs):
  for i, data in enumerate(trainloader, 0):

      inputs, labels = data[0].to(device), data[1].to(device)
      optimizer.zero_grad()

      x = mod1(inputs) # Error HERE
      z = mod1.features(x.detach())
      loss = criterion(x, labels)  # loss on the module's output, not on the raw inputs
      loss.backward()
      optimizer.step()

I get this error:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight 32 3 5 5, but got 2-dimensional input of size [1024, 10] instead

Now, I do not know how to fix this dimensional issue, because my samples are supposed to be 3x32x32.
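
I checked one batch from the loader, and the shapes look as expected:

images, labels = next(iter(trainloader))
print(images.shape)  # torch.Size([b_size, 3, 32, 32])
print(labels.shape)  # torch.Size([b_size])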

Also, I am not sure at all about the ‘features’ attribute in my network. My supervisor told me to use one, and since this is a hand-made network I tried to emulate what torchvision’s VGG does… but my code has yet to reach the line where I actually use it, so who knows, maybe it works?
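
I think this is the pattern I was trying to imitate: torchvision’s VGG models expose their convolutional layers as a features submodule:

import torchvision.models as models

vgg = models.vgg16()
print(vgg.features)    # nn.Sequential containing the conv/ReLU/pool layers
print(vgg.classifier)  # nn.Sequential containing the fully connected head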

The error is raised because you are feeding the output tensor back into the complete model via self.features.
Remove self.features, as it isn’t doing anything useful at the moment.
Once this bug is fixed, you’ll run into a shape mismatch, since your activation won’t have 128 features going into self.fc: after the second pooling the activation is [batch_size, 128, 8, 8], so flattening it yields 128 * 8 * 8 = 8192 features per sample.
You can check the intermediate shapes by adding print statements after each layer in your forward method.
Also, I would recommend using x = x.view(x.size(0), -1) instead of x.view(-1, 128): keeping the batch dimension explicit makes a shape mismatch fail immediately in the linear layer instead of silently producing a wrong batch size, so you detect these errors earlier.
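
Something like this should work (a minimal sketch of the fix, assuming 32x32 CIFAR-10 inputs, so two rounds of 2x2 max pooling leave an 8x8 feature map):

class Module1(nn.Module):

  def __init__(self):
    super(Module1, self).__init__()
    self.conv1 = nn.Conv2d(3, 32, 5, 1, 2)
    self.conv2 = nn.Conv2d(32, 64, 5, 1, 2)
    self.conv3 = nn.Conv2d(64, 128, 5, 1, 2)
    self.conv4 = nn.Conv2d(128, 128, 5, 1, 2)
    self.pool = nn.MaxPool2d(2, 2)
    # 32x32 -> 16x16 -> 8x8 over the two pooling stages
    self.fc = nn.Linear(128 * 8 * 8, 10)

  def forward(self, x):
    x = self.pool(F.relu(self.conv2(F.relu(self.conv1(x)))))
    x = self.pool(F.relu(self.conv4(F.relu(self.conv3(x)))))
    x = x.view(x.size(0), -1)  # flatten, keeping the batch dimension
    x = self.fc(x)
    return x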

Yes, it works now! And I solved the second problem by checking your answer here (x.view(x.size(0), -1) indeed).

Now, for the next module I need to stop the current gradient, since I want to use local backprop only. Is the correct approach to just use detach()?

Yes, detach() will stop the gradient from flowing back into any of the preceding parameters.
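
For example (a minimal sketch, assuming a hypothetical second module mod2 with its own criterion2 and optimizer2; none of these names are from your code):

# Local backprop: each module is trained by its own loss,
# and detach() cuts the autograd graph between modules.
optimizer.zero_grad()
out1 = mod1(inputs)
loss1 = criterion(out1, labels)
loss1.backward()            # reaches only mod1's parameters
optimizer.step()

optimizer2.zero_grad()
out2 = mod2(out1.detach())  # gradient cannot flow back into mod1
loss2 = criterion2(out2, labels)
loss2.backward()            # reaches only mod2's parameters
optimizer2.step()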
